AUTHOR=Wullenkord Ricarda, Eyssel Friederike TITLE=Diversity Training With Robots: Perspective-Taking Backfires, While Stereotype-Suppression Decreases Negative Attitudes Towards Robots JOURNAL=Frontiers in Robotics and AI VOLUME=9 YEAR=2022 URL=https://www.frontiersin.org/journals/robotics-and-ai/articles/10.3389/frobt.2022.728923 DOI=10.3389/frobt.2022.728923 ISSN=2296-9144 ABSTRACT=
The present research investigated the effects of a diversity training intervention on robot-related attitudes, to test whether such training could help manage the diversity inherent in hybrid human-robot teams in the work context. Previous research in the human-human context has shown that stereotypes and prejudice, i.e., negative attitudes, may impair productivity and job satisfaction in teams high in diversity (e.g., regarding age, gender, or ethnicity). Relatedly, in hybrid human-robot teams, robots likely represent an “outgroup” to their human co-workers. The latter may hold stereotypes about robots as well as negative attitudes towards them. Both aspects might have detrimental effects on subjective and objective performance in human-robot interaction (HRI). In an experiment, we tested the effect of an economical and easy-to-apply diversity training intervention for use in the work context: the so-called enlightenment approach. This approach utilizes perspective-taking to reduce prejudice and discrimination in human-human contexts. We adapted this intervention to the HRI context and explored its impact on participants’ implicit and explicit robot-related attitudes. Contrary to our predictions, however, taking the perspective of a robot resulted in more negative robot-related attitudes, whereas actively suppressing stereotypes about social robots and their characteristics produced positive effects on robot attitudes. We therefore recommend considering potential pre-existing aversions to taking the perspective of a robot when designing interventions to improve human-robot collaboration in the workplace. Instead, it might be useful to provide information about existing stereotypes and their consequences, thereby making people aware of their potential biases against social robots.