AUTHOR=Chen Na, Hu Xueyan, Zhai Yanan TITLE=Effects of morality and reputation on sharing behaviors in human-robot teams JOURNAL=Frontiers in Psychology VOLUME=14 YEAR=2023 URL=https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2023.1280127 DOI=10.3389/fpsyg.2023.1280127 ISSN=1664-1078 ABSTRACT=Introduction

The relationship between robots and humans is becoming increasingly close, and as humans and robots work together, robots will become an inseparable part of work and life. Sharing, the distribution of goods between oneself and others, positions the individual as a potential beneficiary while requiring a willingness to give up some of one's own interests for others. In human teams, individual sharing behaviors are influenced by morality and reputation. However, their impact on individuals' sharing behaviors in human-robot collaborative teams remains unclear: individuals may weigh morality and reputation differently when sharing with robot partners than with human partners. In this study, three experiments were conducted using the dictator game paradigm to compare the effects and mechanisms of morality and reputation on sharing behaviors in human and human-robot teams.

Methods

Experiment 1 was conducted with 18 participants, Experiment 2 with 74 participants, and Experiment 3 with 128 participants.

Results

Experiment 1 validated the differences in human sharing behaviors when the partner agents were robots versus humans. Experiment 2 verified that moral constraints and reputation constraints affect sharing behaviors in human-robot teams. Experiment 3 further revealed the mechanism underlying these differences: reputation concern mediated the effect of moral constraint on sharing behaviors, and agent type moderated the effects of moral constraint on both reputation concern and sharing behaviors.

Discussion

The results of this study contribute to a better understanding of the interaction mechanisms of human-robot teams. In the future, the design of human-robot collaborative team rules and interaction environments can account for the potential motivations of human behavior from both morality and reputation perspectives, thereby achieving better work performance.