AUTHOR=Morillo-Mendez Lucas, Stower Rebecca, Sleat Alex, Schreiter Tim, Leite Iolanda, Mozos Oscar Martinez, Schrooten Martien G. S. TITLE=Can the robot “see” what I see? Robot gaze drives attention depending on mental state attribution JOURNAL=Frontiers in Psychology VOLUME=14 YEAR=2023 URL=https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2023.1215771 DOI=10.3389/fpsyg.2023.1215771 ISSN=1664-1078 ABSTRACT=

Mentalizing, the process by which humans infer the mental states of others, facilitates understanding and interaction in social situations. Humans also tend to adopt mentalizing strategies when interacting with robotic agents. There is an ongoing debate about how inferred mental states affect gaze following, a key component of joint attention. Although gaze from a robot induces gaze following, the impact of mental state attribution on robotic gaze following remains unclear. To address this question, we asked forty-nine young adults to perform a gaze cueing task during which mental state attribution was manipulated as follows. Participants sat facing a robot that turned its head toward a screen at its left or right. Their task was to respond to targets that appeared either on the screen the robot gazed at or on the other screen. In the baseline condition, the robot was positioned so that participants would perceive it as being able to see the screens. We expected faster response times to targets on the screen the robot gazed at than to targets on the non-gazed screen (i.e., a gaze cueing effect). In the experimental condition, the robot's line of sight was occluded by a physical barrier, such that participants would perceive it as unable to see the screens. Our results revealed gaze cueing effects in both conditions, although the effect was reduced in the occluded condition compared to the baseline. These results add to the expanding fields of social cognition and human-robot interaction by suggesting that mentalizing has an impact on robotic gaze following.