EDITORIAL article

Front. Psychol., 14 March 2024
Sec. Cognitive Science
This article is part of the Research Topic Neurocognitive Features of Human-Robot and Human-Machine Interaction

Editorial: Neurocognitive features of human-robot and human-machine interaction

Francesco Bossi1*, Francesca Ciardo2 and Ghiles Mostafaoui3

  • 1Department of Information Engineering, University of Pisa, Pisa, Italy
  • 2Department of Psychology, University of Milano-Bicocca, Milan, Italy
  • 3ETIS Laboratory, CY Cergy Paris University, ENSEA, CNRS, Cergy-Pontoise, France

The integration of machines and robots into the different contexts of our daily lives is becoming increasingly apparent. Despite notable advancements, however, the capability of machines and robots to engage humans naturally and intuitively remains limited. This Research Topic therefore investigates the "human factor" in interactions with artificial agents from both behavioral and neuroscientific perspectives. We gathered studies analyzing humans interacting with different types of machines, encompassing physical robots, computer interfaces, conversational agents (CAs), and brain-computer interface (BCI) settings.

Regarding Human-Robot Interaction (HRI), one crucial question is to what extent robots are spontaneously perceived and treated as intentional agents. Morillo-Mendez et al. addressed this question by adopting a neurocognitively inspired approach (Wiese et al., 2017). The authors analyzed the extent to which humans attribute intentionality to a robot by testing gaze-following behavior. Results showed that participants followed the gaze of a NAO robot when they believed it could potentially see the target. Interestingly, gaze following was attenuated when, from the participants' point of view, the NAO robot could not see the target. The study by Morillo-Mendez et al. thus suggests that gaze following with an anthropomorphic robot occurs when humans can attribute a mental state to it. This study belongs to a new line of research that uses robots to better understand human behavior (Wykowska, 2020).

The work by Mondellini et al. is an example of such an approach. The authors adopted an ecological approach to evaluate differences in human performance during HRI, comparing adults with autism spectrum condition (ASC) and neurotypical (NT) adults. They analyzed both qualitative and quantitative behavioral patterns during an assembly task in an industry-like, lab-based collaborative robotic cell. Quantitative results showed that most participants with ASC displayed longer waiting times for the robot than NT participants, suggesting a lower sense of urgency in attending to the robot. In addition, NT participants spent more time looking at the robot than participants with ASC. Taken together, these results highlight how individual differences are crucial factors in the design of cobots. Exploring these aspects provides valuable insights into how individuals with different needs might cope with HRI.

When considering virtual artificial agents, the field of Human-Computer Interaction (HCI) has seen growing interest in interaction with conversational agents (CAs) and in how they influence the user experience (Moussawi et al., 2021). Poivet et al. developed a text-based computer game in which participants take on the role of a detective and must converse with four different CAs that vary in role (witness or suspect) and communication style (aggressive vs. cooperative). Both role and communication style influenced participants' evaluation of the CAs in terms of intelligence and believability. In particular, the CAs' communication style played a crucial role in shaping participants' perception of aggressiveness and warmth and also influenced participants' behavior. The authors conclude that aligning the roles of CAs with their communication styles has the potential to significantly improve the user experience, and that taking parameters such as CA role, communication style, and user expectations into account can enhance the immersive and interactive nature of the narrative.

Today, AI is also extensively adopted in automated journalism for writing and broadcasting (Heiselberg et al., 2022). Yet there has been limited systematic research on differences in brain activation elicited by human versus artificial voices during newscasts. The study by Gong explored the psychophysiological effects of media in Chinese settings by comparing the electroencephalography (EEG) signals of adults exposed to different newscast agents (human or AI). Results showed greater β-band activity in the left posterior temporal lobe when participants listened to a human newscast compared to an AI-voice newscast, indicating enhanced processing when listening to a human reporter rather than an AI-generated voice.
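
As a purely illustrative sketch, and not Gong's actual analysis pipeline, β-band effects of this kind are typically quantified by estimating the EEG power spectrum and integrating it over the β range (roughly 13–30 Hz). The sampling rate, recording length, and simulated signal below are assumptions.

```python
# Illustrative sketch only: estimating beta-band (13-30 Hz) power for one EEG
# channel with Welch's method. Sampling rate, recording length, and the simulated
# signal are assumptions; a real analysis would use recorded, preprocessed EEG.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

FS = 250                                   # sampling rate in Hz (assumed)
rng = np.random.default_rng(0)
eeg = rng.standard_normal(FS * 60)         # stand-in for 60 s of single-channel EEG

freqs, psd = welch(eeg, fs=FS, nperseg=FS * 2)    # power spectral density
beta = (freqs >= 13) & (freqs <= 30)              # beta-band mask
beta_power = trapezoid(psd[beta], freqs[beta])    # integrate PSD over the band
print(f"Beta-band power: {beta_power:.4f} (arbitrary units)")
```

In practice, such band-power estimates would be computed on artifact-free epochs from the electrodes of interest and compared across conditions statistically.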

Another important application of HCI is the brain-computer interface (BCI), i.e., a direct communication pathway between the brain and an external device or computer system (Vaid et al., 2015). BCIs allow the exchange of information between the brain and external devices, typically by analyzing, classifying, and decoding EEG signals. The steady-state visual evoked potential (SSVEP) is a brain response elicited by repetitive visual stimuli presented at a constant frequency, and it is crucial in the implementation of BCI systems. A significant challenge with SSVEP-based BCIs is that most approaches do not effectively handle non-stationary, time-varying EEG signals. In their study, Yan et al. addressed this issue by introducing an unsupervised adaptive classification algorithm based on the self-similarity of signals recorded at the same stimulation frequency. The proposed algorithm continuously updates the template signals as new EEG data arrive, which is crucial for real-time BCI. The authors demonstrated the efficacy of leveraging this self-similarity within an adaptive classification framework.
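
To make the general idea concrete, the following minimal sketch, which is not Yan et al.'s algorithm but only a generic illustration under assumed parameters (sampling rate, stimulation frequencies, update rate), classifies each EEG epoch by its similarity to per-frequency templates and then refreshes the matching template as new data arrive.

```python
# Minimal, hypothetical sketch of template-based SSVEP classification with
# continuous template updating. It is NOT the algorithm of Yan et al.; it only
# illustrates exploiting the similarity of signals at the same stimulation
# frequency while refreshing templates as new EEG comes in.
import numpy as np

RNG = np.random.default_rng(0)
FS = 250                                  # sampling rate in Hz (assumed)
EPOCH = FS * 2                            # 2-second epochs (assumed)
STIM_FREQS = [8.0, 10.0, 12.0, 15.0]      # hypothetical SSVEP target frequencies
ALPHA = 0.2                               # template update rate (assumed)

def sinusoidal_reference(freq, n_samples, fs):
    """Initial template: a sinusoid at the stimulation frequency."""
    t = np.arange(n_samples) / fs
    return np.sin(2 * np.pi * freq * t)

# One template per stimulation frequency.
templates = {f: sinusoidal_reference(f, EPOCH, FS) for f in STIM_FREQS}

def classify(epoch):
    """Assign the epoch to the frequency whose template it resembles most."""
    scores = {f: abs(np.corrcoef(epoch, tpl)[0, 1]) for f, tpl in templates.items()}
    return max(scores, key=scores.get)

def update_template(freq, epoch, alpha=ALPHA):
    """Blend the new epoch into the template so it tracks non-stationary EEG."""
    templates[freq] = (1 - alpha) * templates[freq] + alpha * epoch

# Simulated usage: a noisy 10 Hz epoch is classified, then folded into its template.
t = np.arange(EPOCH) / FS
epoch = np.sin(2 * np.pi * 10.0 * t) + 0.8 * RNG.standard_normal(EPOCH)
label = classify(epoch)
update_template(label, epoch)
print(f"Predicted stimulation frequency: {label} Hz")
```

Real SSVEP decoders usually rely on multichannel methods such as canonical correlation analysis rather than a single-channel correlation; the point of the sketch is only the adaptive template-update step that keeps the classifier tracking non-stationary EEG.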

In conclusion, critical advances have recently been made in understanding different aspects of the human factor in interaction with several types of artificial agents, highlighting the benefit of adopting an interdisciplinary approach in the broad field of social interaction with artificial agents. Nevertheless, these advances open new questions that future research will need to answer, given the paramount role that AI and robots are taking in our daily lives.

Author contributions

FB: Writing – review & editing, Writing – original draft. FC: Writing – review & editing, Writing – original draft. GM: Writing – review & editing, Writing – original draft.

Funding

The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Heiselberg, L., Blom, J., and van Dalen, A. (2022). Automated news reading in the neural age: audience reception and perceived credibility of a news broadcast read by a neural voice. Journal. Stud. 23, 896–914. doi: 10.1080/1461670X.2022.2052346

Moussawi, S., Koufaris, M., and Benbunan-Fich, R. (2021). How perceptions of intelligence and anthropomorphism affect adoption of personal intelligent agents. Electron. Mark. 31, 343–364. doi: 10.1007/s12525-020-00411-w

Vaid, S., Singh, P., and Kaur, C. (2015). “EEG signal analysis for BCI interface: a review,” in 2015 Fifth International Conference on Advanced Computing & Communication Technologies (Haryana: IEEE), 143–147. doi: 10.1109/ACCT.2015.72

Wiese, E., Metta, G., and Wykowska, A. (2017). Robots as intentional agents: using neuroscientific methods to make robots appear more social. Front. Psychol. 8:1663. doi: 10.3389/fpsyg.2017.01663

Wykowska, A. (2020). Social robots to test flexibility of human social cognition. Int. J. Soc. Robot. 12, 1203–1211. doi: 10.1007/s12369-020-00674-5

Keywords: EEG, Human-Robot Interaction (HRI), Human-Computer Interaction (HCI), conversational agent (CA), brain computer interface (BCI), autism spectrum conditions (ASC), steady-state visual evoked potential (SSVEP)

Citation: Bossi F, Ciardo F and Mostafaoui G (2024) Editorial: Neurocognitive features of human-robot and human-machine interaction. Front. Psychol. 15:1394970. doi: 10.3389/fpsyg.2024.1394970

Received: 02 March 2024; Accepted: 05 March 2024;
Published: 14 March 2024.

Edited and reviewed by: Eddy J. Davelaar, University of London, United Kingdom

Copyright © 2024 Bossi, Ciardo and Mostafaoui. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Francesco Bossi, francesco.bossi@ing.unipi.it

These authors have contributed equally to this work and share first authorship
