About this Research Topic
Given the state of the art in psychology and neuroscience, there are at least two very different intuitions one might have:
On the one hand, it has been well known for decades from psychological experiments that people tend to interpret even simple moving shapes in terms of more or less human-like actions and intentions. The first intuition, then, could be that this should also apply to robots and other autonomous systems.
On the other hand, much (social) neuroscience research of the last 10-20 years, not least the discovery of the so-called mirror (neuron) system, points to the importance of embodiment and morphological differences. This might lead to the intuition that humans can more or less easily understand the behaviour of very human-like robots, but not necessarily the behaviour of, for example, autonomous lawnmowers or automated vehicles.
To what degree, and how precisely, each of these mechanisms is involved when interacting with artificial agents remains unknown. It may, for instance, depend at least in part on the human perception of the agent: previous research has shown that humans adapt their behaviour according to their beliefs about the cognitive abilities of another (even artificial) agent, and we have previously suggested that such agents need to be understood in terms of how socially interactive they are and how tool-like their purpose is.
Conversely, the same insights and intuitions are relevant for robot recognition of human intentions, which is arguably a prerequisite for pro-social behaviour and necessary for engaging in, for instance, instrumental helping or mutual collaboration. Developing robots that can interact naturally and effectively with people therefore requires the creation of systems that can perceive and comprehend intentions in other agents.
For research on human interaction with artificial agents such as robots in general, and mutual action/intention recognition in particular, it is therefore important to be clear about the theoretical framework(s) and inherent assumptions underlying technological implementations. This has further ramifications for evaluating the quality of the interaction between humans and robots (as opposed to the functioning of the robot itself).
The purpose of this interdisciplinary research topic is therefore to bring together researchers who work on the diverse aspects of human interaction with artificial agents, thus providing a forum to unite the different strands. In particular, this research topic builds on two workshops on the same theme, one held at HRI 2016 (http://intentions.xyz/hri-2016-workshop/) and the other held at RO-MAN 2016 (http://intentions.xyz/roman-2016-workshop/).
Keywords: Intention recognition, Intention communication, Human-robot interaction, Social robotics, Theory of mind, Mirror mechanisms, Social neuroscience, Social cognition, Embodied cognition, Embodied social interaction
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.