About this Research Topic
The last few decades have seen significant progress in the effort to decipher Theory of Mind (ToM), particularly at the computational level. In cognitive science, experiments with infants and children have begun to uncover the basis of an intuitive ToM, while neuroscientific investigations have started to reveal the major brain areas involved in ToM inferences. At the same time, new AI algorithms for inferring human mental states have been proposed, with better scalability prospects and more complex applications. Over the last several years, this momentum has been fueled in particular by deep learning. Despite these encouraging signs, a full understanding of human ToM remains distant, and recent work has highlighted that current machine ToM methods cannot yet scale to arbitrary levels of intelligence or model the full complexity of human values and intentions. In this Research Topic, we address the problem of how ToM (inferring human beliefs, desires, goals, and preferences) can be implemented in machines, potentially drawing inspiration from how humans seem to achieve this. Works making progress towards an understanding of how humans accomplish ToM are therefore also welcome.
This Research Topic spans the fields of artificial intelligence, cognitive science, and neuroscience. Its aims are to formulate computational proposals for cognitive science- and neuroscience-inspired Theory of Mind; to compare the strengths and limitations of Theory of Mind, Inverse Reinforcement Learning, and other reward specification methods (e.g., learning from preferences); to establish common baselines, metrics, and benchmarks; and to identify open questions. Topics of interest include, but are not limited to, theoretical proposals, computational experiments, and case studies of:
• Computational Theory of Mind
• Learning from Demonstrations
• Cognitive Models for Learning from Demonstration and Planning
• Neuroscience-inspired models of Theory of Mind
• Human-Robot Interaction
• Cooperative Inverse Reinforcement Learning (assistance games)
• Learning from Preferences
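To give a concrete flavor of the kind of computation several of these topics share, the following is a minimal, purely illustrative sketch (not drawn from any specific submission or method named in this call) of Bayesian goal inference, the core idea behind many "ToM as inverse planning" and Inverse Reinforcement Learning models. An observer watches an agent move along a line and infers which of several candidate goals the agent is pursuing, under the common assumption that the agent is Boltzmann-rational, i.e. noisily prefers actions that reduce distance to its goal. All names and parameter values here are illustrative choices, not established conventions.

```python
import math

def action_likelihood(position, action, goal, beta=2.0):
    """P(action | goal): softmax over how much each action helps reach the goal.

    beta is an (assumed) rationality parameter: higher beta means the
    agent more reliably picks the distance-reducing action.
    """
    actions = [-1, +1]  # step left or step right
    # Utility of each action = negative distance to the goal after taking it.
    weights = [math.exp(beta * -abs((position + a) - goal)) for a in actions]
    return math.exp(beta * -abs((position + action) - goal)) / sum(weights)

def infer_goal(trajectory, goals):
    """Posterior over candidate goals given observed (position, action) pairs,
    starting from a uniform prior and updating with Bayes' rule at each step."""
    posterior = {g: 1.0 / len(goals) for g in goals}
    for position, action in trajectory:
        for g in goals:
            posterior[g] *= action_likelihood(position, action, g)
        z = sum(posterior.values())
        posterior = {g: p / z for g, p in posterior.items()}
    return posterior

# An agent starting at 0 repeatedly steps right; the observer's belief
# shifts toward the rightward goal.
posterior = infer_goal([(0, +1), (1, +1), (2, +1)], goals=[-5, +5])
```

Richer models in this literature replace the one-dimensional world with Markov decision processes, replace known dynamics with learned ones, and infer beliefs and preferences jointly rather than goals alone; the underlying inference-over-rational-action structure, however, is the same.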
Keywords: Theory of Mind, Inverse Reinforcement Learning, Reward specification, Human-Machine Interaction, AI ethics, Value learning
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.