Theory of Mind (ToM), the ability of the human mind to attribute mental states to others, is a key component of human cognition. ToM encompasses inferring others' beliefs, desires, goals, and preferences. How humans perform ToM remains a fundamental unresolved scientific problem, and a true understanding of human ToM will require progress at multiple levels of analysis: computational, algorithmic, and physical. The same capability of inferring human mental states is a prerequisite for artificial intelligence (AI) to be integrated into human society. Autonomous cars, for example, will need to infer the mental states of human drivers and pedestrians in order to predict their behavior. As AI becomes more powerful and pervasive, its ability to infer human goals, desires, and intentions, even in ambiguous or novel situations, will become ever more important.
The last decades have seen significant progress in the effort to decipher ToM, particularly at the computational level. In cognitive science, experiments with infants and children have begun to uncover the basis of an intuitive ToM, while neuroscientific investigations have started to reveal the major brain areas involved in ToM inferences. At the same time, new AI algorithms for inferring human mental states have been proposed, with better scalability prospects and more complex applications; over the last several years, this momentum has been particularly fueled by deep learning. Despite these encouraging signs, a full understanding of human ToM remains distant, and recent work has highlighted the inability of current machine ToM methods to scale to arbitrary levels of intelligence or to model the full complexity of human values and intentions. In this Research Topic, we address the problem of how ToM (inferring human beliefs, desires, goals, and preferences) can be implemented in machines, potentially drawing inspiration from how humans seem to achieve it. Works making progress towards an understanding of how humans accomplish ToM are therefore also welcome.
This Research Topic spans the fields of artificial intelligence, cognitive science, and neuroscience. Its aims are to formulate computational proposals for cognitive science- and neuroscience-inspired Theory of Mind, to compare the strengths and limitations of Theory of Mind, Inverse Reinforcement Learning, and other reward specification methods (e.g., learning from preferences), to establish common baselines, metrics, and benchmarks, and to identify open questions. Topics of interest include, but are not limited to, theoretical proposals, computational experiments, and case studies of:
• Computational Theory of Mind
• Learning from Demonstrations
• Cognitive Models for Learning from Demonstration and Planning
• Neuroscience-inspired models of Theory of Mind
• Human-Robot Interaction
• Cooperative Inverse Reinforcement Learning (assistance games)
• Learning from Preferences
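To make the computational framing above concrete, the following is a minimal sketch of Bayesian inverse planning, one standard way computational ToM and Inverse Reinforcement Learning formalize goal inference from observed behavior. The scenario (an agent on a 1-D line, two candidate goals, Boltzmann-rational actions) and all names and parameters are illustrative assumptions, not taken from any specific submission or paper.

```python
import math

# Illustrative Bayesian inverse planning: infer a posterior over an agent's
# goal from observed actions, assuming the agent acts Boltzmann-rationally
# (softmax over how much each action reduces distance to the goal).

GOALS = [0, 9]       # candidate goal positions on a 1-D line (hypotheses)
ACTIONS = [-1, +1]   # move left or move right
BETA = 2.0           # rationality parameter: higher = more deterministic agent

def action_likelihood(state, action, goal):
    """P(action | state, goal) under a softmax on negative distance-to-goal."""
    def score(a):
        return -abs((state + a) - goal)
    normalizer = sum(math.exp(BETA * score(a)) for a in ACTIONS)
    return math.exp(BETA * score(action)) / normalizer

def posterior_over_goals(trajectory):
    """Given a list of (state, action) pairs, return P(goal | trajectory)."""
    posterior = {g: 1.0 / len(GOALS) for g in GOALS}  # uniform prior
    for state, action in trajectory:
        for g in GOALS:
            posterior[g] *= action_likelihood(state, action, g)
    total = sum(posterior.values())
    return {g: p / total for g, p in posterior.items()}

# Watching the agent step right twice from position 4 shifts belief toward
# the goal at 9 and away from the goal at 0.
belief = posterior_over_goals([(4, +1), (5, +1)])
```

The same recursive-update structure underlies richer models that also infer beliefs and preferences rather than goals alone; scaling it to realistic state spaces is precisely where the limitations discussed above arise.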