About this Research Topic
In this Research Topic, we aim to address the interdisciplinary fusion of artificial intelligence, robotics, cognitive development, and neuroscience in spatial cognition and spatial reasoning.
For example, at the intersection of natural language processing and computer vision, research on vision-and-language navigation (VLN) has recently emerged. However, few studies have applied VLN technology to real robots in real-world settings. Looking ahead, VLN systems that operate in real environments are needed.
Additionally, in robotics AI, it would be useful to draw on cognitive and neuroscientific findings on place-related concept formation and spatial language acquisition.
To achieve the above, a constructive approach using robots operating in the real world would be effective.
This Research Topic welcomes contributions ranging from fundamental to applied research related to spatial reasoning using robots and to semantic understanding, including language interaction, at the intersection of artificial intelligence (e.g., machine learning), robotics, and computational neuroscience.
We encourage technical contributions, e.g., on semantic SLAM, place recognition, and navigation, for performing tasks involving spatial movement. We also look forward to contributions on computational models of spatial reasoning, such as models inspired by the hippocampal formation and spatial cognitive capabilities. Contributions on cutting-edge machine learning methods for use in the above are also in focus.
Keywords: Simultaneous localization and mapping, Spatial reasoning, Place recognition and categorization, Navigation and path planning, Spatial language understanding
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.