About this Research Topic
This Research Topic is intended to provide an overview of the research being carried out in both NLP and CV to allow robots to learn and improve their capabilities for exploring, modeling, and learning about the physical world. As this integration requires an interdisciplinary approach, the Research Topic aims to gather researchers with broad expertise in various fields — machine learning, computer vision, natural language processing, neuroscience, and psychology — to discuss their cutting-edge work as well as their perspectives on future directions in this exciting space of language, vision, and interaction in robots.
This topic focuses on (but is not limited to) the following problems:
i) how to jointly represent verbal and visual information into a robotic system;
ii) how to learn and progressively improve communicative and multimodal skills, interactively or autonomously;
iii) how to answer questions by also integrating visual stimuli;
iv) how to detect sentiments and emotions using language, gestures, poses, movements, and facial expressions;
v) how to efficiently perform on-robot NLP and CV without sacrificing the quality of models run on servers;
vi) how to perform AI system hardware/software co-design in robots;
vii) how to enable cooperation mechanisms among robots to integrate complementary multimodal skills;
viii) how to evaluate the quality of the human-robot interactions.
Original contributions addressing these issues are sought, covering the whole range of theoretical and practical aspects, technologies, and systems.
Keywords: Computer Vision, Natural Language Processing, On-device AI, Robotics, Human-Robot Interaction
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.