In recent years, computer vision and Artificial Intelligence (AI) technologies have delivered efficient and accurate capabilities for tasks such as robot perception, simultaneous localization and mapping (SLAM), environment understanding, and target recognition, establishing robot vision as a distinct subject area. Benefiting from advances in hardware for storing and processing large amounts of data, robots can now perceive their surroundings through visual sensors such as cameras, enabling them to recognize, classify, and understand objects, terrain, and obstacles in the environment. Vision AI techniques further enhance scene comprehension by reconstructing accurate 3D models from 2D images. Recent developments such as Neural Radiance Fields (NeRF) and 3D Gaussian Splatting have expanded the possibilities for improving robotic perception, navigation, mapping, and decision-making.
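As a brief, simplified sketch of how such methods recover 3D structure from 2D images, NeRF-style approaches model a scene as a density field σ and a view-dependent color field c, and render the color C of a camera ray r(t) = o + t·d by volume integration along the ray (the exact formulation varies across methods):

C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t), \mathbf{d})\,\mathrm{d}t,
\qquad
T(t) = \exp\!\left(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\,\mathrm{d}s\right),

where T(t) is the accumulated transmittance between the near bound t_n and t. 3D Gaussian Splatting replaces this ray integral with an ordered alpha-blend of projected anisotropic 3D Gaussians, trading a larger explicit representation for real-time rendering.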
However, robot vision technology still faces challenges, both in theoretical advances and in efficient, reliable industrial applications. Open problems include: improving the weak perception and limited adaptability of robots in complex environments characterized by weak texture, changing lighting and weather, occlusion, and dynamic objects; mitigating the accuracy degradation caused by missing visual data during long-term navigation and proposing feasible recovery solutions; addressing the shortcomings of 3D environment reconstruction techniques in terms of memory footprint, time cost, and quality, in particular by drawing on recent advances in computer vision to achieve efficient and high-precision reconstruction; and overcoming the limitations robots face in data-scarce yet critical scenarios such as underwater and disaster environments. We look forward to significant advancements in these areas and invite researchers to share their discoveries and insights in robotic vision and AI research. Themes of interest include, but are not limited to:
- Novel visual data types and processing methods
- Simultaneous localization and mapping (SLAM)
- Robot intelligence and learning
- Artificial intelligence and pattern recognition
- Signal and image processing
- 3D measurement and reconstruction
- Robot autonomous navigation and control
- Human-robot interaction
- Robot perception and data fusion
- 3D target detection and tracking
- Scene semantic segmentation and classification
- Real-time novel view rendering
- Efficient and low-cost vision algorithms and architectures
- Robot vision algorithms in unstructured environments
- Generative models and applications in robot vision
Keywords:
Robot Vision, Environmental Perception, Localization, 3D Reconstruction, Artificial Intelligence, Data Processing, Data Generation
Important Note:
All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.