Editorial on the Research Topic
Intelligent control and applications for robotics, volume II
1. Introduction
Robotic technologies have undergone decades of development and have now entered a highly intelligent stage, with wide application in fields including production and manufacturing, medical care, education, and service industries (Liu et al., 2020; Omisore et al., 2020). At present, the global robot market has exceeded $100 billion and is growing at a rate of over 17% annually. Among regional markets, the Asia-Pacific market holds an absolute leading position, with an estimated expenditure of 133 billion US dollars in 2020, accounting for 71% of the global market.
According to their functions and application fields, robots in the global market can be divided into three categories: industrial robots, service robots, and special robots (Yang et al., 2021; Keroglou et al., 2023). Robotic technologies are now being developed hand in hand with deep learning and artificial intelligence, where deep learning enables robots to achieve more accurate intelligence and thus better simulate human behavior.
2. Analysis of the Research Topic
“Perception-Decision-Action” is the fundamental framework of robots. In the perception phase, robots sense environmental information via various sensors such as cameras, LiDAR, and inertial measurement units (IMUs). High-quality perception is crucial for the safe and efficient operation of robots. Microelectromechanical system (MEMS) IMUs are widely used for self-localization in autonomous robots owing to their small size and low power consumption; however, they are susceptible to random noise and bias errors, which degrade measurement accuracy. Liu et al. investigated a low-cost MEMS IMU denoising method based on deep learning, proposing a hybrid denoising network that combines a convolutional neural network (CNN) with long short-term memory (LSTM) to eliminate random noise in the raw data and calibrate IMU errors.
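As a rough illustration of this kind of hybrid architecture, the following PyTorch sketch stacks 1-D convolutions and an LSTM over a windowed six-axis IMU signal; the layer sizes, window length, and regression-style output are assumptions for illustration rather than the exact network of Liu et al.

```python
# Minimal sketch of a CNN-LSTM IMU denoiser (illustrative; layer sizes,
# window length, and the 6-axis input format are assumptions, not the
# authors' exact architecture).
import torch
import torch.nn as nn

class CnnLstmDenoiser(nn.Module):
    def __init__(self, n_channels=6, hidden=64):
        super().__init__()
        # 1-D convolutions extract local temporal features from the raw
        # accelerometer/gyroscope window (shape: batch x channels x time).
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # LSTM models longer-range dependencies across the window.
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        # Regression head predicts the denoised 6-axis signal per time step.
        self.head = nn.Linear(hidden, n_channels)

    def forward(self, x):                    # x: (batch, 6, time)
        feats = self.cnn(x)                  # (batch, 64, time)
        feats = feats.transpose(1, 2)        # (batch, time, 64)
        out, _ = self.lstm(feats)            # (batch, time, hidden)
        return self.head(out)                # (batch, time, 6)

# Usage: train with an MSE loss against reference (e.g., calibrated) IMU data.
model = CnnLstmDenoiser()
raw = torch.randn(8, 6, 200)                 # 8 windows of 200 samples
denoised = model(raw)
```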
With the rapid advancement of artificial intelligence technologies, vision has become one of the primary modalities for robot perception (Yang et al., 2020). Robots can use image recognition techniques to extract positional and semantic information from the ambient environment for further decision-making. In practical application scenarios, complex and ambiguous backgrounds can lead to missed and false detections of small targets. Pei et al. improved the perceptual capability of YOLOv5 for small targets by enhancing the input image resolution and retaining more feature information. Beyond target identification and localization, in some scenarios robots need to extract more complex semantic information from visual data. Yang et al. utilized human joint data, including joint positions, bone vectors, joint motion, and bone motion, to predict human actions via a multi-scale attention spatiotemporal graph convolutional network. Just as humans can observe and identify objects from different perspectives, active object recognition involves identifying targets through images captured from different viewpoints. Sun et al. developed a sampling strategy and training method for viewpoint management during the robot's perspective transformation, which helps determine the optimal planning for active object recognition.
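To illustrate the skeleton-based action modeling mentioned above, the sketch below implements a single attention-augmented spatiotemporal graph-convolution layer over joint data; the adjacency matrix, channel sizes, and attention form are placeholders and not the specific multi-scale attention network of Yang et al.

```python
# Minimal sketch of one spatiotemporal graph-convolution layer over skeleton
# joints (illustrative only; not the multi-scale attention network of Yang et al.).
import torch
import torch.nn as nn

class STGraphConvLayer(nn.Module):
    def __init__(self, in_ch, out_ch, adjacency):
        super().__init__()
        # Normalized joint adjacency matrix (V x V), fixed by the skeleton layout.
        self.register_buffer("A", adjacency)
        # Learnable per-joint attention added to the fixed graph structure.
        self.attn = nn.Parameter(torch.zeros_like(adjacency))
        self.spatial = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        # Temporal convolution along the frame axis.
        self.temporal = nn.Conv2d(out_ch, out_ch, kernel_size=(9, 1), padding=(4, 0))
        self.relu = nn.ReLU()

    def forward(self, x):                        # x: (batch, C, T frames, V joints)
        x = torch.einsum("nctv,vw->nctw", x, self.A + self.attn)  # graph aggregation
        x = self.relu(self.spatial(x))           # per-joint feature transform
        return self.relu(self.temporal(x))       # temporal modeling

# Usage with a toy 25-joint skeleton over 64 frames:
A = torch.eye(25)                                # placeholder adjacency
layer = STGraphConvLayer(3, 64, A)
out = layer(torch.randn(4, 3, 64, 25))           # (4, 64, 64, 25)
```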
Planning and decision-making are crucial for the autonomy of robotic systems. Monte Carlo tree search (MCTS) is a probabilistic search algorithm widely used in decision-making and path-planning problems; however, owing to its extensive random search, it is inherently inefficient when addressing individual problems. Li W. et al. introduced a self-learning MCTS (SL-MCTS) by combining MCTS with a dual-branch neural network. Compared with traditional MCTS, SL-MCTS can find better solutions with fewer iterations, significantly enhancing search efficiency and solution quality in path-planning tasks. Zhang et al. applied MCTS to autonomous decision-making in aerial combat, incorporating deep reinforcement learning (DRL) to guide the MCTS search for maneuvers in continuous action spaces without relying on human knowledge to assist the agents in decision-making.
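For concreteness, the following sketch shows the kind of network-guided node selection (a PUCT-style rule) and value backup that such neural MCTS variants commonly rely on; the priors, values, and exploration constant are placeholders, and this is not the exact SL-MCTS procedure of Li W. et al.

```python
# Minimal sketch of neural-network-guided MCTS node selection (PUCT rule);
# priors, values, and the exploration constant are illustrative placeholders.
import math

class Node:
    def __init__(self, prior):
        self.prior = prior        # policy-network probability of this child
        self.visits = 0
        self.value_sum = 0.0
        self.children = {}        # action -> Node

    def value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.5):
    """Pick the child maximising Q + U, where U favours high-prior, rarely
    visited actions (exploration) and Q is the averaged backed-up value."""
    best_action, best_child, best_score = None, None, -float("inf")
    for action, child in node.children.items():
        u = c_puct * child.prior * math.sqrt(node.visits) / (1 + child.visits)
        score = child.value() + u
        if score > best_score:
            best_action, best_child, best_score = action, child, score
    return best_action, best_child

def backup(path, leaf_value):
    """Propagate the value-network estimate at the leaf back up the path."""
    for node in reversed(path):
        node.visits += 1
        node.value_sum += leaf_value
```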
While DRL demonstrates outstanding performance in planning and decision-making tasks owing to its powerful self-learning capabilities, pursuing the maximization of long-term returns in atypical Markov decision processes (MDPs) can waste computational resources. Additionally, errors in value-function estimation can result in suboptimal policies. Pan et al. addressed these limitations in atypical MDPs via an average-reward method that forms an unbiased, low-variance target Q-value with a simplified network architecture. Their approach showed significant advantages in learning efficiency, effective control, and computational resource utilization compared with current methods. Meanwhile, Li S. et al. discussed the estimation bias issue, suggesting that a trade-off between underestimation and overestimation can enhance DRL sample efficiency. They also introduced an actor-critic framework that learns values and policies within the same network and balances underestimation against overestimation.
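As one concrete way to form an average-reward target, the sketch below uses the standard differential Q-learning target, in which a running estimate of the average reward per step replaces the discount factor; whether this matches Pan et al.'s exact formulation is an assumption.

```python
# Minimal sketch of an average-reward (differential) TD target for Q-learning,
# a standard way to avoid discount-driven bias in continuing tasks; whether
# this matches Pan et al.'s exact formulation is an assumption.
def differential_q_target(reward, avg_reward, next_q_values):
    """Target = r - rho + max_a' Q(s', a'), where rho is the running estimate
    of the average reward per step (it replaces the discount factor gamma)."""
    return reward - avg_reward + max(next_q_values)

def update_avg_reward(avg_reward, td_error, step_size=0.01):
    """The average-reward estimate itself is nudged by the TD error."""
    return avg_reward + step_size * td_error

# Example: one tabular-style update.
q_target = differential_q_target(reward=1.0, avg_reward=0.8,
                                 next_q_values=[0.2, 0.5, 0.4])
```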
In multi-agent systems, beyond individual autonomous decision-making, coordinated actions among agents can improve the efficiency of the entire system. Hu et al. explored the problem of minimizing transportation costs in an automated guided vehicle (AGV) cluster. They employed a hierarchical planning approach to decompose the integrated problem into an upper-level task allocation problem and a lower-level path planning problem, so that the sum of the request cost and the conflict delay cost of the entire system can be minimized via a hybrid discrete state transition algorithm based on elite solution sets and a tabu list method.
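The following toy sketch illustrates the two-level decomposition, with an upper-level assignment of requests to vehicles priced by a lower-level path cost plus a conflict-delay penalty; the brute-force allocation and Manhattan-distance planner are stand-ins and not the hybrid discrete state transition algorithm of Hu et al.

```python
# Schematic sketch of the two-level decomposition: an upper level assigns
# transport requests to vehicles, a lower level prices each assignment by its
# path cost plus a conflict-delay penalty.  The brute-force allocation below
# is a stand-in, NOT the hybrid discrete state transition algorithm of Hu et al.
from itertools import permutations

def path_cost(start, goal):
    # Placeholder lower-level planner: Manhattan distance on a grid.
    return abs(start[0] - goal[0]) + abs(start[1] - goal[1])

def total_cost(assignment, agv_positions, requests, conflict_penalty=2):
    cost = sum(path_cost(agv_positions[agv], requests[req])
               for agv, req in assignment.items())
    # Crude conflict model: penalise requests that share the same goal cell.
    goals = [requests[r] for r in assignment.values()]
    return cost + conflict_penalty * (len(goals) - len(set(goals)))

def allocate(agv_positions, requests):
    """Upper level: exhaustively try request orderings (fine for tiny instances)
    and keep the assignment with the lowest combined cost."""
    agvs = list(agv_positions)
    best = None
    for order in permutations(requests, len(agvs)):
        candidate = dict(zip(agvs, order))
        c = total_cost(candidate, agv_positions, requests)
        if best is None or c < best[1]:
            best = (candidate, c)
    return best

# Example: two AGVs, two pick-up requests on a grid.
assignment, cost = allocate({"agv1": (0, 0), "agv2": (5, 5)},
                            {"r1": (1, 2), "r2": (4, 4)})
```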
In the execution phase, robots rely heavily on control algorithms to perform specific actions (Zheng et al., 2021). Tasks that involve extreme environmental conditions, high workloads, and complex operational procedures demand particularly high control precision from the robots. Zhao et al. developed a variable-damping controller for the end-effector of a space station robotic arm; by using reinforcement learning to regulate the variable damping of the robot limb, the arm's resistance to disturbances can be greatly enhanced.
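As a schematic of such a controller, the sketch below applies a dissipative damping force whose coefficient is supplied by a learned policy at each control step; the point-mass dynamics, damping bounds, and policy interface are assumptions rather than Zhao et al.'s actual controller.

```python
# Minimal sketch of a variable-damping end-effector control law in which the
# damping coefficient is supplied by a learned policy; the dynamics, bounds,
# and policy interface are assumptions, not Zhao et al.'s controller.
import numpy as np

def damping_force(velocity, damping):
    """Dissipative force opposing end-effector velocity: F = -D * v."""
    return -damping * velocity

def step(state, policy, dt=0.01, mass=1.0):
    """One control step: the policy maps the current state (velocity and an
    external disturbance estimate) to a damping coefficient within safe bounds."""
    velocity, disturbance = state
    damping = np.clip(policy(state), 0.1, 50.0)    # learned, bounded damping
    force = damping_force(velocity, damping) + disturbance
    velocity = velocity + (force / mass) * dt      # simple point-mass dynamics
    return (velocity, disturbance), damping

# Example with a hand-written stand-in for the trained policy:
policy = lambda s: 5.0 + 10.0 * abs(s[1])          # stiffer damping under disturbance
state = (0.2, 1.5)                                  # (velocity, disturbance force)
for _ in range(3):
    state, d = step(state, policy)
```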
3. Discussion and conclusion
With the continuous development of mobile robots, technologies such as multi-sensor fusion, control systems, and intelligent software are constantly being upgraded to meet the demands of more application scenarios. According to statistical analyses, the mobile robot market is expected to exceed $46 billion by 2025. Mobile robots have been applied in fields such as manufacturing, healthcare, and the military, with enormous potential and vast room for future development. The most cutting-edge technologies in robotics include flexible robot technology, liquid metal control, electromyography-based control, autonomous driving, virtual reality (VR) robotics, optogenetics, brain-computer interface (BCI) technology, machine learning (ML), natural language processing (NLP), and blockchain. These technologies enable a wider range of application scenarios for the development of robots.
Overall, these technologies cover multiple aspects of the robotics field: from perception and decision-making to execution, from autonomous learning to interaction with humans, from single-modality to multimodal perception, from hardware to software, and from single decision-making to multi-task collaboration. All of them have driven the development of robotics. Some, such as robot vision and robot grasping, are already mature, while others, such as robot voice and robot navigation, are still developing rapidly. In every case, they open up greater potential for robot applications and are constantly changing the way we live and work. At the same time, however, robots also bring new challenges and issues, such as robot ethics, robot law, and robot safety, which we must jointly explore and resolve to ensure the healthy and sustainable development of robotics.
Author contributions
YZ: Writing—original draft, Writing—review & editing.
Funding
The author(s) declare financial support was received for the research, authorship, and/or publication of this article. This work was supported in part by the National Natural Science Foundation of China under Grants U1913201 and 61973296, and in part by the Guangdong Basic and Applied Basic Research Foundation under Grant 2021B1515120038.
Conflict of interest
The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Keroglou, C., Kansizoglou, I., Michailidis, P., Oikonomou, K. M., Papapetros, I. T., Dragkola, P., et al. (2023). A survey on technical challenges of assistive robotics for elder people in domestic environments: the ASPiDA concept. IEEE Trans. Med. Robot. Bionics 5, 196–205. doi: 10.1109/TMRB.2023.3261342
Liu, Y., Ma, X., Shu, L., Hancke, G. P., and Abu-Mahfouz, A. M. (2020). From Industry 4.0 to Agriculture 4.0: current status, enabling technologies, and research challenges. IEEE Trans. Ind. Inform. 17, 4322–4334. doi: 10.1109/TII.2020.3003910
Omisore, O. M., Han, S., Xiong, J., Li, H., Li, Z., and Wang, L. (2020). A review on flexible robotic systems for minimally invasive surgery. IEEE Trans. Syst. Man Cybern. Syst. 52, 631–644. doi: 10.1109/TSMC.2020.3026174
Yang, C., Zhu, Y., and Chen, Y. A. (2021). A review of human–machine cooperation in the robotics domain. IEEE Trans. Hum. Mach. Syst. 52, 12–25. doi: 10.1109/THMS.2021.3131684
Yang, J., Wang, C., Jiang, B., Song, H., and Meng, Q. (2020). Visual perception enabled industry intelligence: state of the art, challenges and prospects. IEEE Trans. Ind. Inform. 17, 2204–2219. doi: 10.1109/TII.2020.2998818
Keywords: robotics, perception, decision-making, autonomous control, applications
Citation: Zhou Y (2023) Editorial: Intelligent control and applications for robotics, volume II. Front. Neurorobot. 17:1282982. doi: 10.3389/fnbot.2023.1282982
Received: 25 August 2023; Accepted: 23 October 2023;
Published: 31 October 2023.
Edited and reviewed by: Florian Röhrbein, Technische Universität Chemnitz, Germany
Copyright © 2023 Zhou. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Yimin Zhou, ym.zhou@siat.ac.cn