EDITORIAL article

Front. Robot. AI, 12 July 2021
Sec. Human-Robot Interaction
This article is part of the Research Topic Artificial Intelligence and Human Movement in Industries and Creation.

Editorial: Artificial Intelligence and Human Movement in Industries and Creation

Kosmas Dimitropoulos1*, Petros Daras1, Sotiris Manitsaris2, Frederic Fol Leymarie3 and Sylvain Calinon4

  • 1Information Technologies Institute, Centre for Research and Technology Hellas, Thessaloniki, Greece
  • 2Centre for Robotics, MINES ParisTech, PSL Université Paris, Paris, France
  • 3Department of Computing, Goldsmiths University of London, London, United Kingdom
  • 4Idiap Research Institute, Martigny, Switzerland

Recent advances in human motion sensing technologies and machine learning have enhanced the potential of Artificial Intelligence (AI) to improve our quality of life, increase productivity and reshape multiple industries, including the cultural and creative industries. To achieve this goal, humans must remain at the center of AI: systems should learn from humans and collaborate effectively with them. Human-Centred Artificial Intelligence (HAI) is expected to create opportunities and challenges that cannot yet be foreseen. Any type of programmable entity (robots, computers, autonomous vehicles, drones, Internet of Things devices, etc.) will have different layers of perception and sophisticated HAI algorithms that detect human intentions and behaviors (Psaltis et al., 2017) and learn continuously from them. Thus, every intelligent system will be able to capture human motion, analyze it (Zhang et al., 2019), detect poses and recognize gestures (Chatzis et al., 2020; Stergioulas et al., 2021) and activities (Papastratis et al., 2020; Papastratis et al., 2021; Konstantinidis et al., 2021), including facial expressions and gaze (Bek et al., 2020), enabling natural collaboration with humans.

Different sensing technologies, such as optical motion capture (MoCap) systems, wearable inertial sensors, RGB or depth cameras, and sensors of other modalities, are employed to capture human movement in the scene and transform this information into a digital representation. Most researchers focus on a single sensing modality - owing to the simplicity and low cost of the resulting system - and design either conventional machine learning algorithms or complex deep learning architectures to analyze human motion data (Konstantinidis et al., 2018; Konstantinidis et al., 2020). Such cost-effective approaches have been applied to a wide range of application domains, including entertainment (Kaza et al., 2016; Baker, 2020), health (Dias et al.; Konstantinidis et al., 2021), education (Psaltis et al., 2017; Stefanidis et al., 2019), sports (Tisserand et al., 2017), robotics (Jaquier et al., 2020; Gao et al., 2021), and art and cultural heritage (Dimitropoulos et al., 2018), demonstrating the great potential of AI technology.

Based on the above, it is evident that HAI is currently at the center of scientific debate and technological exhibitions. Developing and deploying intelligent machines is both an economic challenge (e.g., flexibility, simplification, ergonomics) and a societal one (e.g., safety, transparency), not only on the factory floor but in the real world in general. The papers in this Research Topic adopt different sensing technologies, such as depth sensors, inertial suits, IMU sensors and force-sensing resistors (FSRs), to capture human movement, and they present diverse approaches for modeling the resulting temporal data.

More specifically, Sakr et al. investigate the feasibility of employing FSRs worn on the arm to measure force myography (FMG) signals for isometric force/torque estimation. A two-stage regression strategy is employed to enhance the performance of the FMG bands: in the first stage, three regression algorithms are used, namely a general regression neural network (GRNN), support vector regression (SVR) and random forest (RF) regression, while a GRNN is used in the second stage. Two cases are considered to explore the performance of the FMG bands in estimating (a) 3-DoF force and 3-DoF torque and (b) 6-DoF force and torque at once. In addition, the impact of sensor placement and of the spatial coverage of the FMG measurements is studied.
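
As a concrete illustration of such a two-stage strategy, the sketch below stacks the three first-stage regressors and feeds their concatenated predictions to a second-stage GRNN. This is a minimal Python/scikit-learn reading of the pipeline, not the authors' implementation: the hand-rolled GRNN (Nadaraya-Watson kernel regression with a Gaussian kernel), the hyperparameter values and the choice to stack first-stage predictions are all illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.multioutput import MultiOutputRegressor

class GRNN:
    """Minimal generalized regression neural network: Nadaraya-Watson
    kernel regression with a Gaussian kernel of bandwidth sigma."""
    def __init__(self, sigma=0.5):
        self.sigma = sigma

    def fit(self, X, y):
        self.X_, self.y_ = np.asarray(X, float), np.asarray(y, float)
        return self

    def predict(self, X):
        # Squared distances between queries and all stored training patterns
        d2 = ((np.asarray(X, float)[:, None, :] - self.X_[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2.0 * self.sigma ** 2))   # Gaussian kernel weights
        w /= w.sum(axis=1, keepdims=True) + 1e-12   # normalize per query
        return w @ self.y_                          # weighted average of targets

# Stage 1: three regressors map FMG features to 6-DoF force/torque targets
stage1 = [
    GRNN(sigma=0.5),
    MultiOutputRegressor(SVR(kernel="rbf", C=10.0)),
    RandomForestRegressor(n_estimators=200, random_state=0),
]
stage2 = GRNN(sigma=0.5)  # Stage 2: a GRNN fuses the stacked stage-1 outputs

def fit_two_stage(X_train, y_train):
    for m in stage1:
        m.fit(X_train, y_train)
    Z = np.hstack([m.predict(X_train) for m in stage1])
    stage2.fit(Z, y_train)

def predict_two_stage(X):
    Z = np.hstack([m.predict(X) for m in stage1])
    return stage2.predict(Z)
```

Note that, to avoid target leakage, a faithful stacking setup would fit the second stage on held-out (out-of-fold) first-stage predictions rather than on the in-sample ones used in this shortcut.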

Manitsaris et al. propose a multivariate time-series approach for the recognition of professional gestures and the forecasting of their trajectories. More specifically, the authors introduce a gesture operational model, which describes how gestures are performed based on assumptions concerning the dynamic association of body entities, their synergies, their serial and non-serial mediations, and their transitioning over time from one state to another. The assumptions of this model are then translated into an equation system for each body entity through state-space modeling. The proposed method is evaluated on four industrial datasets containing gestures, commands and actions.
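
For readers less familiar with the formalism, a generic linear-Gaussian state-space model takes the form below. This is the textbook template, not the authors' exact equation system: a body entity's hidden state x_t evolves under transition dynamics and is observed through noisy measurements y_t.

```latex
\begin{aligned}
x_{t+1} &= A\,x_t + B\,u_t + w_t, & \qquad w_t &\sim \mathcal{N}(0, Q) \\
y_t     &= C\,x_t + v_t,          & \qquad v_t &\sim \mathcal{N}(0, R)
\end{aligned}
```

In such a template, the structure of the transition matrix A is where dynamic associations between body entities (synergies, serial and non-serial mediations) can be encoded, and forecasting a gesture trajectory amounts to propagating x_t forward through the transition equation.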

A comprehensive review of machine learning approaches for motor learning is presented by Caramiaux et al. The review outlines existing machine learning models for motor learning and their adaptation capabilities, and identifies three types of adaptation: parameter adaptation in probabilistic models, transfer and meta-learning in deep neural networks, and planning adaptation by reinforcement learning.
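
Of these three adaptation types, transfer learning in deep networks is the most compact to illustrate in code. The sketch below shows the generic PyTorch pattern, with layer sizes and class counts chosen purely for illustration (it is not drawn from the review): a backbone pretrained on a source motion task is frozen, and only a new output head is trained on the target task.

```python
import torch
import torch.nn as nn

# Hypothetical backbone, assumed pretrained on a large source motion dataset
backbone = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
)
for p in backbone.parameters():   # freeze: source knowledge is reused unchanged
    p.requires_grad = False

head = nn.Linear(128, 5)          # new head for an assumed 5-class target task
model = nn.Sequential(backbone, head)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)  # only the head adapts
```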

Dias et al. present an innovative, personalized motor assessment tool capable of monitoring and tracking the behavioral change of Parkinson’s disease (PD) patients (mostly related to posture, walking/gait, agility, balance and coordination impairments). The proposed assessment tool is part of the i-Prognosis Game Suite, developed within the framework of the EU-funded i-Prognosis project (www.i-prognosis.eu). Six motor assessment tests integrated in the iPrognosis Games have been designed and developed based on the UPDRS Part III examination. The efficacy of the proposed assessment tests in reflecting motor skill status, similarly to the UPDRS Part III items, is validated with 27 participants with early and moderate PD.

Bikias et al. explore the use of IMU sensors for detecting freezing-of-gait (FoG) episodes in Parkinson’s disease patients and present a novel deep learning method. The study investigates the feasibility of using a single wrist-worn inertial measurement unit (IMU) to effectively predict FoG events. The proposed method, DeepFoG, aims to facilitate the real-time detection of FoG episodes: a deep learning model is trained to automatically detect FoG events and differentiate them from stops and from walking with turns. Using a single-arm sensor, DeepFoG has the potential to achieve accuracy similar to that of previously published methods, but with fewer sensors. The main advantage of the proposed methodology lies in the simplicity and convenience of using a single smartwatch, rather than in improved accuracy.
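
To give a sense of what such a real-time pipeline involves, here is a hedged sketch: the architecture, window length, stride and class labels below are assumptions made for illustration, not the published DeepFoG design. The essential ingredients are a fixed-length sliding window over the 6-axis IMU stream and a lightweight classifier applied to each window.

```python
import numpy as np
import torch
import torch.nn as nn

WINDOW, STRIDE, CHANNELS = 128, 32, 6  # assumed window/stride; 6-axis IMU (acc + gyro)

class FoGNet(nn.Module):
    """Small 1-D CNN over IMU windows; illustrative, not the published DeepFoG network."""
    def __init__(self, n_classes=3):  # assumed classes: FoG / stop / walk-with-turn
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(CHANNELS, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):  # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

def stream_windows(signal, window=WINDOW, stride=STRIDE):
    """Slide a fixed-size window over a (time, channels) IMU stream."""
    for start in range(0, signal.shape[0] - window + 1, stride):
        yield np.ascontiguousarray(signal[start:start + window].T)  # (channels, time)

model = FoGNet().eval()
stream = np.random.randn(1000, CHANNELS).astype(np.float32)  # stand-in for live IMU data
with torch.no_grad():
    for w in stream_windows(stream):
        logits = model(torch.from_numpy(w).unsqueeze(0))  # (1, n_classes)
        label = int(logits.argmax(dim=1))  # 0=FoG, 1=stop, 2=walk-with-turn (assumed)
```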

The approaches discussed in this Research Topic offer readers a wide range of valuable paradigms that promote the use of AI and Human Movement Analysis in different application domains and at the same time provide rich material for scientific thinking.

Author Contributions

KD and SM wrote the first draft. All authors contributed to manuscript revision.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

Baker, T. (2020). The History of Motion Capture within the Entertainment Industry. Thesis. Helsinki, Finland: Metropolia University of Applied Sciences.

Bek, J., Poliakoff, E., and Lander, K. (2020). Measuring Emotion Recognition by People with Parkinson’s Disease Using Eye-Tracking with Dynamic Facial Expressions. J. Neurosci. Methods 331, 108524. doi:10.1016/j.jneumeth.2019.108524

Chatzis, T., Stergioulas, A., Konstantinidis, D., Dimitropoulos, K., and Daras, P. (2020). A Comprehensive Study on Deep Learning-Based 3D Hand Pose Estimation Methods. Appl. Sci. 10 (19), 6850. doi:10.3390/app10196850

Dimitropoulos, K., Tsalakanidou, F., Nikolopoulos, S., Kompatsiaris, I., Grammalidis, N., Manitsaris, S., et al. (2018). A Multimodal Approach for the Safeguarding and Transmission of Intangible Cultural Heritage: The Case of I-Treasures. IEEE Intell. Syst. 33 (6), 3–16. doi:10.1109/mis.2018.111144858

Gao, X., Silvério, J., Pignat, E., Calinon, S., Li, M., and Xiao, X. (2021). Motion Mappings for Continuous Bilateral Teleoperation. IEEE Robotics Automation Lett. 6 (3), 5048–5055. doi:10.1109/LRA.2021.3068924

Jaquier, N., Ginsbourger, D., and Calinon, S. (2020). “Learning from Demonstration with Model-Based Gaussian Process,” in Conference on Robot Learning (CoRL), Proceedings of Machine Learning Research (PMLR), 247–257.

Kaza, K., Psaltis, A., Stefanidis, K., Apostolakis, K. C., Thermos, S., Dimitropoulos, K., et al. (2016). “Body Motion Analysis for Emotion Recognition in Serious Games,” in International Conference on Universal Access in Human-Computer Interaction. Cham: Springer, 33–42. doi:10.1007/978-3-319-40244-4_4

Konstantinidis, D., Dimitropoulos, K., and Daras, P. (2018). “A Deep Learning Approach for Analyzing Video and Skeletal Features in Sign Language Recognition,” in 2018 IEEE International Conference on Imaging Systems and Techniques (IST) (IEEE), Krakow, Poland, 1–6.

Konstantinidis, D., Dimitropoulos, K., and Daras, P. (2021). “Towards Real-Time Generalized Ergonomic Risk Assessment for the Prevention of Musculoskeletal Disorders,” in Proceedings of the 14th ACM International Conference on PErvasive Technologies Related to Assistive Environments (PETRA), Corfu, Greece: Association for Computing Machinery.

Konstantinidis, D., Dimitropoulos, K., Langlet, B., Daras, P., and Ioakimidis, I. (2020). Validation of a Deep Learning System for the Full Automation of Bite and Meal Duration Analysis of Experimental Meal Videos. Nutrients 12 (1), 209. doi:10.3390/nu12010209

Papastratis, I., Dimitropoulos, K., and Daras, P. (2021). Continuous Sign Language Recognition through a Context-Aware Generative Adversarial Network. Sensors 21 (7), 2437. doi:10.3390/s21072437

Papastratis, I., Dimitropoulos, K., Konstantinidis, D., and Daras, P. (2020). Continuous Sign Language Recognition through Cross-Modal Alignment of Video and Text Embeddings in a Joint-Latent Space. IEEE Access 8, 91170–91180. doi:10.1109/access.2020.2993650

Psaltis, A., Apostolakis, K. C., Dimitropoulos, K., and Daras, P. (2017). Multimodal Student Engagement Recognition in Prosocial Games. IEEE Trans. Games 10 (3), 292–303. doi:10.1109/tciaig.2017.2743341

Stefanidis, K., Psaltis, A., Apostolakis, K. C., Dimitropoulos, K., and Daras, P. (2019). Learning Prosocial Skills through Multiadaptive Games: a Case Study. J. Comput. Educ. 6 (1), 167–190. doi:10.1007/s40692-019-00134-8

Stergioulas, A., Chatzis, T., Konstantinidis, D., Dimitropoulos, K., and Daras, P. (2021). “3D Hand Pose Estimation via Aligned Latent Space Injection and Kinematic Losses,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops.

Tisserand, Y., Magnenat-Thalmann, N., Unzueta, L., Linaza, M. T., Ahmadi, A., O’Connor, N. E., et al. (2017). “Preservation and Gamification of Traditional Sports,” in Mixed Reality and Gamification for Cultural Heritage. (Cham, Switzerland: Springer International Publishing), 421–446. doi:10.1007/978-3-319-49607-8_17

Zhang, H.-B., Zhang, Y.-X., Zhong, B., Lei, Q., Yang, L., Du, J.-X., et al. (2019). A Comprehensive Survey of Vision-Based Human Action Recognition Methods. Sensors 19 (5), 1005. doi:10.3390/s19051005

Keywords: artificial intelligence, human motion analysis, human-centred, machine learning, motion capture

Citation: Dimitropoulos K, Daras P, Manitsaris S, Fol Leymarie F and Calinon S (2021) Editorial: Artificial Intelligence and Human Movement in Industries and Creation. Front. Robot. AI 8:712521. doi: 10.3389/frobt.2021.712521

Received: 20 May 2021; Accepted: 28 June 2021;
Published: 12 July 2021.

Edited and reviewed by:

Astrid Marieke Rosenthal-von Der Pütten, RWTH Aachen University, Germany

Copyright © 2021 Dimitropoulos, Daras, Manitsaris, Fol Leymarie and Calinon. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Kosmas Dimitropoulos, dimitrop@iti.gr

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.