ORIGINAL RESEARCH article
Front. Robot. AI
Sec. Robot Design
Volume 11 - 2024
doi: 10.3389/frobt.2024.1475069
This article is part of the Research Topic Advancing AI Algorithms and Morphological Designs for Redundant Robots in Diverse Industries.
Learning signs with NAO: humanoid robot as a tool for helping to learn Colombian Sign Language
Provisionally accepted
Universidad de La Sabana, Chía, Colombia
Sign languages are one of the main rehabilitation methods for dealing with hearing loss. Like any other language, geographical location influences how signs are made. In Colombia in particular, the hard-of-hearing population lacks education in Colombian Sign Language, mainly because of the reduced number of interpreters in the educational sector. To help mitigate this problem, Machine Learning combined with data gloves or Computer Vision technologies has emerged as the basis of sign-translation systems and educational tools; however, in Colombia such solutions are scarce. On the other hand, humanoid robots such as the NAO have shown significant results when used to support a learning process. This paper proposes a performance evaluation for the design of an activity to support the learning of all 11 color-based signs from Colombian Sign Language. The activity consists of an evaluation method with two modes activated through user interaction: the first mode lets the user choose the color sign to be evaluated, and the second selects the color sign randomly. To achieve this, the MediaPipe tool was used to extract torso and hand coordinates, which served as the input to a Neural Network. The performance of the Neural Network was evaluated running continuously in two scenarios: first, video captured from the computer's webcam, which showed an overall F1 score of 91.6% and a prediction time of 85.2 ms; second, wireless video streaming from the NAO H25 V6 camera, which had an F1 score of 93.8% and a prediction time of 2.29 s. In addition, we took advantage of the joint redundancy of the NAO H25 V6: with its 25 degrees of freedom we were able to use gestures that created nonverbal human-robot interactions, which may be useful in future work implementing this activity with a deaf community.
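The abstract describes a pipeline in which MediaPipe landmark coordinates are flattened into a feature vector and classified by a neural network into one of the 11 color signs. The following is a minimal illustrative sketch, not the authors' implementation: the landmark counts, hidden-layer size, and randomly initialized weights are all assumptions standing in for a trained model, and a random vector stands in for a real MediaPipe frame.

```python
import numpy as np

# Hypothetical feature layout: (x, y, z) per landmark, for an assumed
# 33 pose (torso) + 21 hand landmarks, as MediaPipe-style extractors emit.
N_LANDMARKS = 33 + 21
N_FEATURES = N_LANDMARKS * 3
N_CLASSES = 11                  # the 11 color-based signs

rng = np.random.default_rng(0)

# Randomly initialized weights stand in for the trained network.
W1 = rng.normal(0.0, 0.1, (N_FEATURES, 64))
b1 = np.zeros(64)
W2 = rng.normal(0.0, 0.1, (64, N_CLASSES))
b2 = np.zeros(N_CLASSES)

def predict(landmarks: np.ndarray) -> int:
    """Forward pass: ReLU hidden layer, softmax output, argmax class index."""
    h = np.maximum(0.0, landmarks @ W1 + b1)
    logits = h @ W2 + b2
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(np.argmax(probs))

# Dummy landmark vector in place of a real webcam or NAO camera frame.
frame = rng.normal(0.0, 1.0, N_FEATURES)
print(predict(frame))  # prints an integer class index in [0, 11)
```

In the paper's setup this classifier would run continuously on frames streamed either from the computer's webcam or from the NAO H25 V6 camera; only the image source changes, not the landmark-to-class mapping.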
Keywords: Colombian Sign Language (CSL), neural networks, machine learning, Social Robots, Education, Human-robot interaction (HRI)
Received: 02 Aug 2024; Accepted: 24 Oct 2024.
Copyright: © 2024 Mora-Zarate, Garzón-Castro and Castellanos-Rivillas. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Claudia L. Garzón-Castro, Universidad de La Sabana, Chía, Colombia
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.