AUTHOR=Maya-Martínez Sergio-Uriel, Argüelles-Cruz Amadeo-José, Guzmán-Zavaleta Zobeida-Jezabel, Ramírez-Cadena Miguel-de-Jesús
TITLE=Pedestrian detection model based on Tiny-Yolov3 architecture for wearable devices to visually impaired assistance
JOURNAL=Frontiers in Robotics and AI
VOLUME=10
YEAR=2023
URL=https://www.frontiersin.org/journals/robotics-and-ai/articles/10.3389/frobt.2023.1052509
DOI=10.3389/frobt.2023.1052509
ISSN=2296-9144
ABSTRACT=
Introduction: Wearable assistive devices for the visually impaired based on video cameras are a rapidly evolving technology, and one of their main challenges is finding computer vision algorithms that can be implemented on low-cost embedded devices.
Objectives and Methods: This work presents a Tiny You Only Look Once (Tiny-YOLOv3) architecture for pedestrian detection that can be implemented on low-cost wearable devices, as an alternative for the development of assistive technologies for the visually impaired.
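Tiny-YOLOv3 predicts boxes relative to a small set of anchor priors, and the four- and six-anchor variants compared below are typically obtained by k-means clustering over the training-set bounding-box sizes, which is the standard YOLO procedure. A minimal sketch of that step, assuming ground-truth (width, height) pairs; the 1 − IoU distance and the choice of k are the usual YOLO conventions, not details confirmed by this abstract:

```python
import numpy as np

def iou_wh(boxes, anchors):
    """Pairwise IoU between (w, h) pairs, assuming boxes share a corner."""
    inter = np.minimum(boxes[:, None, 0], anchors[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + \
            (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=100, seed=0):
    """Cluster ground-truth (w, h) pairs into k anchors using 1 - IoU distance."""
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)  # nearest = highest IoU
        new = np.array([boxes[assign == i].mean(axis=0) if np.any(assign == i)
                        else anchors[i] for i in range(k)])
        if np.allclose(new, anchors):
            break
        anchors = new
    return anchors

# Hypothetical usage: wh is an (N, 2) array of pedestrian box sizes.
# wh = load_ground_truth_sizes(...)   # placeholder, not part of this abstract
# print(kmeans_anchors(wh, k=4))
# print(kmeans_anchors(wh, k=6))
```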
Results: The recall of the proposed refined model improves on the original model by 71% when working with four anchor boxes and by 66% with six anchor boxes. The accuracy achieved on the same data set increases by 14% and 25%, respectively, and the F1 score improves by 57% and 55%. The average accuracy of the models improves by 87% and 99%. The refined models correctly detected 3098 and 2892 objects with four and six anchor boxes, respectively, 77% and 65% more than the original model, which correctly detected 1743 objects.
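For reference, the 77% and 65% detection-count improvements follow directly from the quoted counts, and the F1 values compared above are the usual harmonic mean of precision and recall rather than a separate metric. A quick check of that arithmetic:

```python
# Improvement in correctly detected objects, from the counts quoted above.
baseline, four_anchor, six_anchor = 1743, 3098, 2892
print(f"four anchors: +{four_anchor / baseline - 1:.1%}")  # +77.7%
print(f"six anchors:  +{six_anchor / baseline - 1:.1%}")   # +65.9%

# F1 is the harmonic mean of precision and recall.
def f1(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)
```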
Discussion: Finally, the model was optimized for the Jetson Nano embedded system, a case study for low-power embedded devices, and for a desktop computer. In both cases, the graphics processing unit (GPU) and the central processing unit (CPU) were tested, and a documented comparison with existing solutions aimed at serving visually impaired people was performed.
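The abstract does not name the optimization toolchain; on the Jetson Nano the usual route is converting the trained network to a TensorRT engine, typically from an ONNX export, with reduced precision enabled. A hypothetical sketch of that conversion, assuming TensorRT 8 (as shipped in recent JetPack images) and a `tiny_yolov3.onnx` export of the refined model:

```python
import tensorrt as trt  # available in NVIDIA's JetPack images

LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path="tiny_yolov3.onnx", fp16=True):
    """Build a serialized TensorRT engine from an ONNX export of the detector."""
    builder = trt.Builder(LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))
    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 28  # 256 MiB, modest enough for the Nano
    if fp16 and builder.platform_has_fast_fp16:
        config.set_flag(trt.BuilderFlag.FP16)  # half precision suits the Nano GPU
    return builder.build_serialized_network(network, config)

# serialized = build_engine()
# open("tiny_yolov3.engine", "wb").write(serialized)
```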
Conclusion: We performed the desktop tests with an RTX 2070S graphics card, on which processing an image took about 2.8 ms. The Jetson Nano board processed an image in about 110 ms, which still offers the opportunity to generate alert notifications in support of visually impaired mobility.
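Those latencies translate to roughly 1/0.0028 ≈ 357 frames per second on the desktop GPU and 1/0.110 ≈ 9 frames per second on the Jetson Nano, the headroom that makes per-frame alert generation feasible. A minimal timing harness of the kind used for such measurements, where `detect` is a placeholder for the deployed model's inference call:

```python
import time

def mean_latency(detect, frames, warmup=10):
    """Average per-image inference time in seconds over a list of frames."""
    for frame in frames[:warmup]:       # warm-up pass: engine/driver initialization
        detect(frame)
    start = time.perf_counter()
    for frame in frames:
        detect(frame)
    return (time.perf_counter() - start) / len(frames)

# Hypothetical usage with the deployed detector:
# t = mean_latency(model.detect, frames)
# print(f"{t * 1000:.1f} ms/image -> {1 / t:.0f} FPS")
```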