ORIGINAL RESEARCH article
Front. Artif. Intell.
Sec. Pattern Recognition
Volume 7 - 2024 | doi: 10.3389/frai.2024.1467051
Multimodal Driver Emotion Recognition Using Motor Activity and Facial Expressions
Provisionally accepted
- 1 Autonomous University of Zacatecas, Zacatecas, Mexico
- 2 Universidad Mayor de Chile, Providencia, Chile
- 3 Universidad Continental, Arequipa, Peru
- 4 Catholic University of Santa María, Arequipa, Arequipa, Peru
Driving performance can be significantly impaired when a person experiences intense emotions behind the wheel. Research shows that emotions such as anger, sadness, agitation, and joy can increase the risk of traffic accidents. This study introduces a methodology to recognize four specific emotions using an intelligent model that processes and analyzes motor activity and driver behavior signals, generated by interactions with basic driving elements, together with facial geometry images captured during emotion induction. The research applies machine learning to identify the motor activity signals most relevant to emotion recognition. In addition, a pre-trained Convolutional Neural Network (CNN) model is employed to extract probability vectors from images corresponding to the four emotions under investigation. These data sources are integrated through a unidimensional network for emotion classification. The main contribution of this research is a multimodal intelligent model that combines motor activity signals and facial geometry images to accurately recognize four specific emotions (anger, sadness, agitation, and joy) in drivers, achieving 96.0% accuracy in a simulated environment. The study confirmed a significant relationship between drivers' motor activity, behavior, facial geometry, and the induced emotions.
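The abstract describes a late-fusion architecture: a pre-trained CNN produces a four-emotion probability vector from each facial image, which is combined with selected motor activity signals and classified by a small unidimensional network. The sketch below illustrates this general idea only; the MobileNetV2 backbone, the number of motor features, and the layer sizes are illustrative assumptions, not the authors' reported configuration.

```python
# Hypothetical sketch of the multimodal fusion idea, not the paper's exact model.
import torch
import torch.nn as nn
from torchvision import models

class MultimodalEmotionNet(nn.Module):
    def __init__(self, n_motor_features: int = 12, n_emotions: int = 4):
        super().__init__()
        # Transfer learning: reuse an ImageNet backbone and replace its head
        # so it outputs scores for the four target emotions (assumed backbone).
        backbone = models.mobilenet_v2(weights="IMAGENET1K_V1")
        backbone.classifier[1] = nn.Linear(backbone.last_channel, n_emotions)
        self.cnn = backbone
        # Unidimensional network that fuses the CNN probability vector with
        # the selected motor activity signals (e.g., steering, pedal use).
        self.fusion = nn.Sequential(
            nn.Linear(n_emotions + n_motor_features, 64),
            nn.ReLU(),
            nn.Linear(64, n_emotions),
        )

    def forward(self, face_img: torch.Tensor, motor: torch.Tensor) -> torch.Tensor:
        face_probs = torch.softmax(self.cnn(face_img), dim=1)  # (B, 4)
        fused = torch.cat([face_probs, motor], dim=1)          # (B, 4 + n_motor)
        return self.fusion(fused)                              # emotion logits

# Example forward pass with dummy inputs.
model = MultimodalEmotionNet()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 12))
print(logits.shape)  # torch.Size([2, 4])
```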
Keywords: facial emotion recognition, motor activity, driver emotions, transfer learning, convolutional neural network, ADAS
Received: 19 Jul 2024; Accepted: 04 Nov 2024.
Copyright: © 2024 Espino-Salinas, Luna-García, Celaya-Padilla, Barría-Huidobro, Gamboa Rosales, Rondon and Villalba-Condori. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Carlos H. Espino-Salinas, Autonomous University of Zacatecas, Zacatecas, Mexico
Huizilopoztli Luna-García, Autonomous University of Zacatecas, Zacatecas, Mexico
José M. Celaya-Padilla, Autonomous University of Zacatecas, Zacatecas, Mexico
Cristian Barría-Huidobro, Universidad Mayor de Chile, Providencia, Chile
Nadia K. Gamboa Rosales, Autonomous University of Zacatecas, Zacatecas, Mexico
David Rondon, Universidad Continental, Arequipa, Peru
Klinge O. Villalba-Condori, Catholic University of Santa María, Arequipa, Arequipa, Peru
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.