
ORIGINAL RESEARCH article

Front. Comput. Neurosci.
Volume 18 - 2024 | doi: 10.3389/fncom.2024.1485121
This article is part of the Research Topic Computational Intelligence for Signal and Image Processing, Volume II.

FacialNet: Facial Emotion Recognition For Mental Health Analysis Using UNet Segmentation With Transfer Learning Model

Provisionally accepted
  • 1 Islamia University of Bahawalpur, Bahawalpur, Pakistan
  • 2 Beibu Gulf University, Qinzhou, China
  • 3 Department of Information Technology, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Makkah, Saudi Arabia
  • 4 Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
  • 5 Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Alkharj, Saudi Arabia
  • 6 Department of Computer Science and Information Systems, College of Applied Sciences, University of Almaarefa, Dariyah, Riyadh, Saudi Arabia
  • 7 Yeungnam University, Gyeongsan, North Gyeongsang, Republic of Korea

The final, formatted version of the article will be published soon.

    Facial emotion recognition (FER) can serve as a valuable tool for assessing emotional states, which are often linked to mental health. However, mental health encompasses a broad range of factors that go beyond facial expressions. While FER provides insights into certain aspects of emotional well-being, it can be used in conjunction with other assessments to form a more comprehensive understanding of an individual's mental health. This research proposes a framework for human FER using UNet image segmentation and transfer learning with the EfficientNetB4 model (called FacialNet). The proposed model demonstrates promising results, achieving an accuracy of 90% for six emotion classes (happy, sad, fear, pain, anger, and disgust) and 96.39% for binary classification (happy and sad). The significance of FacialNet is demonstrated through extensive experiments against various machine learning and deep learning models, as well as state-of-the-art prior works in FER, and is further validated using a cross-validation technique, ensuring reliable performance across different data splits. The findings highlight the effectiveness of leveraging UNet image segmentation and EfficientNetB4 transfer learning for accurate and efficient human facial emotion recognition, offering promising avenues for real-world applications in emotion-aware systems and affective computing platforms. Experimental findings reveal that the proposed approach performs substantially better than existing works, with an improved accuracy of 96.39% compared to the existing 94.26%.
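    The abstract describes a two-stage pipeline: a UNet model first segments the facial region, and an EfficientNetB4 backbone adapted via transfer learning then classifies the segmented face into one of six emotions. The following is a minimal TensorFlow/Keras sketch of such a pipeline; the helper names (segment_face, build_classifier), the input resolution, and the classifier head are illustrative assumptions and are not taken from the published implementation.

        # Illustrative sketch only: UNet-based face segmentation followed by
        # EfficientNetB4 transfer learning for six-class emotion recognition.
        # The segmentation model `unet` is assumed to be trained separately.
        import tensorflow as tf
        from tensorflow.keras import layers, models
        from tensorflow.keras.applications import EfficientNetB4

        NUM_CLASSES = 6          # happy, sad, fear, pain, anger, disgust
        IMG_SIZE = (380, 380)    # EfficientNetB4's default input resolution

        def segment_face(image: tf.Tensor, unet: tf.keras.Model) -> tf.Tensor:
            """Apply a UNet segmentation model and mask out the background."""
            mask = unet(tf.expand_dims(image, 0), training=False)[0]   # (H, W, 1) in [0, 1]
            return image * tf.cast(mask > 0.5, image.dtype)            # keep facial pixels only

        def build_classifier() -> tf.keras.Model:
            """EfficientNetB4 backbone with ImageNet weights and a small emotion head."""
            base = EfficientNetB4(include_top=False, weights="imagenet",
                                  input_shape=(*IMG_SIZE, 3))
            base.trainable = False                      # transfer learning: freeze backbone first
            inputs = layers.Input(shape=(*IMG_SIZE, 3))
            x = base(inputs, training=False)
            x = layers.GlobalAveragePooling2D()(x)
            x = layers.Dropout(0.3)(x)
            outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
            model = models.Model(inputs, outputs)
            model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                          loss="categorical_crossentropy",
                          metrics=["accuracy"])
            return model

        # Usage (assuming `unet`, `train_ds`, and `val_ds` already exist):
        # classifier = build_classifier()
        # classifier.fit(train_ds.map(lambda img, y: (segment_face(img, unet), y)),
        #                validation_data=val_ds, epochs=20)

    Freezing the backbone and training only the classification head is one common transfer-learning arrangement; fine-tuning upper backbone layers afterwards is another. Which the authors used is not specified in the abstract.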

    Keywords: facial emotion recognition, UNet, EfficientNet, transfer learning, image processing

    Received: 23 Aug 2024; Accepted: 18 Nov 2024.

    Copyright: © 2024 Umer, Xuanzhi, Hakeem, Mohaisen, Al-Hammadi, Alsubai, Innab and Ashraf. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence:
    Nisreen Innab, Department of Computer Science and Information Systems, College of Applied Sciences, University of Almaarefa, Dariyah, 71666, Riyadh, Saudi Arabia
    Imran Ashraf, Yeungnam University, Gyeongsan, 712-749, North Gyeongsang, Republic of Korea

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.