
EDITORIAL article

Front. Psychol., 31 May 2022
Sec. Emotion Science
This article is part of the Research Topic Facial Expression Recognition and Computing: An Interdisciplinary Perspective

Editorial: Facial Expression Recognition and Computing: An Interdisciplinary Perspective

  • 1State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
  • 2University of Chinese Academy of Sciences, Beijing, China
  • 3School of Electronic and Information Engineering, Southwest University, Chongqing, China
  • 4Université de Lyon, CNRS, École Centrale de Lyon, LIRIS UMR 5205, Lyon, France
  • 5Department of Electronic and Electrical Engineering, Brunel University London, London, United Kingdom
  • 6Department of Computing and Mathematics, Faculty of Science and Engineering, Manchester Metropolitan University, Manchester, United Kingdom
  • 7Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu, China
  • 8Division of Musculoskeletal and Dermatological Sciences, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester, United Kingdom

Through the configuration of facial muscles, facial expressions are assumed to reflect a person's internal feelings, emotions, motives, and needs (Ekman et al., 1972). Facial expression recognition plays a crucial role in social interaction and has been extensively studied in the fields of psychology and artificial intelligence (Russell, 1994; Corneanu et al., 2016; Wood et al., 2016; Liu et al., 2021). The overall goal of this Research Topic was to build bridges between human and machine recognition. On the one hand, we present the latest developments in facial expression recognition, aiming to further our understanding of the cognitive mechanisms by which humans process expressions. On the other hand, we collected studies that use artificial intelligence techniques to recognize these expressions.

The first common denominator of the collected papers is the attempt to report the latest developments in recognizing facial expressions. Balconi and Fronda provided a new perspective on inducing and recognizing facial expressions of emotion based on autobiographical memories rather than emotional movies or pictures. They proposed three steps for creating a database through the recall of past autobiographical events of emotional memory: the first step collects individual experiences through mnemonic recall, using semi-structured interviews about autobiographical events; the second step creates specific algorithms for encoding autobiographical memories; the third step encodes emotional experiences in a personalized linguistic way. In addition, Zhang et al. systematically explored the recognition process for emotional cartoon expressions (happy, sad, and neutral) and the influence of key facial features (mouth, eyes, and eyebrows) on emotion recognition. Qu et al. explored the relationship between facial expressions and time perception.

In addition, a good way to understand the cognitive mechanisms of facial expression processing is to compare patients with mental disorders to healthy adults. Ma, Guo, et al. investigated the relationship between facial expression recognition and cognitive ability in patients with depression. The results demonstrated that performance in facial expression recognition is related to the decline of cognitive function, especially for negative emotions. Another study, conducted by Ma, Zhao, et al., found that patients with unipolar depression (UD) performed worse in recognizing negative expressions, whilst patients with bipolar disorder (BD) were less accurate in recognizing positive expressions. Mo et al. explored confusion effects between depressive patients and healthy controls. Participants were asked to classify each facial expression in a two-alternative forced-choice paradigm. Results showed that depressive patients were more inclined to confuse negative emotions (i.e., anger and disgust) with other expressions. Hyniewska et al. examined differences in facial expression recognition between patients with borderline personality disorder (BPD) and healthy adults. Their results showed that the emotion recognition ability of BPD patients was as good as that of healthy individuals, except for contempt, which was recognized more accurately by BPD patients.

The second common denominator is the attempt to report the latest developments of artificial intelligence in this domain. Pereira et al. investigated whether existing emotion recognition technology could detect social signals in media interviews. Non-verbal signals including facial expressions, hand gestures, vocal behavior, and honest signals were captured. The interviews were divided into effective and poor communication exemplars according to ratings by trainers and neutral observers. A correlation-based feature selection method was employed to locate the best feature combination, and a Naive Bayes classifier produced the best recognition results. Rodríguez-Fuertes et al. analyzed the facial expressions of Spanish political candidates during elections using the algorithms provided by the AFFDEX platform. The basic emotions of each politician were identified and compared through facial expression analysis; the speech topics associated with these emotions were also identified, and differences between candidates in each emotion were investigated as well. Namba examined how feedback in the form of facial expressions affected learning tasks. The learning rate for facial expression feedback was lower than that for symbolic feedback, while no difference between the two conditions was found in deck selection or computational model parameters, and no correlation between task indicators and depression questionnaire scores was reported.
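The correlation-based feature selection plus Naive Bayes pipeline used by Pereira et al. can be sketched in a few lines. The following is an illustrative reconstruction only, not the authors' actual code: the merit score here is a simplified stand-in for the full CFS criterion (relevance to the label minus average redundancy with already-selected features), and the data are synthetic.

```python
import numpy as np

def cfs_rank(X, y, k):
    """Greedy CFS-style selection: prefer features that correlate with the
    label but not with features already selected (simplified merit score)."""
    n_feat = X.shape[1]
    # absolute Pearson correlation of each feature with the label (relevance)
    rel = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_feat)])
    selected = [int(np.argmax(rel))]
    while len(selected) < k:
        best, best_merit = None, -np.inf
        for j in range(n_feat):
            if j in selected:
                continue
            # average correlation with already-selected features (redundancy)
            red = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                           for s in selected])
            merit = rel[j] - red
            if merit > best_merit:
                best, best_merit = j, merit
        selected.append(best)
    return selected

class GaussianNB:
    """Minimal Gaussian Naive Bayes classifier."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.prior = np.array([np.mean(y == c) for c in self.classes])
        return self
    def predict(self, X):
        # log P(c) + sum_j log N(x_j | mu_cj, var_cj), maximized over classes
        ll = (np.log(self.prior)[:, None]
              - 0.5 * np.sum(np.log(2 * np.pi * self.var), axis=1)[:, None]
              - 0.5 * np.array([np.sum((X - m) ** 2 / v, axis=1)
                                for m, v in zip(self.mu, self.var)]))
        return self.classes[np.argmax(ll, axis=0)]

# Synthetic demo: 6 features, only the first two carry class information.
rng = np.random.default_rng(0)
X0 = rng.normal(0, 1, (50, 6))
X1 = rng.normal(0, 1, (50, 6))
X1[:, :2] += 2.0
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

feats = cfs_rank(X, y, 2)
clf = GaussianNB().fit(X[:, feats], y)
acc = np.mean(clf.predict(X[:, feats]) == y)
```

Restricting the classifier to the selected feature subset typically matches or beats training on all features, since the uninformative dimensions only add noise to the Gaussian likelihoods.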

Author Contributions

All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.

Funding

This work was funded by the National Natural Science Foundation of China (62061136001 and 32071055) and the National Social Science Foundation of China (19ZDA363).

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Corneanu, C. A., Simon, M. O., Cohn, J. F., and Guerrero, S. E. (2016). Survey on RGB, 3D, thermal, and multimodal approaches for facial expression recognition: history, trends, and affect-related applications. IEEE Trans. Pattern Anal. Mach. Intell. 38, 1548–1568. doi: 10.1109/TPAMI.2016.2515606


Ekman, P. M., Friesen, W., and Ellsworth, P. (1972). Emotion in the Human Face: Guidelines for Research and an Integration of Findings. New York, NY: Pergamon Press.


Liu, M., Liu, C. H., Zheng, S., Zhao, K., and Fu, X. (2021). Reexamining the neural network involved in perception of facial expression: a meta-analysis. Neurosci. Biobehav. Rev. 131, 179–191. doi: 10.1016/j.neubiorev.2021.09.024


Russell, J. A. (1994). Is there universal recognition of emotion from facial expression? A review of the cross-cultural studies. Psychol. Bull. 115, 102–141. doi: 10.1037/0033-2909.115.1.102


Wood, A., Rychlowska, M., Korb, S., and Niedenthal, P. (2016). Fashioning the face: sensorimotor simulation contributes to facial expression recognition. Trends Cogn. Sci. 20, 227–240. doi: 10.1016/j.tics.2015.12.010


Keywords: facial expression, recognition, emotion, machine recognition, artificial intelligence

Citation: Zhao K, Chen T, Chen L, Fu X, Meng H, Yap MH, Yuan J and Davison AK (2022) Editorial: Facial Expression Recognition and Computing: An Interdisciplinary Perspective. Front. Psychol. 13:940630. doi: 10.3389/fpsyg.2022.940630

Received: 10 May 2022; Accepted: 17 May 2022;
Published: 31 May 2022.

Edited and reviewed by: Florin Dolcos, University of Illinois at Urbana-Champaign, United States

Copyright © 2022 Zhao, Chen, Chen, Fu, Meng, Yap, Yuan and Davison. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Tong Chen, c_tong@swu.edu.cn; Xiaolan Fu, fuxl@psych.ac.cn