ORIGINAL RESEARCH article
Front. Digit. Health
Sec. Digital Mental Health
Volume 7 - 2025 | doi: 10.3389/fdgth.2025.1578917
This article is part of the Research Topic: Emotional Intelligence AI in Mental Health
Stress Can Be Detected During Emotion-Evoking Smartphone Use: A Pilot Study Using Machine Learning
Provisionally accepted
- 1 Lehrstuhl für Klinische Psychologie und Psychotherapie, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- 2 Machine Learning and Data Analytics Lab, Faculty of Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Bavaria, Germany
- 3 Translational Digital Health Group, Institute of AI for Health, Helmholtz Zentrum München - German Research Center for Environmental Health, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Bavaria, Germany
The detrimental consequences of stress highlight the need for precise stress detection, as this offers a window for timely intervention. However, both objective and subjective measurements suffer from validity limitations. Contactless sensing technologies using machine learning methods present a potential alternative and could be used to estimate stress from externally visible physiological changes, such as emotional facial expressions. Although previous studies classified stress from emotional expressions with accuracies of up to 88.32%, most relied on a classification approach and on data from contexts in which stress was induced. The primary aim of the present study was therefore to clarify whether stress can be detected from facial expressions of six basic emotions (anxiety, anger, disgust, sadness, joy, love) and relaxation using a prediction approach. To this end, we analyzed video recordings of facial emotional expressions from n = 69 participants in a secondary analysis of a dataset from an interventional study, exploring associations with stress (assessed by the PSS-10 and a self-constructed one-item stress measure). Comparing two regression machine learning models, Random Forest (RF) and XGBoost (XGB), we found that facial emotional expressions were promising indicators of subjective stress scores, with model fit best when data from all six emotional facial expressions were used to train the model (one-item stress measure: MSE (XGB) = 2.31, MAE (XGB) = 1.32, MSE (RF) = 3.86, MAE (RF) = 1.69; PSS-10: MSE (XGB) = 25.65, MAE (XGB) = 4.16, MSE (RF) = 26.32, MAE (RF) = 4.14). XGBoost proved more reliable for prediction, with lower error on both training and test data. The findings provide further evidence that noninvasive smartphone video recordings can complement standard objective and subjective markers of stress.
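As a rough illustration of the modeling setup the abstract describes, the sketch below compares a Random Forest and an XGBoost regressor using MSE and MAE on held-out data. The synthetic feature matrix, its dimensions, and all hyperparameters are assumptions for illustration only; they do not reflect the authors' actual features, preprocessing, or model configuration.

```python
# Minimal sketch (not the authors' pipeline): comparing Random Forest and
# XGBoost regression on synthetic stand-ins for facial-expression features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

rng = np.random.default_rng(42)
X = rng.normal(size=(69, 20))      # assumed: 69 participants x 20 expression features
y = rng.uniform(0, 10, size=69)    # assumed: a 0-10 one-item stress rating

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "Random Forest": RandomForestRegressor(n_estimators=200, random_state=42),
    "XGBoost": XGBRegressor(n_estimators=200, max_depth=3, random_state=42),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(f"{name}: MSE = {mean_squared_error(y_te, pred):.2f}, "
          f"MAE = {mean_absolute_error(y_te, pred):.2f}")
```

Reporting both training and test error, as the study does, is what allows the reliability comparison between the two models: a large train-test gap would indicate overfitting rather than genuine predictive fit.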
Keywords: stress, emotion, machine learning, emotion expression, automated stress recognition
Received: 18 Feb 2025; Accepted: 17 Apr 2025.
Copyright: © 2025 Rupp, Kumar, Sadeghi, Schindler-Gmelch, Keinert, Eskofier and Berking. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Lydia Helene Rupp, Lehrstuhl für Klinische Psychologie und Psychotherapie, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.