
ORIGINAL RESEARCH article

Front. Psychiatry
Sec. Computational Psychiatry
Volume 16 - 2025 | doi: 10.3389/fpsyt.2025.1508772

Diagnosis of depression based on facial multimodal data

Provisionally accepted
Nani Jin 1, Renjia Ye 1*, Peng Li 2*
  • 1 Shanghai University, Shanghai, China
  • 2 Third Xiangya Hospital, Central South University, Changsha, Hunan Province, China

The final, formatted version of the article will be published soon.

    Depression is a serious mental health disorder. Traditional scale-based diagnostic methods suffer from subjectivity and high misdiagnosis rates, so developing automatic diagnostic tools based on objective indicators is particularly important. This study proposes a deep learning method that fuses multimodal data to automatically diagnose depression from facial video and audio recordings. We use a spatiotemporal attention module to enhance the extraction of visual features, and combine a Graph Convolutional Network (GCN) with a Long Short-Term Memory (LSTM) network to analyze the audio features. Through multimodal feature fusion, the model can effectively capture the different feature patterns related to depression. We conduct extensive experiments on a publicly available clinical dataset, the Extended Distress Analysis Interview Corpus (E-DAIC). The experimental results show that our model achieves robust accuracy on the E-DAIC dataset, with a Mean Absolute Error (MAE) of 3.51 in estimating PHQ-8 scores from recorded interviews. Compared with existing methods, our model shows excellent performance in multimodal information fusion, making it suitable for early evaluation of depression.
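    The abstract describes a two-branch architecture: attention over visual features, GCN plus LSTM over audio features, and late fusion into a PHQ-8 regression head. The following is a minimal, illustrative PyTorch sketch of that pattern; all dimensions, the `SimpleGCNLayer` implementation, and the fusion strategy (mean-pooled attention output concatenated with the last LSTM hidden state) are our own assumptions, not the authors' published code.

```python
import torch
import torch.nn as nn


class SimpleGCNLayer(nn.Module):
    """One graph-convolution step, relu(A_hat @ X @ W), over audio feature nodes
    (a hypothetical stand-in for the paper's GCN branch)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (B, N, in_dim) node features; adj: (N, N) normalized adjacency.
        # torch.matmul broadcasts adj across the batch dimension.
        return torch.relu(self.lin(adj @ x))


class MultimodalDepressionNet(nn.Module):
    """Sketch of the described pipeline: temporal self-attention on video
    features, GCN + LSTM on audio features, then concatenation and a
    single-output regression head for the PHQ-8 score."""

    def __init__(self, vis_dim: int = 128, aud_dim: int = 64, hid: int = 64):
        super().__init__()
        self.temporal_attn = nn.MultiheadAttention(
            vis_dim, num_heads=4, batch_first=True
        )
        self.gcn = SimpleGCNLayer(aud_dim, hid)
        self.lstm = nn.LSTM(hid, hid, batch_first=True)
        self.head = nn.Linear(vis_dim + hid, 1)

    def forward(self, video_feats, audio_feats, adj):
        # video_feats: (B, T, vis_dim); attend over time, then mean-pool.
        v, _ = self.temporal_attn(video_feats, video_feats, video_feats)
        v = v.mean(dim=1)
        # audio_feats: (B, N, aud_dim); graph convolution, then LSTM over nodes.
        a = self.gcn(audio_feats, adj)
        _, (h, _) = self.lstm(a)
        # Late fusion: concatenate modality summaries, regress one score.
        fused = torch.cat([v, h[-1]], dim=-1)
        return self.head(fused).squeeze(-1)


model = MultimodalDepressionNet()
video = torch.randn(2, 16, 128)   # 2 clips, 16 frames of visual features
audio = torch.randn(2, 10, 64)    # 2 clips, 10 acoustic nodes
adj = torch.eye(10)               # trivial identity adjacency for the sketch
scores = model(video, audio, adj)  # shape (2,): one predicted score per clip
```

The identity adjacency above is a placeholder; in practice the graph structure over acoustic frames would come from the feature-extraction pipeline.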

    Keywords: depression, multimodal data, feature fusion, spatiotemporal attention, artificial intelligence

    Received: 13 Oct 2024; Accepted: 02 Jan 2025.

    Copyright: © 2025 Jin, Ye and Li. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence:
    Renjia Ye, Shanghai University, Shanghai, China
    Peng Li, Third Xiangya Hospital, Central South University, Changsha, 410013, Hunan Province, China

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.