ORIGINAL RESEARCH article
Front. Psychiatry
Sec. Computational Psychiatry
Volume 15 - 2024
doi: 10.3389/fpsyt.2024.1505024
This article is part of the Research Topic "Mental Health in the Age of Artificial Intelligence."
Artificial Intelligence Conversational Agents in Mental Health: Patients See Potential, But Prefer Humans in the Loop
Provisionally accepted
1 University of Massachusetts Medical School, Worcester, United States
2 Ieso Digital Health, Cambridge, United Kingdom
Background: Digital mental health interventions, such as artificial intelligence (AI) conversational agents, hold promise for improving access to care by innovating therapy and supporting its delivery. However, little research exists on patient perspectives regarding AI conversational agents, and understanding these perspectives is crucial for successful implementation. This study aimed to fill that gap by exploring patients' perceptions and acceptability of AI conversational agents in mental healthcare.
Methods: Adults with self-reported mild to moderate anxiety were recruited from the UMass Memorial Health system. Participants engaged in semi-structured interviews to discuss their experiences, perceptions, and acceptability of AI conversational agents in mental healthcare. Anxiety levels were assessed using the Generalized Anxiety Disorder scale. Data were collected from December 2022 to February 2023, and three researchers conducted rapid qualitative analysis to identify and synthesize themes.
Results: The sample included 29 adults (ages 19-66), predominantly under age 35, non-Hispanic, White, and female. Participants reported a range of positive and negative experiences with AI conversational agents. Most held positive attitudes towards AI conversational agents, appreciating their utility and potential to increase access to care, though some expressed cautious optimism. About half voiced negative opinions, citing AI's lack of empathy, technical limitations in addressing complex mental health situations, and data privacy concerns. Most participants desired some human involvement in AI-driven therapy and expressed concern about the risk of AI conversational agents being seen as replacements for therapy. A subgroup preferred AI conversational agents for administrative tasks rather than care provision.
Conclusions: AI conversational agents were perceived as useful and beneficial for increasing access to care, but concerns about AI's empathy, capabilities, safety, and the need for human involvement in mental healthcare were prevalent. Future implementation and integration of AI conversational agents should consider patient perspectives to enhance their acceptability and effectiveness.
Keywords: artificial intelligence, chatbots, conversational agents, patient perspectives, qualitative, mental health, anxiety, cognitive behavioral therapy
Received: 01 Oct 2024; Accepted: 26 Dec 2024.
Copyright: © 2024 Lee, Wright, Ferranto, Buttimer, Palmer, Welchman, Mazor, Fisher, Smelson, O'Connor, Fahey and Soni. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Apurv Soni, University of Massachusetts Medical School, Worcester, United States
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.