ORIGINAL RESEARCH article

Front. Digit. Health
Sec. Digital Mental Health
Volume 6 - 2024 | doi: 10.3389/fdgth.2024.1410758
This article is part of the Research Topic Sociotechnical Factors Impacting AI Integration into Mental Health Care.

Differing perspectives on artificial intelligence in mental healthcare among patients: a cross-sectional survey study

Provisionally accepted
  • 1 School of Nursing, Columbia University, New York City, New York, United States
  • 2 Department of Biomedical Informatics, Vagelos College of Physicians and Surgeons, Columbia University Irving Medical Center, New York City, New York, United States
  • 3 Department of Population Health Sciences, Weill Cornell Medicine, Cornell University, New York, New York, United States
  • 4 Department of Obstetrics and Gynecology, Weill Cornell Medicine, Cornell University, New York, New York, United States
  • 5 Department of Psychiatry, Weill Cornell Medicine, New York, United States

The final, formatted version of the article will be published soon.

    Artificial intelligence (AI) is being developed for mental healthcare, but patients' perspectives on its use are unknown. This study examined differences in attitudes toward the use of AI in mental healthcare by history of mental illness, current mental health status, demographic characteristics, and social determinants of health. We conducted a cross-sectional survey of an online sample of 500 adults, asking about general perspectives, comfort with AI, specific concerns, explainability and transparency, responsibility and trust, and the importance of relevant bioethical constructs. We found that multiple vulnerable subgroups perceive potential harms related to the use of AI in mental healthcare, place importance on upholding bioethical constructs, and would assign blame to or lose trust in multiple parties, including mental healthcare professionals, if AI produced harm or conflicting assessments. Future research examining strategies for ethical AI implementation and supporting clinician AI literacy is critical for optimal patient and clinician interactions with AI in mental healthcare.

    Keywords: artificial intelligence, mental health, patient engagement, bioethics, machine learning

    Received: 01 Apr 2024; Accepted: 14 Oct 2024.

    Copyright: © 2024 Turchioe, Desai, Harkins, Kim, Zhang, Joly, Pathak, Hermann and Benda. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence: Meghan Turchioe, School of Nursing, Columbia University, New York, NY 10032, United States

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.