AUTHOR=Elyoseph Zohar, Levkovich Inbar TITLE=Beyond human expertise: the promise and limitations of ChatGPT in suicide risk assessment JOURNAL=Frontiers in Psychiatry VOLUME=14 YEAR=2023 URL=https://www.frontiersin.org/journals/psychiatry/articles/10.3389/fpsyt.2023.1213141 DOI=10.3389/fpsyt.2023.1213141 ISSN=1664-0640 ABSTRACT=
ChatGPT, an artificial intelligence language model developed by OpenAI, holds potential for contributing to the field of mental health. Although ChatGPT shows theoretical promise, its clinical abilities in suicide prevention, a significant mental health concern, have yet to be demonstrated. To address this knowledge gap, this study compared ChatGPT’s assessments of mental health indicators to those of mental health professionals in a hypothetical case study focused on suicide risk assessment. Specifically, ChatGPT was asked to evaluate a text vignette describing a hypothetical patient with varying levels of perceived burdensomeness and thwarted belongingness, and its assessments were compared to the norms of mental health professionals. The results indicated that ChatGPT rated the risk of suicide attempts lower than the mental health professionals did in all conditions. Furthermore, ChatGPT rated mental resilience lower than the professional norms in most conditions. These results imply that gatekeepers, patients, or even mental health professionals who rely on ChatGPT to evaluate suicide risk, or who use it as a complementary tool to improve decision-making, may receive an inaccurate assessment that underestimates the actual suicide risk.