ORIGINAL RESEARCH article

Front. Psychiatry, 31 January 2025
Sec. Computational Psychiatry
This article is part of the Research Topic: Mental Health in the Age of Artificial Intelligence.

Artificial intelligence conversational agents in mental health: Patients see potential, but prefer humans in the loop

Hyein S. Lee1,2†, Colton Wright1†, Julia Ferranto1, Jessica Buttimer3, Clare E. Palmer3, Andrew Welchman3, Kathleen M. Mazor4, Kimberly A. Fisher4, David Smelson4, Laurel O’Connor1,5, Nisha Fahey1,6 and Apurv Soni1,2,4*
  • 1Program in Digital Medicine, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, United States
  • 2Department of Population and Quantitative Health Sciences, University of Massachusetts Chan Medical School, Worcester, MA, United States
  • 3Ieso Digital Health, Cambridge, United Kingdom
  • 4Division of Health System Science, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, United States
  • 5Department of Emergency Medicine, University of Massachusetts Chan Medical School, Worcester, MA, United States
  • 6Department of Pediatrics, University of Massachusetts Chan Medical School, Worcester, MA, United States

Background: Digital mental health interventions, such as artificial intelligence (AI) conversational agents, hold promise for improving access to care by innovating therapy and supporting delivery. However, little research exists on patient perspectives regarding AI conversational agents, which is crucial for their successful implementation. This study aimed to fill the gap by exploring patients’ perceptions and acceptability of AI conversational agents in mental healthcare.

Methods: Adults with self-reported mild to moderate anxiety were recruited from the UMass Memorial Health system. Participants engaged in semi-structured interviews to discuss their experiences, perceptions, and acceptability of AI conversational agents in mental healthcare. Anxiety levels were assessed using the Generalized Anxiety Disorder scale. Data were collected from December 2022 to February 2023, and three researchers conducted rapid qualitative analysis to identify and synthesize themes.

Results: The sample included 29 adults (ages 19-66), predominantly under age 35, non-Hispanic, White, and female. Participants reported a range of positive and negative experiences with AI conversational agents. Most held positive attitudes towards AI conversational agents, appreciating their utility and potential to increase access to care, though this optimism was often cautious. About half endorsed negative opinions, citing AI’s lack of empathy, technical limitations in addressing complex mental health situations, and data privacy concerns. Most participants desired some human involvement in AI-driven therapy and expressed concern about the risk of AI conversational agents being seen as replacements for therapy. A subgroup preferred AI conversational agents for administrative tasks rather than care provision.

Conclusions: AI conversational agents were perceived as useful and beneficial for increasing access to care, but concerns about AI’s empathy, capabilities, safety, and human involvement in mental healthcare were prevalent. Future implementation and integration of AI conversational agents should consider patient perspectives to enhance their acceptability and effectiveness.

1 Introduction

Mental illness affects over 57.8 million adults in the United States, accounting for more than 1 in 5 individuals (1). Despite the significant prevalence, many do not receive adequate care. Prior to the COVID-19 pandemic, only 41% of US adults diagnosed with anxiety, mood, or substance use disorders reported receiving treatment in the previous year (2–4). This treatment gap is largely attributed to a shortage of mental healthcare professionals, a persistent issue in the US healthcare system (5–8). Currently, more than 165 million people live in mental healthcare professional shortage areas in the US, with only 27.2% of mental health needs across all counties met by available psychiatrists (9). Untreated mental illness can lead to worsening symptoms, decreased quality of life, and higher risks of comorbid conditions.

The growing popularity of Artificial Intelligence (AI) and Machine Learning offers promising digital medicine solutions for mental health access (10, 11). However, there remain significant uncertainties regarding patient acceptability (12). While AI has been successfully implemented in various healthcare domains, its application in mental health care is still emerging and understudied (13). Previous studies have demonstrated mixed results concerning the efficacy of AI conversational agents in delivering therapeutic interventions, with some research indicating benefits in accessibility and patient engagement (14–17), while others highlight concerns about the lack of empathy, accuracy, and the ability to handle complex mental health issues (18–21). Moreover, there is a notable gap in understanding patients’ perspectives on the acceptability of AI conversational agents, particularly among those with anxiety disorders. Existing research primarily focuses on the technical capabilities and preliminary outcomes of AI applications, often neglecting the critical aspect of patient experiences and perceptions (22, 23). This gap is particularly concerning given the increasing integration of AI in mental health services. Therefore, it is essential to explore patient viewpoints to ensure that these digital tools are not only effective but also acceptable and trusted by the users they aim to serve.

To address these knowledge gaps, we conducted a qualitative study using semi-structured interviews with 29 adults with self-reported mild to moderate anxiety. Participants were recruited from the UMass Memorial Health system and engaged in discussions about their experiences and perceptions of AI conversational agents in mental health care. Through this approach, we aimed to capture a diverse range of patient experiences, perceptions, and perspectives on the acceptability of using AI conversational agents for mental health support. Our analysis focused on identifying key themes and sub-themes that reflect the nuanced views of patients regarding AI-driven mental health interventions.

2 Methods

Our study utilized data collected through semi-structured qualitative interviews conducted with adult patients from the UMass Memorial Health system. The data collection period spanned from December 2022 to February 2023. These interviews were designed to explore patient experiences and perceptions regarding the use of AI conversational agents in mental healthcare. This study was approved by the UMass Chan Medical School Institutional Review Board (Protocol #1340270).

2.1 Participant eligibility and recruitment

The study included 29 adult participants with self-reported experiences of mild to moderate anxiety. Eligibility criteria required participants to be at least 18 years old, have a self-reported diagnosis of anxiety, be able to read, write, and speak English, and have the capacity to provide informed consent. Exclusion criteria included visual impairment without access to assistive technology, a history of suicide attempts or psychosis, recent changes in psychotropic medication, acute psychosis, or posing a danger to self or others.

A HIPAA waiver was obtained to perform automated electronic medical record review and identify potentially eligible patients who received care at the UMass Memorial Health system. Eligible patients were contacted via recruitment emails, with additional recruitment through primary care clinicians and study flyers placed in clinic waiting rooms. Exclusion criteria were assessed both through chart review and initial screening phone interviews. Out of 784 potentially eligible individuals, 54 provided consent to contact, 37 provided written consent, and 29 completed the study procedures (Figure 1). Participants were compensated for their participation.


Figure 1. Participant consort diagram.
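As a rough illustration of the automated chart-screening step described above, the sketch below filters simplified patient records against the inclusion and exclusion criteria. The record fields, flags, and data are hypothetical and do not reflect the actual UMass Memorial EMR query or schema.

```python
from dataclasses import dataclass

@dataclass
class ChartRecord:
    # Hypothetical, simplified chart fields; not the study's actual EMR schema.
    patient_id: str
    age: int
    anxiety_diagnosis: bool
    reads_english: bool
    prior_suicide_attempt: bool
    history_of_psychosis: bool
    recent_psychotropic_change: bool

def passes_chart_screen(record: ChartRecord) -> bool:
    """First-pass automated screen; exclusions were also re-checked by phone in the study."""
    meets_inclusion = record.age >= 18 and record.anxiety_diagnosis and record.reads_english
    meets_exclusion = (
        record.prior_suicide_attempt
        or record.history_of_psychosis
        or record.recent_psychotropic_change
    )
    return meets_inclusion and not meets_exclusion

# Toy example: identify candidates who would receive a recruitment email.
records = [
    ChartRecord("A01", 34, True, True, False, False, False),
    ChartRecord("A02", 17, True, True, False, False, False),   # excluded: under 18
    ChartRecord("A03", 52, True, True, False, True, False),    # excluded: psychosis history
]
candidates = [r.patient_id for r in records if passes_chart_screen(r)]
print(candidates)  # ['A01']
```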

2.2 Procedure

Participants attended 90-minute study sessions, during which they completed demographic and anxiety symptom surveys and participated in a qualitative interview. Demographic data included age, gender, race, ethnicity, employment status, and education level. Anxiety levels were assessed using the Generalized Anxiety Disorder scale (GAD-7), a validated 7-item questionnaire with a 4-point Likert scale to measure anxiety severity over the past two weeks (24). Sum scores of 0-4 indicated minimal anxiety, 5-9 mild anxiety, 10-14 moderate anxiety, and 15-21 severe anxiety. The qualitative interviews were conducted remotely via Zoom by a user experience researcher and were supported by an in-person study coordinator. Interviews were recorded and transcribed, and observational notes were taken during each session. The interviews had three sequential components: 1) participants were asked questions about their past experiences with AI conversational agents and their perceptions of AI conversational agents in mental healthcare, 2) participants engaged with a mental health conversational agent app to ground and standardize their experiences, and 3) participants were asked additional general questions related to the acceptability of AI conversational agents in mental healthcare. This manuscript draws only on data collected from the first and third parts of the interview. The second part of the interview consisted of a text-based conversation with a prototype AI conversational agent using a tree-based dialogue system underpinned by natural language understanding. The prototype agent did not respond to user input with generative responses, and the performance of the prototype is not the focus of this manuscript.
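For readers unfamiliar with this architecture, the sketch below illustrates, in a purely hypothetical way, how a non-generative, tree-based dialogue turn works: a lightweight natural language understanding step classifies the user's message, and the agent replies only with pre-authored text from a fixed tree. The intents, keywords, and messages are invented for illustration and are not the prototype used in this study.

```python
# Minimal, hypothetical sketch of a scripted, tree-based dialogue turn: each reply
# is pre-authored text selected by a simple intent rule, never generated text.
# Intents, keywords, and messages are invented for illustration only.

DIALOGUE_TREE = {
    "greeting": {
        "message": "Hi, I'm a support companion. What's on your mind today?",
        "next": ["report_worry", "crisis"],
    },
    "report_worry": {
        "message": "It sounds like worry has been showing up for you. "
                   "Would you like to try a short breathing exercise?",
        "next": ["greeting", "crisis"],
    },
    "crisis": {
        # Scripted safety branch: escalate to a human, consistent with participants'
        # expectation that self-harm disclosures require human intervention.
        "message": "I can't help with this safely. Please contact 988 or local "
                   "emergency services; I am flagging this conversation for a clinician.",
        "next": [],
    },
}

def classify_intent(user_text: str) -> str:
    """Toy keyword-based stand-in for the natural language understanding step."""
    text = user_text.lower()
    if any(k in text for k in ("hurt myself", "suicide", "end my life")):
        return "crisis"
    if any(k in text for k in ("worry", "worried", "anxious", "nervous")):
        return "report_worry"
    return "greeting"

def respond(user_text: str) -> str:
    """Return the pre-scripted message for the classified intent."""
    return DIALOGUE_TREE[classify_intent(user_text)]["message"]

print(respond("I've been really anxious about work lately."))
```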

2.3 Data analysis

Data were analyzed using rapid qualitative analysis as informed by Hamilton (25, 26). A rapid qualitative analysis approach was used considering the resources available for this pilot study, and because it yields rigorous, structured, and actionable data in a shorter timeframe compared to traditional qualitative analysis methods (27–29). The benefit of rapid qualitative analysis is its ability to quickly identify insights (e.g., gaps in care, facilitators/barriers) and guide decision-making and implementation strategies for targeted healthcare issues (30–33). First, a summary template was created in which a domain was mapped onto each interview question. Second, the transcript data and observational notes for each participant were divided among three researchers, summarized into each domain, and organized into a matrix. Third, the three researchers reviewed one another’s summaries so that each transcript was reviewed twice (and at least once by a researcher who was not present at the interview), and summaries were combined. The three researchers met regularly with two senior researchers experienced in qualitative research to discuss similarities and differences in domain summaries, reduce duplication, synthesize themes and sub-themes within and across domains, and identify important quotes. Discrepancies across researchers were discussed and reconciled to reach consensus in data interpretation over regular team meetings. By combining independent analysis, triangulation through cross-reviewing with multiple researchers, and iterative discussion of thematic interpretations, we sought to reduce bias and promote rigor in our analysis. In this study, only data pertaining to past experiences, perceptions, and acceptability of digital mental health tools in mental healthcare were analyzed.
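As a schematic of the bookkeeping involved (not the study's actual template), the matrix can be thought of as a participants-by-domains table of short, neutral summaries that a second researcher then reviews against the transcript. The domain names and summary text below are placeholders.

```python
import csv

# Placeholder domains, each mapped one-to-one to an interview question.
DOMAINS = ["past_experiences", "perceptions", "acceptability"]

# Short, neutral summaries per participant per domain (placeholder text).
summaries = {
    "P01": {
        "past_experiences": "Customer-service chatbots felt impersonal and frustrating.",
        "perceptions": "Optimistic about access; wary of empathy gaps.",
        "acceptability": "Wants a clinician involved alongside the app.",
    },
    "P02": {
        "past_experiences": "Impressed by ChatGPT's conversational fluency.",
        "perceptions": "Concerned about data privacy and hacking.",
        "acceptability": "Acceptable for scheduling and initial screenings.",
    },
}

# Write the participants-by-domains matrix so a second researcher can review
# each row against the transcript before summaries are combined.
with open("rapid_analysis_matrix.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["participant"] + DOMAINS)
    for pid, row in summaries.items():
        writer.writerow([pid] + [row.get(domain, "") for domain in DOMAINS])
```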

3 Results

3.1 Participant characteristics

The sample consisted of 29 adults with a self-reported diagnosis of anxiety. The demographic characteristics of the participants are detailed in Table 1. Participants tended to be younger, with 34.5% aged 18-24, 31% aged 25-34, and 34.5% aged 35 and older. The majority of participants were non-Hispanic White (65.5%) and female (72.4%). Most participants were employed (65.5%) or in school (27.6%). Educational attainment varied, with 44.8% holding a high school diploma or some college education, 34.5% holding a bachelor’s degree, and 20.7% holding a master’s degree or higher. Anxiety levels were assessed using the GAD-7 scale, with 34.5% of participants screening for minimal anxiety, 37.9% for mild anxiety, 24.1% for moderate anxiety, and 3.4% for severe anxiety symptoms.


Table 1. Study participant characteristics (n=29).
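The GAD-7 severity bands reported above follow the standard sum-and-threshold rule described in the Methods (seven items, each scored 0-3). A minimal sketch of that categorization is below; the example item scores are illustrative, not participant data.

```python
def gad7_severity(item_scores: list[int]) -> str:
    """Map the seven GAD-7 item scores (each 0-3) to the severity bands used in Table 1."""
    if len(item_scores) != 7 or not all(0 <= s <= 3 for s in item_scores):
        raise ValueError("GAD-7 requires exactly seven item scores, each between 0 and 3.")
    total = sum(item_scores)  # possible range: 0-21
    if total <= 4:
        return "minimal"
    if total <= 9:
        return "mild"
    if total <= 14:
        return "moderate"
    return "severe"

# Illustrative item scores only (not participant data): total of 7 -> "mild".
print(gad7_severity([1, 2, 1, 1, 0, 1, 1]))
```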

3.2 Past experiences with AI conversational agents

Participants had diverse past experiences with AI conversational agents, primarily in retail and customer service contexts (Table 2). Positive experiences were often associated with newer technologies like ChatGPT, which impressed participants with its conversational capabilities. However, these positive experiences were tempered by an awareness of AI’s current limitations. Many participants cited negative experiences, including frustrations with conversational agents’ lack of personalization and empathy, and their inability to understand specific requests. As one participant noted, “Most of my experience with using chatbots has been kind of irritating … It always seems to be when you’re having a customer service problem or you need help with your bank account or this or that and all you ever want is to talk to a real person and you feel like you have to go through 400 chatbots before you can get an answer” (P24: Female, 28, Minimal Anxiety).


Table 2. Past experiences with AI conversational agents.

3.3 Perceptions of AI mental health conversational agents

When asked about their expectations of an app that “addresses mental health concerns via a conversation delivered by an AI powered agent,” participants expressed a range of perceptions, as reflected in Table 3.


Table 3. Positive and negative perceptions of AI conversational agents in mental healthcare.

3.3.1 Positive perceptions

A majority (n=21) reported positive but hesitant opinions, recognizing AI’s potential to increase care accessibility and efficiency, yet remaining skeptical about its current capabilities (Table 3). Participants noted the potential for increasing accessibility, especially for those who find it challenging to access traditional therapy. One participant remarked, “I think if it can be helpful, it’s a great additional tool. You know mental health care is pretty limited and not available to most people, so I think if it can be used to sort of make mental health care available with integrity to more people, I think that’s a great idea” (P14: Female, 65, Moderate Anxiety). Others appreciated the convenience and immediate availability of AI conversational agents, which could serve as a valuable supplement to traditional therapy.

3.3.2 Negative perceptions

Around half of participants (n=17) conveyed skepticism, doubt, or concerns about the capabilities and application of AI mental health conversational agents (Table 3).

3.3.2.1 Lack of empathy

Many participants doubted AI’s ability to provide empathetic and thoughtful responses, a critical component of effective mental health care. A participant expressed skepticism, stating, “I do feel that there needs to be some sort of human connection, or like intellect there because I don’t think AI will always get what somebody is feeling.” (P19: Female, 19, Mild Anxiety). A few individuals also emphasized that AI’s perceived lack of empathy may create higher barriers for older individuals to use AI conversational agents, especially when compounded with their general unfamiliarity with this emerging technology.

3.3.2.2 Technical limitations

Issues such as the conversational agents’ inability to understand complex mental health needs and generate appropriate responses were significant concerns. One participant shared, “Just concerned that the digital therapy might not understand what I’m totally feeling or may not respond in the way that I want them to respond” (P29: Female, 21, Moderate Anxiety). In this context, several were also skeptical of AI’s therapeutic potential for more severe mental health conditions and questioned AI’s ability to adequately navigate and address emergency situations, such as suicidal crises.

3.3.2.3 Data privacy concerns

Participants were worried about the security and confidentiality of their personal data. Fears of hacking were elevated due to the sensitive nature of data related to mental illness and vulnerable emotional states. For instance, one participant noted, “I mean there’s always the chance that the system could get hacked or something … I feel like if you were having super intense like anxious moments or depression moments that if it got leaked you wouldn’t feel good about that being leaked. If that makes sense” (P03: Female, 21, Mild Anxiety). Some participants associated data privacy with trust and specified that they were wary of their data being sold, shared, or monitored. One participant was hesitant to provide in-depth personal information, citing the potential for such data to be used in AI model training and to introduce bias and stereotyping if an AI conversational agent were to diagnose mental illnesses.

3.4 Acceptability of AI conversational agents in mental healthcare landscape

When asked “how do you feel about the use of Artificial Intelligence in mental healthcare?” participants discussed the acceptability of AI conversational agents within the mental healthcare landscape, as reflected in Table 4. Due to time constraints during the interviews, only 17 of the 29 participants were asked the acceptability questions reported in this section.


Table 4. Acceptability of AI conversational agent applications in mental healthcare.

3.4.1 Acceptable amount of human involvement in AI-driven therapy

A majority of participants (n=11) felt that implementing AI conversational agents without any human involvement was not acceptable and believed AI conversational agents should not replace therapy with mental health professionals.

3.4.1.1 Some amount of human involvement necessary

Many participants expressed that AI conversational agents could still be helpful but preferred them in combination with therapy led by a person (either in-person or over telehealth) rather than as a stand-alone service. One participant noted, “I think [accessing digital therapy in an app] is great as long as it’s like in conjunction with actual therapy. I don’t think it’s good to have it on its own. I think it needs to be in contrast with in-person” (P05: Female, 25, Mild Anxiety). This theme was consistent across interviews, indicating a preference for AI conversational agents to act as supplementary tools rather than replacements for human therapists.

3.4.1.2 Replacement of human therapists

Some participants were worried that AI conversational agents might be viewed as replacements for traditional therapy, which they believed could be dangerous. One participant mentioned, “[I have concerns with] people thinking that it’s the only type of mental health support that they need like whether they’re going to actual therapy or they should be going to actual therapy and they’re using this instead” (P19: Female, 19, Mild Anxiety).

3.4.1.3 No human involvement necessary except for emergencies

A minority of participants felt that no human involvement in AI conversational agents was acceptable, emphasizing the benefits of anonymity in decreasing barriers to starting therapy due to stigma or fear of judgment. One participant mentioned, “I feel good about it, because when you’re in person and discuss what are you feeling, sometimes you might not say how you feel because you feel like ‘oh, maybe that person is judging’. But this is more like you’re writing, and you don’t have a person there so this will make it more easier to share how you feel.” (P07: Female, 26, Mild Anxiety). Participants also often contextualized their acceptance within the current reality of limited access to mental health resources. As an important distinction, almost all participants expected human intervention to be absolutely necessary in an AI conversational agent app if a patient were to mention self-harm, suicidal ideation, or ideas of harming others.

3.4.2 More acceptable functions of AI conversational agents

Although many participants expressed hesitancy towards applying AI to higher-order tasks without human involvement, such as providing therapy or conversation, some felt that AI could be useful and more acceptable for more basic tasks. These tasks included renewing prescriptions, managing follow-up appointments, matching patients to healthcare providers, diagnosis support, administrative tasks (e.g., initial screenings), and baseline mental health interventions that fall short of delving into complex emotions. One participant mentioned, “I can’t imagine [AI tools] replacing a therapist. But I think in the more administrative tasks it could work … like initial screenings and, like I said, matching the patient to the correct health care provider.” (P24: Female, 28, Minimal Anxiety).

4 Discussion

This study explored the experiences, perceptions, and acceptability of AI conversational agents for mental health support among adults with self-reported mild to moderate anxiety. Our key findings revealed that while participants recognized the potential benefits of AI conversational agents in increasing accessibility to mental health care, they also expressed significant concerns regarding the potential lack of empathy, technical limitations, and data privacy. Most participants preferred AI conversational agents as supplementary tools rather than replacements for human therapists, emphasizing the need for some level of human involvement to ensure effective mental health care. These findings underscore the importance of addressing these concerns to enhance the acceptability and effectiveness of AI-driven mental health interventions.

The rapid diffusion of generative AI tools, demonstrated most prominently by the rise in popularity and use of tools like ChatGPT, Claude, and Gemini, has positioned AI conversational agents as promising tools for supplementing mental health care (34–36). Research has demonstrated that mental health conversational agents can increase engagement with therapeutic content and improve mental health symptoms (37–39). Many benefits of AI conversational agents identified in our study were similar to those found in previous studies, such as lowering barriers to care, improving access to therapeutic content, and alleviating the burden on current mental health professionals (14, 15, 17, 40). Multiple participants highlighted convenience and anonymity as key features of AI conversational agents that can lower barriers to care. These benefits may be even greater for those who avoid seeking care due to stigma, fear of judgment, or anxiety about interacting with real people. Previous studies have found that conversational agents can decrease stigma (41) and may increase the likelihood that patients disclose emotional and sensitive information compared to when interacting with other humans (42–44). The benefits of AI conversational agents may enhance equity of care for underserved groups, including rural, low-income, LGBTQ+, and racial/ethnic minority communities who already struggle to find affordable and culturally competent services (19, 45, 46).

Our analysis identified increased access to care as one of the main benefits of AI conversational agents. Participants often contextualized this benefit within the current mental health crisis, in which the need for care substantially outpaces the supply of available mental health workers (47, 48). Despite the number of psychiatrists entering the workforce increasing by 26.3% from 2016 to 2021 (49), it is estimated that 6,129 additional psychiatrists are still needed as of 2024 to alleviate the current national shortage (47). The widespread adoption of smartphones (50) makes AI conversational agents a feasible and scalable solution to bridge this care gap without requiring patients to wait for or travel to appointments. Our study’s findings support the potential of AI conversational agents to increase access to care, a crucial benefit amid the current mental health crisis. They also support a role for AI conversational agents in assisting mental health professionals by providing data-driven insights and personalized, supportive interactions, thereby alleviating their workload.

Despite the recognized benefits, participants in our study raised significant concerns about the perceived lack of empathy, technical reliability, and data privacy of AI conversational agents. These issues are well-documented in the literature (14, 15). Gerke et al. highlighted the ethical and legal challenges of AI-driven healthcare, focusing on data security, safety, and algorithmic fairness, all of which are especially critical in mental health care (51). Patients are more likely to engage with digital health tools when they feel their data is secure (40, 52, 53). Establishing transparent data handling practices and ensuring user trust are essential for successful AI conversational agent adoption. Concerns about AI’s ability to understand and respond to complex mental health needs were also prevalent, especially in the context of emergency mental health crises. In light of these concerns, along with worries about the lack of empathy and personalization in AI responses, participants felt that AI conversational agents should be used as supplementary tools and retain some level of human involvement. Consistent with previous research (40, 54), our findings highlight important implementation considerations around the acceptability of AI conversational agents as standalone mental health interventions. These gaps in current AI mental health applications emphasize the need for advanced algorithms to handle the nuances of mental health needs, improve empathy and personalization, and ensure data privacy (13). Addressing these technical and ethical concerns is crucial for enhancing the acceptability, efficacy, and adoption of AI conversational agents, building on previous research, and guiding future developments (51, 55). Our study provides a patient-centered perspective, highlighting real-world concerns and expectations. Additionally, it presents a roadmap for the evolution of technologies in this space as they build on the early successes of tree-based dialogue systems and move toward agents that incorporate large language models.

4.1 Limitations

While our study provides valuable insights into the perceptions and acceptability of AI conversational agents for mental health support among adults with mild to moderate anxiety, there are several limitations to consider.

First, the sample comprised predominantly young, non-Hispanic White women, which may limit the generalizability of the findings. The limited diversity of our sample may have affected our results, as men and younger people have been shown to be more open to AI technologies in healthcare (56, 57) and more likely to have used AI conversational agents (58, 59), though these findings have been inconsistent across studies (60–63). This selection bias could mean that the experiences and perceptions of other demographic groups were not adequately represented. To mitigate this limitation, future studies should aim to recruit a more diverse sample to enhance the comprehensiveness of the results. Similarly, the study’s setting in a single healthcare system may limit the applicability of the findings to other contexts. The specific characteristics of the UMass Memorial Health system and its patient population may not reflect those of other healthcare systems. Considering the growing concerns around potential biases of AI algorithms (64), lack of diversity in training datasets (65), and inequitable access to care (66), future studies with diverse groups of patients and across different healthcare settings and geographic locations are warranted.

Second, the rapidly evolving nature of AI technology means that the capabilities and limitations of AI conversational agents are continuously changing. The findings of this study are based on the state of AI technology from December 2022 to February 2023 and may not fully capture perceptions of technological advances since then. The timing of our study coincided with the rise of AI conversational agent use among the general public, most notably following the release of ChatGPT by OpenAI on November 30, 2022 (67). Participants’ perceptions are likely to evolve rapidly as they experience or learn about new capabilities of AI conversational agents. Previous studies have shown that prior knowledge of or familiarity with AI in healthcare can have positive moderating effects on perceptions of AI in healthcare (68–70). However, we did not formally assess participants’ knowledge of or familiarity with AI conversational agents or ChatGPT, which may have introduced unmeasured bias and variability in participants’ perceptions of AI conversational agents. It is important for ongoing research to continuously evaluate and update the understanding of AI conversational agent applications in mental health care as the technology evolves.

Despite these limitations, our study provides a foundational understanding of patient perceptions and acceptability of AI conversational agents in mental health care. By recognizing and addressing these limitations, future research can build on our findings and contribute to the development of more effective and acceptable AI-driven mental health interventions.

4.2 Implications

The findings of our study have practical implications for patients, providers, payers, and policymakers in the healthcare ecosystem. AI conversational agents can bridge gaps in mental health care access, especially for medically underserved populations. By improving accessibility and reducing stigma, AI conversational agents provide a convenient, anonymous platform for individuals hesitant to seek traditional therapy, encouraging proactive mental health management.

For providers, AI conversational agents can extend reach and efficiency by handling routine inquiries and initial support, allowing clinicians to focus on more complex cases. This approach can lead to better resource allocation and improved patient outcomes. Payers benefit from the cost-effectiveness of AI conversational agents, as they reduce the burden on mental health professionals and enable early intervention, lowering overall healthcare costs. Insurance companies should consider covering AI-driven mental health services to promote wider adoption.

Policymakers play a critical role in regulating AI use in mental health care. Our study highlights the need for robust data privacy and security standards to protect patient information. Policymakers should develop regulations ensuring the ethical use of AI technologies, addressing data privacy, algorithmic bias, and transparency. Policies supporting research and development in AI mental health applications can drive innovation and improve efficacy and acceptability.

The integration of AI conversational agents in mental health care has the potential to transform the landscape of mental health services. By addressing the limitations identified in our study and leveraging AI’s strengths, stakeholders can create a more accessible, efficient, and effective mental health care system. The future of mental health care depends on collaborative efforts of patients, providers, payers, and policymakers to harness AI’s power while ensuring ethical and patient-centered practices.

Data availability statement

The datasets presented in this article are not readily available because they consist of qualitative interviews. The data are not publicly available to protect the identities of participants. Further inquiries can be directed to the corresponding author. Requests to access the datasets should be directed to Apurv Soni, apurv.soni@umassmed.edu.

Ethics statement

The studies involving humans were approved by UMass Chan Medical School Institutional Review Board. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.

Author contributions

HL: Formal analysis, Writing – original draft, Writing – review & editing. CW: Data curation, Formal analysis, Writing – review & editing, Project administration. JF: Data curation, Formal analysis, Writing – review & editing, Project administration. JB: Data curation, Writing – review & editing. CP: Conceptualization, Funding acquisition, Methodology, Writing – review & editing. AW: Conceptualization, Funding acquisition, Methodology, Writing – review & editing. KM: Writing – review & editing, Conceptualization, Methodology. KF: Writing – review & editing, Conceptualization, Methodology. DS: Conceptualization, Writing – review & editing, Methodology. LO: Conceptualization, Methodology, Writing – review & editing. NF: Conceptualization, Funding acquisition, Methodology, Supervision, Writing – review & editing. AS: Conceptualization, Funding acquisition, Methodology, Supervision, Writing – original draft, Writing – review & editing.

Funding

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. Some of the research costs were subsidized by the Ieso Digital Health Branch in the United States. The funder was not involved in the study design, collection, analysis, or interpretation of data, the writing of this article, or the decision to submit it for publication. Ieso Digital Health supported data collection for the study, but the conceptualization and analysis were performed by the UMass Program in Digital Medicine.

Acknowledgments

We are grateful to all our study participants who provided their time and thoughtful perspectives. We would also like to thank ieso Digital Health for partnering with the Program in Digital Medicine and supporting this study.

Conflict of interest

Investigators JB, CP, and AW are employees of ieso Digital Health Limited or its subsidiaries.

The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declare that no Generative AI was used in the creation of this manuscript.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

1. Substance Abuse and Mental Health Services Administration. Key Substance Use and Mental Health Indicators in the United States: Results from the 2021 National Survey on Drug Use and Health. Rockville, Maryland: Center for Behavioral Health Statistics and Quality, Substance Abuse and Mental Health Services Administration (2022). Available at: https://www.samhsa.gov/data/report/2021-nsduh-annual-national-report (Accessed January 19, 2024).

Google Scholar

2. Evans-Lacko S, Aguilar-Gaxiola S, Al-Hamzawi A, Alonso J, Benjet C, Bruffaerts R, et al. Socio-economic variations in the mental health treatment gap for people with anxiety, mood, and substance use disorders: results from the WHO World Mental Health (WMH) surveys. Psychol Med. (2018) 48:1560–71. doi: 10.1017/S0033291717003336

PubMed Abstract | Crossref Full Text | Google Scholar

3. Wang PS, Lane M, Olfson M, Pincus HA, Wells KB, Kessler RC. Twelve-month use of mental health services in the United States: results from the National Comorbidity Survey Replication. Arch Gen Psychiatry. (2005) 62:629–40. doi: 10.1001/archpsyc.62.6.629

PubMed Abstract | Crossref Full Text | Google Scholar

4. Alonso J, Liu Z, Evans-Lacko S, Sadikova E, Sampson N, Chatterji S, et al. Treatment gap for anxiety disorders is global: Results of the World Mental Health Surveys in 21 countries. Depress Anxiety. (2018) 35:195–208. doi: 10.1002/da.2018.35.issue-3

Crossref Full Text | Google Scholar

5. Rhodes KV, Vieth TL, Kushner H, Levy H, Asplin BR. Referral without access: for psychiatric services, wait for the beep. Ann Emerg Med. (2009) 54:272–8. doi: 10.1016/j.annemergmed.2008.08.023

PubMed Abstract | Crossref Full Text | Google Scholar

6. Malowney M, Keltz S, Fischer D, Boyd JW. Availability of outpatient care from psychiatrists: a simulated-patient study in three U.S. cities. Psychiatr Serv Wash DC. (2015) 66:94–6. doi: 10.1176/appi.ps.201400051

PubMed Abstract | Crossref Full Text | Google Scholar

7. Thomas KC, Ellis AR, Konrad TR, Holzer CE, Morrissey JP. County-level estimates of mental health professional shortage in the United States. Psychiatr Serv Wash DC. (2009) 60:1323–8. doi: 10.1176/ps.2009.60.10.1323

PubMed Abstract | Crossref Full Text | Google Scholar

8. Cunningham PJ. Beyond parity: primary care physicians’ perspectives on access to mental health care. Health Aff Proj Hope. (2009) 28:w490–501. doi: 10.1377/hlthaff.28.3.w490

PubMed Abstract | Crossref Full Text | Google Scholar

9. Bureau of Health Workforce, Health Resources and Services Administration (HRSA), U.S. Department of Health & Human Services. Designated Health Professional Shortage Areas Statistics (2023). Available online at: https://data.hrsa.gov/topics/health-workforce/shortage-areas (Accessed January 18, 2024).

Google Scholar

10. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. (2019) 25:44–56. doi: 10.1038/s41591-018-0300-7

PubMed Abstract | Crossref Full Text | Google Scholar

11. Kim J, Leonte KG, Chen ML, Torous JB, Linos E, Pinto A, et al. Large language models outperform mental and medical health care professionals in identifying obsessive-compulsive disorder. NPJ Digit Med. (2024) 7:193. doi: 10.1038/s41746-024-01181-x

PubMed Abstract | Crossref Full Text | Google Scholar

12. Stade EC, Stirman SW, Ungar LH, Boland CL, Schwartz HA, Yaden DB, et al. Large language models could change the future of behavioral healthcare: a proposal for responsible development and evaluation. NPJ Ment Health Res. (2024) 3:12. doi: 10.1038/s44184-024-00056-z

PubMed Abstract | Crossref Full Text | Google Scholar

13. Balan R, Dobrean A, Poetar CR. Use of automated conversational agents in improving young population mental health: a scoping review. NPJ Digit Med. (2024) 7:75. doi: 10.1038/s41746-024-01072-1

PubMed Abstract | Crossref Full Text | Google Scholar

14. Boucher EM, Harake NR, Ward HE, Stoeckl SE, Vargas J, Minkel J, et al. Artificially intelligent chatbots in digital mental health interventions: a review. Expert Rev Med Devices. (2021) 18:37–49. doi: 10.1080/17434440.2021.2013200

PubMed Abstract | Crossref Full Text | Google Scholar

15. Abd-Alrazaq AA, Alajlani M, Ali N, Denecke K, Bewick BM, Househ M. Perceptions and opinions of patients about mental health chatbots: scoping review. J Med Internet Res. (2021) 23:e17828. doi: 10.2196/17828

PubMed Abstract | Crossref Full Text | Google Scholar

16. Ahmed A, Ali N, Aziz S, Abd-alrazaq AA, Hassan A, Khalifa M, et al. A review of mobile chatbot apps for anxiety and depression and their self-care features. Comput Methods Programs BioMed Update. (2021) 1:100012. doi: 10.1016/j.cmpbup.2021.100012

Crossref Full Text | Google Scholar

17. Vaidyam AN, Wisniewski H, Halamka JD, Kashavan MS, Torous JB. Chatbots and conversational agents in mental health: A review of the psychiatric landscape. Can J Psychiatry. (2019) 64:456–64. doi: 10.1177/0706743719828977

PubMed Abstract | Crossref Full Text | Google Scholar

18. Khawaja Z, Bélisle-Pipon JC. Your robot therapist is not your therapist: understanding the role of AI-powered mental health chatbots. Front Digit Health. (2023) 5:1278186. doi: 10.3389/fdgth.2023.1278186

PubMed Abstract | Crossref Full Text | Google Scholar

19. Fiske A, Henningsen P, Buyx A. Your robot therapist will see you now: ethical implications of embodied artificial intelligence in psychiatry, psychology, and psychotherapy. J Med Internet Res. (2019) 21:e13216. doi: 10.2196/13216

PubMed Abstract | Crossref Full Text | Google Scholar

20. Coghlan S, Leins K, Sheldrick S, Cheong M, Gooding P, D’Alfonso S. To chat or bot to chat: Ethical issues with using chatbots in mental health. Digit Health. (2023) 9:20552076231183542. doi: 10.1177/20552076231183542

PubMed Abstract | Crossref Full Text | Google Scholar

21. Carr S. ‘AI gone mental’: engagement and ethics in data-driven technology for mental health. J Ment Health. (2020) 29:125–30. doi: 10.1080/09638237.2020.1714011

PubMed Abstract | Crossref Full Text | Google Scholar

22. Abd-Alrazaq AA, Rababeh A, Alajlani M, Bewick BM, Househ M. Effectiveness and safety of using chatbots to improve mental health: systematic review and meta-analysis. J Med Internet Res. (2020) 22:e16021. doi: 10.2196/16021

PubMed Abstract | Crossref Full Text | Google Scholar

23. Aggarwal A, Tam CC, Wu D, Li X, Qiao S. Artificial intelligence-based chatbots for promoting health behavioral changes: systematic review. J Med Internet Res. (2023) 25:e40789. doi: 10.2196/40789

PubMed Abstract | Crossref Full Text | Google Scholar

24. Spitzer RL, Kroenke K, Williams JBW, Löwe B. A brief measure for assessing generalized anxiety disorder: the GAD-7. Arch Intern Med. (2006) 166:1092–7. doi: 10.1001/archinte.166.10.1092

PubMed Abstract | Crossref Full Text | Google Scholar

25. Hamilton AB. Qualitative methods in rapid turn-around health services research. In: VA HSR&D National Cyberseminar Series: Spotlight on Women’s Health (2013) Washington DC, USA: U.S. Department of Veterans Affairs. Available at: https://www.hsrd.research.va.gov/for_researchers/cyber_seminars/archives/video_archive.cfm?SessionID=780 (Accessed December 23, 2023).

Google Scholar

26. Hamilton AB, Finley EP. Qualitative methods in implementation research: An introduction. Psychiatry Res. (2019) 280:112516. doi: 10.1016/j.psychres.2019.112516

PubMed Abstract | Crossref Full Text | Google Scholar

27. Gale RC, Wu J, Erhardt T, Bounthavong M, Reardon CM, Damschroder LJ, et al. Comparison of rapid vs in-depth qualitative analytic methods from a process evaluation of academic detailing in the Veterans Health Administration. Implement Sci IS. (2019) 14:11. doi: 10.1186/s13012-019-0853-y

PubMed Abstract | Crossref Full Text | Google Scholar

28. Taylor B, Henshall C, Kenyon S, Litchfield I, Greenfield S. Can rapid approaches to qualitative analysis deliver timely, valid findings to clinical leaders? A mixed methods study comparing rapid and thematic analysis. BMJ Open. (2018) 8:e019993. doi: 10.1136/bmjopen-2017-019993

PubMed Abstract | Crossref Full Text | Google Scholar

29. Nevedal AL, Reardon CM, Opra Widerquist MA, Jackson GL, Cutrona SL, White BS, et al. Rapid versus traditional qualitative analysis using the Consolidated Framework for Implementation Research (CFIR). Implement Sci. (2021) 16:67. doi: 10.1186/s13012-021-01111-5

PubMed Abstract | Crossref Full Text | Google Scholar

30. Vindrola-Padros C, Johnson GA. Rapid techniques in qualitative research: A critical review of the literature. Qual Health Res. (2020) 30:1596–604. doi: 10.1177/1049732320921835

PubMed Abstract | Crossref Full Text | Google Scholar

31. Hamilton AB, Cohen AN, Glover DL, Whelan F, Chemerinski E, McNagny KP, et al. Implementation of evidence-based employment services in specialty mental health. Health Serv Res. (2013) 48:2224–44. doi: 10.1111/hesr.2013.48.issue-6pt2

Crossref Full Text | Google Scholar

32. Lewinski AA, Crowley MJ, Miller C, Bosworth HB, Jackson GL, Steinhauser K, et al. Applied rapid qualitative analysis to develop a contextually appropriate intervention and increase the likelihood of uptake. Med Care. (2021) 59:S242–51. doi: 10.1097/MLR.0000000000001553

PubMed Abstract | Crossref Full Text | Google Scholar

33. St. George SM, Harkness AR, Rodriguez-Diaz CE, Weinstein ER, Pavia V, Hamilton AB. Applying rapid qualitative analysis for health equity: lessons learned using “EARS” With latino communities. Int J Qual Methods. (2023) 22:160940692311649. doi: 10.1177/16094069231164938

PubMed Abstract | Crossref Full Text | Google Scholar

34. Cheng S, Chang C, Chang W, Wang H, Liang C, Kishimoto T, et al. The now and future of ChatGPT and GPT in psychiatry. Psychiatry Clin Neurosci. (2023) 77:592–6. doi: 10.1111/pcn.v77.11

PubMed Abstract | Crossref Full Text | Google Scholar

35. D’Alfonso S. AI in mental health. Curr Opin Psychol. (2020) 36:112–7. doi: 10.1016/j.copsyc.2020.04.005

PubMed Abstract | Crossref Full Text | Google Scholar

36. Torous J, Bucci S, Bell IH, Kessing LV, Faurholt-Jepsen M, Whelan P, et al. The growing field of digital psychiatry: current evidence and the future of apps, social media, chatbots, and virtual reality. World Psychiatry. (2021) 20:318–35. doi: 10.1002/wps.20883

PubMed Abstract | Crossref Full Text | Google Scholar

37. Fitzpatrick KK, Darcy A, Vierhile M. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): a randomized controlled trial. JMIR Ment Health. (2017) 4:e19. doi: 10.2196/mental.7785

PubMed Abstract | Crossref Full Text | Google Scholar

38. Beatty C, Malik T, Meheli S, Sinha C. Evaluating the therapeutic alliance with a free-text CBT conversational agent (Wysa): a mixed-methods study. Front Digit Health. (2022) 4:847991. doi: 10.3389/fdgth.2022.847991

PubMed Abstract | Crossref Full Text | Google Scholar

39. Fulmer R, Joerin A, Gentile B, Lakerink L, Rauws M. Using psychological artificial intelligence (Tess) to relieve symptoms of depression and anxiety: Randomized controlled trial. JMIR Ment Health. (2018) 5:e64. doi: 10.2196/mental.9782

PubMed Abstract | Crossref Full Text | Google Scholar

40. Nadarzynski T, Miles O, Cowie A, Ridge D. Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: A mixed-methods study. Digit Health. (2019) 5:205520761987180. doi: 10.1177/2055207619871808

PubMed Abstract | Crossref Full Text | Google Scholar

41. Kim T, Ruensuk M, Hong H. (2020). In helping a vulnerable bot, you help yourself: designing a social bot as a care-receiver to promote mental health and reduce stigma, in: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, New York, NY, USA: Association for Computing Machinery. pp. 1–13. doi: 10.1145/3313831.3376743

Crossref Full Text | Google Scholar

42. Lucas GM, Gratch J, King A, Morency LP. It’s only a computer: Virtual humans increase willingness to disclose. Comput Hum Behav. (2014) 37:94–100. doi: 10.1016/j.chb.2014.04.043

Crossref Full Text | Google Scholar

43. Lucas GM, Rizzo A, Gratch J, Scherer S, Stratou G, Boberg J, et al. Reporting mental health symptoms: breaking down barriers to care with virtual human interviewers. Front Robot AI. (2017) 4:51. doi: 10.3389/frobt.2017.00051

Crossref Full Text | Google Scholar

44. Ho A, Hancock J, Miner AS. Psychological, relational, and emotional effects of self-disclosure after conversations with a chatbot. J Commun. (2018) 68:712–33. doi: 10.1093/joc/jqy026

PubMed Abstract | Crossref Full Text | Google Scholar

45. Sun CF, Correll CU, Trestman RL, Lin Y, Xie H, Hankey MS, et al. Low availability, long wait times, and high geographic disparity of psychiatric outpatient care in the US. Gen Hosp Psychiatry. (2023) 84:12–7. doi: 10.1016/j.genhosppsych.2023.05.012

PubMed Abstract | Crossref Full Text | Google Scholar

46. Parenteau AM, Boyer CJ, Campos LJ, Carranza AF, Deer LK, Hartman DT, et al. A review of mental health disparities during COVID-19: Evidence, mechanisms, and policy recommendations for promoting societal resilience. Dev Psychopathol. (2023) 35:1821–42. doi: 10.1017/S0954579422000499

PubMed Abstract | Crossref Full Text | Google Scholar

47. Kaiser Family Foundation. Mental Health Care Health Professional Shortage Areas (HPSAs). State Health Facts (2023). Available online at: https://www.kff.org/other/state-indicator/mental-health-care-health-professional-shortage-areas-hpsas/?currentTimeframe=0&sortModel=%7B%22colId%22:%22Location%22,%22sort%22:%22asc%22%7D (Accessed January 18, 2024).

Google Scholar

48. American Psychological Association. 2022 COVID-19 Practitioner Impact Survey (2022). Available online at: https://www.apa.org/pubs/reports/practitioner/2022-covid-psychologist-workload.pdf (Accessed January 19, 2024).

Google Scholar

49. Association of American Medical Colleges. 2022 Physician Specialty Data Report Executive Summary (2023). Available online at: https://www.aamc.org/data-reports/data/2022-physician-specialty-data-report-executive-summary (Accessed August 29, 2024).

Google Scholar

50. Steinhubl SR, Muse ED, Topol EJ. The emerging field of mobile health. Sci Transl Med. (2015) 7:284rv3. doi: 10.1126/scitranslmed.aaa3487

PubMed Abstract | Crossref Full Text | Google Scholar

51. Gerke S, Minssen T, Cohen G. Ethical and legal challenges of artificial intelligence-driven healthcare. In: Artificial Intelligence in Healthcare. Amsterdam, Netherlands: Elsevier (2020). p. 295–336. Available at: https://linkinghub.elsevier.com/retrieve/pii/B9780128184387000125 (Accessed August 29, 2024).

Google Scholar

52. Kretzschmar K, Tyroll H, Pavarini G, Manzini A, Singh I, NeurOx Young People’s Advisory Group. Can your phone be your therapist? Young people’s ethical perspectives on the use of fully automated conversational agents (Chatbots) in mental health support. BioMed Inform Insights. (2019) 11:117822261982908. doi: 10.1177/1178222619829083

PubMed Abstract | Crossref Full Text | Google Scholar

53. Madanian S, Nakarada-Kordic I, Reay S, Chetty T. Patients’ perspectives on digital health tools. PEC Innov. (2023) 2:100171. doi: 10.1016/j.pecinn.2023.100171

PubMed Abstract | Crossref Full Text | Google Scholar

54. Chew HSJ, Achananuparp P. Perceptions and needs of artificial intelligence in health care to increase adoption: scoping review. J Med Internet Res. (2022) 24:e32939. doi: 10.2196/32939

PubMed Abstract | Crossref Full Text | Google Scholar

55. Sterling WA, Sobolev M, Van Meter A, Guinart D, Birnbaum ML, Rubio JM, et al. Digital technology in psychiatry: survey study of clinicians. JMIR Form Res. (2022) 6:e33676. doi: 10.2196/33676

PubMed Abstract | Crossref Full Text | Google Scholar

56. Antes AL, Burrous S, Sisk BA, Schuelke MJ, Keune JD, DuBois JM. Exploring perceptions of healthcare technologies enabled by artificial intelligence: an online, scenario-based survey. BMC Med Inform Decis Mak. (2021) 21:221. doi: 10.1186/s12911-021-01586-8

PubMed Abstract | Crossref Full Text | Google Scholar

57. Zhang Z, Genc Y, Xing A, Wang D, Fan X, Citardi D. Lay individuals’ perceptions of artificial intelligence (AI)-empowered healthcare systems. Proc Assoc Inf Sci Technol. (2020) 57:e326. doi: 10.1002/pra2.v57.1

Crossref Full Text | Google Scholar

58. Draxler F, Buschek D, Tavast M, Hämäläinen P, Schmidt A, Kulshrestha J, et al. Gender, Age, and Technology Education Influence the Adoption and Appropriation of LLMs (2023). Available online at: https://arxiv.org/abs/2310.06556 (Accessed January 29, 2024).

Google Scholar

59. Tartaglia J, Jaghab BI, Ismail M, Hänsel K, Meter AV, Kirschenbaum M, et al. Assessing health technology literacy and attitudes of patients in an urban outpatient psychiatry clinic: cross-sectional survey study. JMIR Ment Health. (2024) 11:e63034. doi: 10.2196/63034

PubMed Abstract | Crossref Full Text | Google Scholar

60. Cinalioglu K, Elbaz S, Sekhon K, Su CL, Rej S, Sekhon H. Exploring differential perceptions of artificial intelligence in health care among younger versus older canadians: results from the 2021 canadian digital health survey. J Med Internet Res. (2023) 25:e38169. doi: 10.2196/38169

PubMed Abstract | Crossref Full Text | Google Scholar

61. Chalutz Ben-Gal H. Artificial intelligence (AI) acceptance in primary care during the coronavirus pandemic: What is the role of patients’ gender, age and health awareness? A two-phase pilot study. Front Public Health. (2023) 10:931225. doi: 10.3389/fpubh.2022.931225

PubMed Abstract | Crossref Full Text | Google Scholar

62. Lambert SI, Madi M, Sopka S, Lenes A, Stange H, Buszello CP, et al. An integrative review on the acceptance of artificial intelligence among healthcare professionals in hospitals. NPJ Digit Med. (2023) 6:111. doi: 10.1038/s41746-023-00852-5

PubMed Abstract | Crossref Full Text | Google Scholar

63. Wutz M, Hermes M, Winter V, Köberlein-Neu J. Factors influencing the acceptability, acceptance, and adoption of conversational agents in health care: integrative review. J Med Internet Res. (2023) 25:e46548. doi: 10.2196/46548

PubMed Abstract | Crossref Full Text | Google Scholar

64. Liu C. Addressing bias and inclusivity in AI-driven mental health care. Psychiatr News. (2024) 59. doi: 10.1176/appi.pn.2024.10.10.21

Crossref Full Text | Google Scholar

65. Timmons AC, Duong JB, Simo Fiallo N, Lee T, Vo HPQ, Ahle MW, et al. A call to action on assessing and mitigating bias in artificial intelligence applications for mental health. Perspect Psychol Sci. (2023) 18:1062–96. doi: 10.1177/17456916221134490

PubMed Abstract | Crossref Full Text | Google Scholar

66. Robinson A, Flom M, Forman-Hoffman VL, Histon T, Levy M, Darcy A, et al. Equity in digital mental health interventions in the United States: where to next? J Med Internet Res. (2024) 26:e59939. doi: 10.2196/59939

PubMed Abstract | Crossref Full Text | Google Scholar

67. Mollick E. ChatGPT Is a Tipping Point for AI. Harvard Business Review (2022). Available online at: https://hbr.org/2022/12/chatgpt-is-a-tipping-point-for-ai (Accessed January 29, 2024).

Google Scholar

68. Esmaeilzadeh P. Use of AI-based tools for healthcare purposes: a survey study from consumers’ perspectives. BMC Med Inform Decis Mak. (2020) 20:170. doi: 10.1186/s12911-020-01191-1

PubMed Abstract | Crossref Full Text | Google Scholar

69. Catalina QM, Fuster-Casanovas A, Vidal-Alaball J, Escalé-Besa A, Marin-Gomez FX, Femenia J, et al. Knowledge and perception of primary care healthcare professionals on the use of artificial intelligence as a healthcare tool. Digit Health. (2023) 9:20552076231180511. doi: 10.1177/20552076231180511

PubMed Abstract | Crossref Full Text | Google Scholar

70. Kerstan S, Bienefeld N, Grote G. Choosing human over AI doctors? How comparative trust associations and knowledge relate to risk and benefit perceptions of AI in healthcare. Risk Analysis. (2024) 44:939–57. doi: 10.1111/risa.14216

PubMed Abstract | Crossref Full Text | Google Scholar

Keywords: artificial intelligence, chatbots, conversational agents, patient perspectives, qualitative, mental health, anxiety, cognitive behavioral therapy

Citation: Lee HS, Wright C, Ferranto J, Buttimer J, Palmer CE, Welchman A, Mazor KM, Fisher KA, Smelson D, O’Connor L, Fahey N and Soni A (2025) Artificial intelligence conversational agents in mental health: Patients see potential, but prefer humans in the loop. Front. Psychiatry 15:1505024. doi: 10.3389/fpsyt.2024.1505024

Received: 01 October 2024; Accepted: 26 December 2024;
Published: 31 January 2025.

Edited by:

Jasmine M. Noble, Mood Disorders Society of Canada, Canada

Reviewed by:

Nicole Martinez-Martin, Stanford University, United States
Michael Sobolev, Cornell Tech, United States
Jill K. Murphy, St. Francis Xavier University, Canada

Copyright © 2025 Lee, Wright, Ferranto, Buttimer, Palmer, Welchman, Mazor, Fisher, Smelson, O’Connor, Fahey and Soni. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Apurv Soni, apurv.soni@umassmed.edu

†These authors share first authorship
