
ORIGINAL RESEARCH article

Front. Comput. Sci.
Sec. Human-Media Interaction
Volume 6 - 2024 | doi: 10.3389/fcomp.2024.1427463

How Large Language Model-Powered Conversational Agents Influence Decision Making in Domestic Medical Triage Contexts

Provisionally accepted
Catalina Gomez Caballero*, Junjie Yin*, Chien-Ming Huang, Mathias Unberath*
  • Johns Hopkins University, Baltimore, United States

The final, formatted version of the article will be published soon.

Effective delivery of healthcare depends on timely and accurate triage decisions that direct patients to appropriate care pathways and reduce unnecessary visits. Artificial Intelligence (AI) solutions, particularly those based on Large Language Models (LLMs), may enable non-experts to make better triage decisions at home, thus easing the healthcare system's load. We investigate how LLM-powered conversational agents influence non-experts in making triage decisions, further studying different persona profiles embedded via prompting. We designed a randomized experiment in which participants first assessed patient symptom vignettes independently, then consulted one of two agent profiles, rational or empathic, for advice, and finally revised their triage ratings. We used linear models to quantify the effect of the agent profile and confidence on the weight of advice. We examined changes in confidence and accuracy of triage decisions, along with participants' perceptions of the agents. In a study with 49 layperson participants, we found that persona profiles can be differentiated in LLM-powered conversational agents. However, these profiles did not significantly affect the weight of advice. Notably, less confident participants were more influenced by LLM advice, leading to larger adjustments to initial decisions. AI guidance improved alignment with correct triage levels and boosted participants' confidence in their decisions. While LLM advice improves the accuracy of triage recommendations, confidence plays an important role in its adoption. Our findings raise design considerations for human-AI interfaces, highlighting two key aspects: encouraging appropriate alignment with LLMs' advice and ensuring that people are not easily swayed in situations of uncertainty.
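The analysis described above centers on the weight of advice (WoA): the fraction of the gap between a participant's initial rating and the agent's recommendation that is closed after consulting the agent. The following is a minimal sketch of such an analysis, not the authors' code; the column names (initial, advice, final, profile, confidence) and the numeric triage scale are illustrative assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-trial data: triage levels encoded on a numeric scale,
# one row per participant-vignette pair (column names are assumptions).
df = pd.DataFrame({
    "initial":    [2, 4, 1, 3, 2, 4],   # participant's independent rating
    "advice":     [3, 2, 1, 4, 3, 3],   # agent's recommended level
    "final":      [3, 3, 1, 4, 2, 3],   # rating after consulting the agent
    "profile":    ["rational", "empathic", "rational",
                   "empathic", "rational", "empathic"],
    "confidence": [3, 2, 5, 1, 4, 2],   # self-reported initial confidence
})

# Weight of advice: fraction of the initial-to-advice gap closed by the revision.
# Trials where the advice equals the initial rating are undefined and dropped.
gap = df["advice"] - df["initial"]
df = df[gap != 0].copy()
df["woa"] = (df["final"] - df["initial"]) / (df["advice"] - df["initial"])
df["woa"] = df["woa"].clip(0, 1)  # common convention: truncate WoA to [0, 1]

# Linear model of WoA on agent profile and initial confidence.
model = smf.ols("woa ~ C(profile) + confidence", data=df).fit()
print(model.summary())
```

In this kind of model, the coefficient on the profile term estimates whether the rational and empathic agents elicit different reliance, while a negative coefficient on confidence would correspond to the reported finding that less confident participants adjusted their decisions more.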

Keywords: Human-AI interaction, decision-making, empirical studies, LLMs, triage

    Received: 03 May 2024; Accepted: 26 Sep 2024.

    Copyright: © 2024 Gomez Caballero, Yin, Huang and Unberath. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence:
    Catalina Gomez Caballero, Johns Hopkins University, Baltimore, United States
    Junjie Yin, Johns Hopkins University, Baltimore, United States
    Mathias Unberath, Johns Hopkins University, Baltimore, United States

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.