AUTHOR=Liu Xiaoni, Lai Rui, Wu Chaoling, Yan Changjian, Gan Zhe, Yang Yaru, Zeng Xiangtai, Liu Jin, Liao Liangliang, Lin Yuansheng, Jing Hongmei, Zhang Weilong TITLE=Assessing the utility of artificial intelligence throughout the triage outpatients: a prospective randomized controlled clinical study JOURNAL=Frontiers in Public Health VOLUME=12 YEAR=2024 URL=https://www.frontiersin.org/journals/public-health/articles/10.3389/fpubh.2024.1391906 DOI=10.3389/fpubh.2024.1391906 ISSN=2296-2565 ABSTRACT=

Many patients still require assistance with outpatient triage. ChatGPT, a natural language processing tool powered by artificial intelligence, is increasingly used in medicine. To help patients reach the appropriate department more quickly, we evaluated ChatGPT for outpatient triage. We posed 30 highly representative, common outpatient questions to ChatGPT, and a panel of five experienced doctors scored its responses and assessed the consistency between manual triage and ChatGPT triage; statistical analysis was performed using the Chi-square test. Of the 30 responses to these frequently asked questions, 17 earned very high scores (10 and 9.5 points), 7 earned high scores (9 points), and 6 received low scores (8 and 7 points). Additionally, we conducted a prospective cohort study in which 45 patients completed forms recording gender, age, and symptoms and were then triaged by both outpatient triage staff and ChatGPT. Among these 45 patients, agreement between manual triage and ChatGPT triage was high (consistency: 93.3–100%, p < 0.0001). We were pleasantly surprised to find that ChatGPT's responses were highly professional, comprehensive, and humane. This innovation can save patients time in seeking treatment, improve diagnosis and cure rates, and ease the pressure of medical staff shortages.
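The abstract reports a Chi-square test of agreement between manual and ChatGPT triage. As a minimal sketch of how such a test is computed, the function below evaluates the Pearson chi-square statistic for a 2x2 contingency table; the counts used are hypothetical illustrations, not the study's actual data.

```python
# Hedged sketch: Pearson chi-square statistic for a 2x2 contingency table,
# as might be used to compare manual vs ChatGPT triage assignments.
# The example counts are hypothetical, not taken from the study.

def chi_square_2x2(table):
    """Return the Pearson chi-square statistic for a 2x2 table of counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of rows and columns
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical cross-tabulation of 45 patients' triage assignments
# (rows: manual triage department A/B; columns: ChatGPT department A/B)
table = [[20, 2],
         [1, 22]]
print(round(chi_square_2x2(table), 2))
```

A statistic this large (with 1 degree of freedom for a 2x2 table) would correspond to a p-value well below 0.0001, in line with the level of agreement the study reports.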