ORIGINAL RESEARCH article

Front. Public Health

Sec. Digital Public Health

Volume 13 - 2025 | doi: 10.3389/fpubh.2025.1566982

This article is part of the Research Topic: The Emerging Role of Large Language Model Chatbots in Gastroenterology and Digestive Endoscopy.

Comparative Evaluation of the Accuracy and Reliability of ChatGPT Versions in Providing Information on Helicobacter pylori Infection

Provisionally accepted
Yi Ye, En-Dian Zheng, Qiao-Li Lan, Le-Can Wu, Hao-Yue Sun, Bei-Bei Xu, Ying Wang, Miaomiao Teng*
  • Postgraduate training base Alliance of Wenzhou Medical University, Wenzhou, China

The final, formatted version of the article will be published soon.

Objective: This study aimed to evaluate the accuracy and reliability of responses provided by three versions of ChatGPT to questions related to Helicobacter pylori (Hp) infection, and to explore their potential applications within the healthcare domain.

Methods: A panel of experts compiled and refined a set of 27 clinical questions related to Hp. These questions were presented to each ChatGPT version, generating three distinct sets of responses. The responses were evaluated and scored by three gastroenterology specialists using a 5-point Likert scale, with an emphasis on accuracy and comprehensiveness. To assess response stability and reliability, each question was submitted three times over three consecutive days.

Results: Statistically significant differences in Likert scale scores were observed among the three ChatGPT versions (p < 0.0001). ChatGPT-4o demonstrated the best performance, achieving a mean score of 4.46 (standard deviation 0.82). Despite its high accuracy, ChatGPT-4o exhibited relatively low repeatability. In contrast, ChatGPT-3.5 exhibited the highest stability, although it occasionally provided incorrect answers. In terms of readability, ChatGPT-4 achieved the highest Flesch Reading Ease score of 24.88 (standard deviation 0.44); however, no statistically significant differences in readability were observed among the versions.

Conclusion: All three versions of ChatGPT were effective in addressing Hp-related questions, with ChatGPT-4o delivering the most accurate information. These findings suggest that artificial intelligence-driven chat models hold significant potential in healthcare, facilitating improved patient awareness, self-management, and treatment compliance, and supporting physicians in making informed medical decisions by providing accurate information and personalized recommendations.
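For context, the Flesch Reading Ease metric reported above is the standard readability index, scored on a 0–100 scale where higher values indicate easier text:

```latex
\mathrm{FRE} = 206.835
  - 1.015 \left( \frac{\text{total words}}{\text{total sentences}} \right)
  - 84.6 \left( \frac{\text{total syllables}}{\text{total words}} \right)
```

On this scale, a score near 24.88 falls in the "very difficult" band, typically associated with college-graduate-level reading material.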

Keywords: artificial intelligence, Helicobacter pylori, Large Language Model, Patient Education, ChatGPT

Received: 07 Feb 2025; Accepted: 17 Apr 2025.

Copyright: © 2025 Ye, Zheng, Lan, Wu, Sun, Xu, Wang and Teng. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Miaomiao Teng, Postgraduate training base Alliance of Wenzhou Medical University, Wenzhou, China

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
