
SYSTEMATIC REVIEW article

Front. Digit. Health
Sec. Connected Health
Volume 7 - 2025 | doi: 10.3389/fdgth.2025.1482712
This article is part of the Research Topic: Digital Health Innovations for Patient-Centered Care

Comparative Analysis of ChatGPT and Gemini (Bard) in Medical Inquiry: A Scoping Review

Provisionally accepted
Fattah H Fattah 1, Abdulwahid M Salih 1, Ameer M Salih 2, Saywan K Asaad 1, Abdullah K Ghafour 3, Rawa Bapir 3, Berun A Abdalla 3, Snur Othman 4, Sasan M Ahmed 3, Sabah Hasan 3, Yousif M Mahmood 3, Fahmi Kakamad 1*
  • 1 College of Medicine, University of Sulaimani, Sulaymaniyah, Iraq
  • 2 University of Sulaymaniyah, Sulaymaniyah, Kurdistan, Iraq
  • 3 Smart Health Tower, Sulaymaniyah, Iraq
  • 4 Kscien Organization, Hamdi Str, Azadi Mall, Sulaimani, Kurdistan, Iraq

The final, formatted version of the article will be published soon.

    Artificial intelligence (AI) and machine learning are closely interconnected technologies, and AI chatbots such as ChatGPT and Gemini show considerable promise for answering medical inquiries. This scoping review aims to assess the accuracy and response length (in characters) of ChatGPT and Gemini in medical applications. The eligible databases were searched for studies published in English from January 1 to October 20, 2023. The inclusion criteria consisted of studies that focused on the use of AI in medicine and assessed outcomes based on the accuracy and character count (length) of ChatGPT and Gemini responses. Data collected from the studies included the first author's name, the country where the study was conducted, the study design, publication year, sample size, medical specialty, and the accuracy and response length. The initial search identified 64 papers, of which 11 met the inclusion criteria, involving 1,177 samples. ChatGPT showed higher accuracy in radiology (87.43% vs. Gemini's 71%) and shorter responses (907 vs. 1,428 characters), and similar trends were noted in other specialties. However, Gemini outperformed ChatGPT in emergency scenarios (87% vs. 77%) and in renal diet questions on low potassium and high phosphorus (79% vs. 60% and 100% vs. 77%, respectively). Statistical analysis confirmed that ChatGPT had greater accuracy and shorter responses than Gemini across the included medical studies, with a p-value of <.001 for both metrics. This scoping review suggests that ChatGPT may demonstrate higher accuracy and provide shorter responses than Gemini in medical studies.
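    Note on the statistical comparison: the abstract does not state which test produced the reported p-values. Purely as an illustrative sketch, assuming a paired per-study comparison and using placeholder values (not the review's data), the accuracy and response-length contrast could be computed in Python with SciPy as follows:

```python
# Hypothetical sketch of a paired per-study comparison between two chatbots.
# The values below are placeholders, NOT the data reported in this review,
# and the review does not state which statistical test was actually applied.
from scipy.stats import wilcoxon

# Placeholder per-study accuracy (%) for each chatbot, one entry per study.
chatgpt_accuracy = [85.0, 77.0, 60.0, 82.0, 90.0]
gemini_accuracy = [70.0, 87.0, 79.0, 75.0, 84.0]

# Placeholder per-study mean response length in characters.
chatgpt_length = [900, 1100, 950, 1010, 870]
gemini_length = [1400, 1350, 1200, 1490, 1310]

# Wilcoxon signed-rank test: a common non-parametric choice for paired,
# possibly non-normal measurements taken on the same set of studies.
acc_stat, acc_p = wilcoxon(chatgpt_accuracy, gemini_accuracy)
len_stat, len_p = wilcoxon(chatgpt_length, gemini_length)

print(f"Accuracy:        W = {acc_stat:.2f}, p = {acc_p:.4f}")
print(f"Response length: W = {len_stat:.2f}, p = {len_p:.4f}")
```

    The signed-rank test is shown only because it is a standard option for paired percentages and counts across a small number of studies; the authors may have used a different method.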

    Keywords: ChatGPT, Google Bard, medical inquiries, comparison, Medical AI

    Received: 21 Aug 2024; Accepted: 21 Jan 2025.

    Copyright: © 2025 Fattah, Salih, Salih, Asaad, Ghafour, Bapir, Abdalla, Othman, Ahmed, Hasan, Mahmood and Kakamad. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence: Fahmi Kakamad, College of Medicine, University of Sulaimani, Sulaymaniyah, Iraq

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.