ORIGINAL RESEARCH article
Front. Public Health
Sec. Digital Public Health
Volume 13 - 2025 | doi: 10.3389/fpubh.2025.1584348
Generative AI's Healthcare Professional Role Creep: A Cross-Sectional Evaluation of Publicly Accessible, Customised Health-Related GPTs
Provisionally accepted
- 1 College of Medicine and Public Health, Flinders University, Bedford Park, Australia
- 2 Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, United States
- 3 Flinders Centre for Innovation in Cancer, Flinders University, Adelaide, South Australia, Australia
- 4 Caring Futures Institute, Flinders University, Adelaide, South Australia, Australia
- 5 University of Canberra, Canberra, Australia
- 6 Central Adelaide Local Health Network, SA Health, Adelaide, Australia
- 7 University of Adelaide, Adelaide, South Australia, Australia
- 8 University of South Australia, Adelaide, Australia
Generative artificial intelligence (AI) is advancing rapidly; an important consideration is the public's increasing ability to customise foundational AI models to create publicly accessible applications tailored for specific tasks. This study aims to evaluate the accessibility and functionality descriptions of customised GPTs on the OpenAI GPT store that provide health-related information or assistance to patients and healthcare professionals.
We conducted a cross-sectional observational study of the OpenAI GPT store from September 2 to 6, 2024, to identify publicly accessible customised GPTs with health-related functions. We searched across general medicine, psychology, oncology, cardiology, and immunology applications. Identified GPTs were assessed for their name, description, intended audience, and usage. Regulatory status was checked across the U.S. Food and Drug Administration (FDA), European Union Medical Device Regulation (EU MDR), and Australian Therapeutic Goods Administration (TGA) databases.
A total of 1,055 customised, health-related GPTs targeting patients and healthcare professionals were identified, which had collectively been used in over 360,000 conversations. Of these, 587 were psychology-related, 247 were in general medicine, 105 in oncology, 52 in cardiology, 30 in immunology, and 34 in other health specialties. Notably, 624 of the identified GPTs included healthcare professional titles (e.g., doctor, nurse, psychiatrist, oncologist) in their names and/or descriptions, suggesting they were taking on such roles. None of the customised GPTs identified were FDA, EU MDR, or TGA approved.
This study highlights the rapid emergence of publicly accessible, customised, health-related GPTs. The findings raise important questions about whether current AI medical device regulations are keeping pace with rapid technological advancements. The results also highlight the potential "role creep" of AI chatbots, where publicly accessible applications begin to perform, or claim to perform, functions traditionally reserved for licensed professionals, underscoring potential safety concerns.
Keywords: Customised GPTs, Generative AI in healthcare, AI health applications, Medical chatbots, AI regulation, OpenAI GPT Store
Received: 27 Feb 2025; Accepted: 21 Apr 2025.
Copyright: © 2025 Chu, Modi, Menz, Bacchi, Kichenadasse, Paterson, Kovoor, Ramsey, Logan, Wiese, McKinnon, Rowland, Sorich and Hopkins. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Bradley D Menz, College of Medicine and Public Health, Flinders University, Bedford Park, Australia
Ashley Mark Hopkins, College of Medicine and Public Health, Flinders University, Bedford Park, Australia
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.