ORIGINAL RESEARCH article

Front. Digit. Health

Sec. Human Factors and Digital Health

Volume 7 - 2025 | doi: 10.3389/fdgth.2025.1537907

Evaluating Diversity and Stereotypes Amongst AI Generated Representations of Healthcare Providers

Provisionally accepted
  • 1 Department of Computer Science, College of Natural Sciences, The University of Texas at Austin, Austin, Texas, United States
  • 2 Ridgewood Public Schools, Ridgewood, New Jersey, United States
  • 3 Rouse High School, Leander, United States
  • 4 Valley Health System, Paramus, New Jersey, United States

The final, formatted version of the article will be published soon.

    Introduction - Generative artificial intelligence (AI) can reproduce patterns present in existing societal data, which led us to explore diversity and stereotypes among AI-generated representations of healthcare providers.

    Methods - We used DALL-E 3, a text-to-image generator, to produce 360 images from healthcare profession terms tagged with specific race and sex identifiers. These images were evaluated for sex and race diversity using consensus scoring. To explore stereotypes present in the images, we employed Google Vision to label objects, actions, and backgrounds.

    Results - We found modest levels of sex diversity (3.2) and race diversity (2.8) on a 5-point scale, where 5 indicates maximum diversity. These findings align with existing workforce statistics, suggesting that generative AI reflects real-world diversity patterns. Analysis of the Google Vision image labels revealed sex- and race-linked stereotypes related to appearance, facial expressions, and attire.

    Discussion - This study is the first of its kind to provide a machine learning (ML)-based framework for quantifying diversity and biases among AI-generated images of healthcare providers. These insights can guide policy decisions involving the use of generative AI in healthcare workforce training and recruitment.
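    The abstract describes a two-stage pipeline: generating provider images with DALL-E 3 and labeling them with Google Vision. The sketch below illustrates one way such a pipeline could be assembled with the public OpenAI and Google Cloud Vision Python clients; the prompt wording, model parameters, and helper names are illustrative assumptions, not the authors' code.

    ```python
    # Minimal sketch of the described pipeline (illustrative, not the authors' code).
    import base64
    from openai import OpenAI                      # pip install openai
    from google.cloud import vision                # pip install google-cloud-vision

    openai_client = OpenAI()                       # reads OPENAI_API_KEY
    vision_client = vision.ImageAnnotatorClient()  # reads GOOGLE_APPLICATION_CREDENTIALS

    def generate_provider_image(profession: str, qualifier: str = "") -> bytes:
        """Generate one image from a healthcare-profession prompt, optionally
        tagged with a race/sex identifier (e.g. 'female', 'Black'). Prompt
        wording here is an assumption."""
        prompt = f"A photo of a {qualifier} {profession}".strip()
        result = openai_client.images.generate(
            model="dall-e-3", prompt=prompt, n=1,
            size="1024x1024", response_format="b64_json",
        )
        return base64.b64decode(result.data[0].b64_json)

    def label_image(image_bytes: bytes) -> list[str]:
        """Return Google Vision label descriptions (objects, attire, settings)."""
        response = vision_client.label_detection(image=vision.Image(content=image_bytes))
        return [label.description for label in response.label_annotations]

    if __name__ == "__main__":
        img = generate_provider_image("physician", "female")
        print(label_image(img))
    ```

    In such a setup, the returned label descriptions could then be aggregated across the image set to compare label frequencies by sex and race tag, which is the kind of comparison the Results section reports.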

    Keywords: generative AI, sex diversity, race diversity, healthcare provider, stereotypes, DALL-E, Google Vision, machine learning

    Received: 02 Dec 2024; Accepted: 31 Mar 2025.

    Copyright: © 2025 Agrawal, Gupta, Agrawal and Gupta. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence: Himanshu Gupta, Valley Health System, Paramus, New Jersey, United States

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
