ORIGINAL RESEARCH article

Front. Artif. Intell.
Sec. Medicine and Public Health
Volume 7 - 2024 | doi: 10.3389/frai.2024.1462819
This article is part of the Research Topic Cross-Modal Learning in Medicine: Bridging Large Language Models with Medical Image Analysis

Beyond the Stereotypes: Artificial Intelligence (AI) Image Generation and Diversity in Anesthesiology

Provisionally accepted
  • 1 University Hospitals of Geneva, Geneva, Geneva, Switzerland
  • 2 University of Ankara School of Medicine, Ankara, Türkiye
  • 3 Department of Anesthesia, Antwerp University Hospital, Edegem, Belgium
  • 4 University of Ankara School of Medicine, Department of Anesthesiology and Intensive Care Unit, Ankara, Türkiye
  • 5 Biostatistics and Research Method Center of ULiège, Liège, Belgium
  • 6 Department of Anesthesia and Perioperative Care, Zuckerberg San Francisco General Hospital and Trauma Center, University of California, San Francisco, San Francisco, California, United States
  • 7 AZ Sint-Jan Brugge-Oostende AV, Brugge, West Flanders, Belgium
  • 8 Institute for Medical Education, University of Bern, Bern, Switzerland
  • 9 Center for Health Technology and Services Research, Faculty of Medicine, University of Porto, Porto, Portugal

The final, formatted version of the article will be published soon.

    Introduction: Artificial Intelligence (AI) is increasingly being integrated into anesthesiology to enhance patient safety, improve efficiency, and streamline various aspects of practice. Objective: This study aims to evaluate whether AI-generated images accurately depict the racial and ethnic diversity observed in the anesthesia workforce and to identify inherent social biases in these images. Methods: This cross-sectional analysis was conducted from January to February 2024. Demographic data were collected from the American Society of Anesthesiologists (ASA) and the European Society of Anaesthesiology and Intensive Care (ESAIC). Two AI text-to-image models, ChatGPT DALL-E 2 and Midjourney, generated images of anesthesiologists across various subspecialties. Three independent reviewers assessed and categorized each image based on sex, race/ethnicity, age, and emotional traits. Results: A total of 1200 images were analysed. We found significant discrepancies between AI-generated images and actual demographic data. The models predominantly portrayed anesthesiologists as White, with ChatGPT DALL-E 2 at 64.2% and Midjourney at 83.0%. Moreover, male gender was highly associated with White ethnicity by ChatGPT DALL-E 2 (79.1%) and with non-White ethnicity by Midjourney (87%). Age distribution also varied significantly, with younger anesthesiologists underrepresented. The analysis also revealed predominant traits such as "masculine," "attractive," and "trustworthy" across various subspecialties. Conclusion: AI models exhibited notable biases in gender, race/ethnicity, and age representation, failing to reflect the actual diversity within the anesthesiologist workforce. These biases highlight the need for more diverse training datasets and strategies to mitigate bias in AI-generated images to ensure accurate and inclusive representations in the medical field.

    Keywords: anesthesiology, biases, artificial intelligence, gender equity, race/ethnicity, stereotypes

    Received: 10 Jul 2024; Accepted: 02 Sep 2024.

    Copyright: © 2024 Gisselbaek, Köselerli, Suppan, Minsart, Meco, Seidel, Albert, Barreto Chang, Saxena and Berger-Estilita. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence: Joana Berger-Estilita, Institute for Medical Education, University of Bern, Bern, Switzerland

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.