HYPOTHESIS AND THEORY article
Front. Digit. Health
Sec. Ethical Digital Health
Volume 7 - 2025 | doi: 10.3389/fdgth.2025.1492736
This article is part of the Research Topic: Artificial Intelligence and Social Equities: Navigating the Intersectionalities in a Digital Age
Governance for Anti-Racist AI in Healthcare: Integrating Racism-Related Stress in Psychiatric Algorithms for Black Americans
Provisionally accepted
- 1 School of Medicine, Yale University, New Haven, Connecticut, United States
- 2 The Institute of Living, Hartford, Connecticut, United States
- 3 Charles R. Drew University of Medicine and Science, Los Angeles, California, United States
- 4 Yale University, New Haven, Connecticut, United States
- 5 Rutgers University–Newark, Newark, New Jersey, United States
While the world is aware of America's history of enslavement, the ongoing impact of anti-Black racism in the United States remains underemphasized in health intervention modeling. This Perspective argues that algorithmic bias, manifested in the worsened performance of clinical algorithms for Black versus white patients, is significantly driven by the failure to model the cumulative impacts of racism-related stress, particularly racial heteroscedasticity. Racial heteroscedasticity refers to the unequal variance in health outcomes and algorithmic predictions across racial groups, driven by differential exposure to racism-related stress. This may be particularly salient for Black Americans, for whom anti-Black bias has wide-ranging impacts that interact with differing backgrounds of generational trauma, socioeconomic status, and other social factors, producing unaccounted-for sources of variance that are not easily captured with a blanket 'race' factor. Failing to account for these factors degrades the performance of these clinical algorithms for all Black patients. We outline key principles for anti-racist AI governance in healthcare, including: (1) mandating the inclusion of Black researchers and community members in AI development; (2) implementing rigorous audits to assess anti-Black bias; (3) requiring transparency in how algorithms process race-related data; and (4) establishing accountability measures that prioritize equitable outcomes for Black patients. By integrating these principles, AI can be developed to produce more equitable and culturally responsive healthcare interventions. This anti-racist approach challenges policymakers, researchers, clinicians, and AI developers to fundamentally rethink how AI is created, used, and regulated in healthcare, with profound implications for health policy, clinical practice, and patient outcomes across all medical domains.
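To make racial heteroscedasticity and the proposed bias audits concrete, the following minimal Python sketch (not from the article; the function name, the synthetic data, and the choice of Levene's test are illustrative assumptions) checks whether a clinical model's residual variance differs across racial groups, one simple way an auditor might operationalize principle (2):

import numpy as np
from scipy import stats

def audit_heteroscedasticity(y_true, y_pred, group):
    # Residuals of the clinical model's predictions.
    residuals = np.asarray(y_true) - np.asarray(y_pred)
    group = np.asarray(group)
    # Split residuals by self-reported racial group.
    per_group = {g: residuals[group == g] for g in np.unique(group)}
    # Levene's test: null hypothesis of equal residual variance across groups.
    stat, p = stats.levene(*per_group.values())
    variances = {g: float(np.var(r)) for g, r in per_group.items()}
    return variances, stat, p

# Toy data in which unmodeled racism-related stress inflates outcome
# variance for one group; the model captures only the shared signal.
rng = np.random.default_rng(0)
n = 500
group = np.array(["Black"] * n + ["white"] * n)
signal = rng.normal(0.0, 1.0, 2 * n)
noise = np.where(group == "Black",
                 rng.normal(0.0, 1.5, 2 * n),
                 rng.normal(0.0, 0.5, 2 * n))
y_true = signal + noise
y_pred = signal

variances, stat, p = audit_heteroscedasticity(y_true, y_pred, group)
print(variances)                              # per-group residual variance
print(f"Levene W = {stat:.1f}, p = {p:.2g}")  # unequal variance flagged

In this toy setting the audit flags significantly larger residual variance for the Black group, the signature of a model whose errors are not merely shifted but more dispersed for one population; a real audit would use held-out clinical data and prespecified thresholds rather than synthetic draws.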
Keywords: anti-racist AI, racism-related stress, clinical algorithms, algorithmic bias, community-based participatory research
Received: 11 Nov 2024; Accepted: 22 Apr 2025.
Copyright: © 2025 Fields, Black, Thind, Jegede, Aksen, Rosenblatt, Assari, Bellamy, Anderson, Holmes and Scheinost. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Christopher Fields, School of Medicine, Yale University, New Haven, United States
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.