
OPINION article
Front. Artif. Intell.
Sec. Medicine and Public Health
Volume 8 - 2025 | doi: 10.3389/frai.2025.1545869
Artificial intelligence (AI) has permeated many aspects of daily life in recent years, including medicine. As of 2021, 343 AI-enabled medical devices had been approved by the United States Food and Drug Administration, with many more in development (1). Most notable thus far has been AI's ability to assist with every step of the radiology workflow: it can determine the appropriateness of imaging, recommend the most suitable imaging exam, predict wait times or appointment delays, and interpret images, with many more potential applications (2). The World Health Organization has proposed that AI tools be integrated into healthcare to improve efficiency and achieve sustainable health-related development (3). AI in healthcare can reduce costs and administrative burdens, shorten waiting times for patients to receive care, improve diagnostic abilities and patient care, facilitate data management, and expedite discovery (4,5).

However, the advancement of AI in healthcare comes with unique drawbacks. For example, data security and privacy are at risk and must be strengthened, as patients may more readily and unknowingly provide consent for covert data collection (6,7). The use of AI must be seriously reconsidered if it poses a risk to patient confidentiality, a non-negotiable in healthcare. Given AI's ability to rapidly gather and analyze large amounts of patient data, controlling the scope of its use becomes a challenge: these tools may progress to collect and disclose data without patient consent or direct investigator oversight (5). In addition, because most healthcare-based AI research has been conducted in non-clinical settings, rolling out AI in certain clinical settings may result in non-evidence-based practice (5,6). For example, clinicians may feel tempted to use AI for tasks beyond its validation, and training data may not adequately represent the scenarios clinicians encounter (8).

That is not to say AI should not be used in healthcare. It does, however, require careful consideration of how it is designed and why it is used. Some have contended that a goal of developing AI for healthcare should be to minimize health disparities and make the healthcare system more equitable (1,9). Yet many characteristics of AI make this goal difficult to achieve, and a growing body of literature discusses AI's role in both closing and perpetuating these inequalities (10-12). Because the ability of AI is directly proportional to the quality of the training sets used, authors have raised concerns that bias in training datasets and a lack of diversity in development teams may ultimately result in AI-driven disparities in care (5,13-15). This article draws from existing literature to add to the ongoing conversation about the implications of AI for healthcare disparities. Specifically, we discuss economic implications, the explainability of AI systems, and the importance of compassionate care. Ultimately, while AI may indeed confer benefits to the healthcare system, it remains far from the goal of closing healthcare disparities and may, instead, backfire.
One essential consideration in any kind of social disparity is economics. The United States is notorious for having the highest healthcare expenditure globally, with healthcare costing $3.5 trillion, or 17.9% of its Gross Domestic Product (16). Any measure to decrease this economic burden, in the US or internationally, may be attractive. AI has the potential to save billions in annual healthcare costs (17). AI may greatly streamline workflow, even in non-clinical tasks: an automated system may alleviate administrative burdens such as scheduling patients, estimating wait times, and billing insurance companies (2,17,18). Such workflow optimization may reduce the cost of healthcare delivery by cutting out the intermediaries that typically handle these mundane tasks. In turn, patients' financial responsibility for their care may be reduced.

On the clinical side, AI may be used to screen for and diagnose conditions, stratify disease risk, and devise treatment plans (16). It may significantly reduce medical errors and other factors associated with adverse outcomes (4). Eventually, as the technology advances, it may even perform procedures, provided doing so is deemed ethical, safe, and evidence-based. While these benefits may seem like a mere perk to those practicing in physician-rich areas, they could become indispensable in areas affected by shortages of medical professionals (19). Rural and underserved urban communities bear the brunt of this inequity, with many struggling to access both primary and specialty care (20). It has been estimated that by 2030, the United States may face a shortage of up to 104,900 physicians (20). AI implementation in these underserved populations may therefore help alleviate these challenges and reduce disparities in access to care (19). Furthermore, AI assistants may help decrease physician burnout and thereby improve quality of care (21).

These advantages, however, are conferred only with the proper development, installation, and maintenance of AI systems, which require immense investment. One model of an AI glaucoma screening tool in Changjiang County, China, estimated that the fifteen-year accumulated incremental cost of the tool was $434,903.20 for approximately 2,000 patients (22). While the costs of this screening tool are arguably worth the early detection and reduced disease progression, it may be impractical to roll out to larger populations. Health institutions in wealthy countries may easily make this investment. But what about institutions in developing countries? Community hospitals with limited government funding? Practices in rural areas with less purchasing power? Even if analyses demonstrate that costs are saved in the long run, the upfront investment may be too large an obstacle (16).
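To put the Changjiang figures in rough perspective, consider a back-of-envelope calculation using only the numbers cited above. The linear scaling to a larger hypothetical cohort is our assumption, not part of the cited analysis; real costs would include substantial fixed and region-specific components.

```python
# Back-of-envelope reading of the glaucoma screening example cited above (22).
# Assumption: costs scale roughly linearly with cohort size, which real
# deployments (with fixed infrastructure and regional pricing) would not obey exactly.
total_cost = 434_903.20   # 15-year accumulated incremental cost, USD
patients = 2_000
years = 15

per_patient = total_cost / patients       # ~$217 per patient over 15 years
per_patient_year = per_patient / years    # ~$14.50 per patient per year
hypothetical_cohort = 1_000_000           # illustrative larger rollout

print(f"${per_patient:,.2f} per patient over {years} years")
print(f"${per_patient_year:,.2f} per patient per year")
print(f"~${per_patient * hypothetical_cohort / 1e6:,.0f} million for {hypothetical_cohort:,} patients")
```

Even at roughly $14.50 per patient per year, the aggregate commitment for a population-scale rollout quickly reaches sums that under-resourced institutions cannot absorb.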
Once a system is developed, purchased, and installed, maintenance becomes another issue. Software updates, advanced computing technologies, and ever-increasing cloud storage requirements add to costs (1). Evolving cybersecurity requirements to protect patient health information may create further barriers to the widespread application of AI (23). These all-around cost barriers are more nuanced than the mere ability to implement AI in practice. Inevitably, there are AI algorithms with higher and lower levels of sophistication, infrastructures that are more and less robust, and security measures that are stronger and weaker. The AI system an institution chooses will be closely tied to its financial status, and AI development then risks leaving under-resourced communities behind.

Currently, there is a lack of "explainable" AI with respect to the algorithms and datasets that play a role in decision making (24). In other words: exactly how do these technologies work? How are they making these decisions? These are questions that even developers themselves cannot answer; we know the systems work, yet nobody can fully explain how. This "black box" of AI has important implications for healthcare disparities worldwide. Machine learning (ML), a component of AI, involves automated decision making based on patterns learned from datasets (25). Detecting and correcting biases introduced by limited training sets is an ethical prerequisite of justice in AI- and ML-based clinical decision making (20). In other words, explainable AI enables developers to identify and correct the training-set-based biases that currently skew algorithms (10,13).
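As an illustration of what such a correction loop might start from, the sketch below audits a model's error rate separately for each demographic subgroup in a held-out dataset. The data, group labels, and the 20% review threshold are all hypothetical; a real audit would use validated fairness tooling and clinically meaningful metrics.

```python
# Minimal sketch: auditing a model's error rates across demographic subgroups.
# All records and the review threshold below are hypothetical placeholders.
from collections import defaultdict

def subgroup_error_rates(records):
    """records: iterable of (group, true_label, predicted_label) triples."""
    counts = defaultdict(lambda: {"errors": 0, "total": 0})
    for group, truth, pred in records:
        counts[group]["total"] += 1
        if truth != pred:
            counts[group]["errors"] += 1
    return {g: c["errors"] / c["total"] for g, c in counts.items()}

# Toy example: a screening model that performs worse on an under-represented group.
audit = subgroup_error_rates([
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0),
])
for group, rate in audit.items():
    flag = "  <-- review" if rate > 0.2 else ""
    print(f"{group}: error rate {rate:.0%}{flag}")
```

Making such subgroup breakdowns a routine part of model reporting is one concrete way explainability can translate into accountability.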
The discussion of justice behind explainable AI requires additional considerations. Explainable AI models keep developers accountable for their work, as a lack of accountability precedes error (24). This concern is compounded by the fact that patients with lower health literacy are less likely to ask questions or seek more information about their care (26). Since these patients may be less prepared to participate in shared decision making, they may not challenge questionable decisions (27).

AI should be treated as a tool to support decision making, not one to make decisions independently. For example, AI prescription systems have been developed to aid physician workflow and prevent human error (28,29). Inevitably, physicians will encounter scenarios in which an AI recommendation conflicts with their clinical judgement. Some of these scenarios may arise when AI systems are not trained on datasets that adequately represent the populations they serve, thereby generating recommendations poorly aligned with the realities of patients' needs (5). This challenge is particularly relevant to minority communities that have historically been under-studied (15). Healthcare providers should critically assess AI recommendations in the context of their clinical experience and patient preferences, and institutions should establish clear policies on when to accept or reject AI suggestions to maintain quality patient care.

Justice in explainable AI systems is also important because more transparent technologies will foster patient trust in providers and the healthcare system. Unexplainable, opaque models, on the other hand, may exacerbate the mistrust that already pervades the healthcare system and is particularly prevalent in socially and economically marginalized communities (30). A key component of trust in underprivileged populations is the patient's comfort with the physician and the physician's personal involvement in their care (31). If not handled correctly, the unexplainable black box of AI and ML may well exacerbate these concerns: the lack of explanation for impersonal, automated algorithms may further alienate this vulnerable population and widen health disparities.

Even if we are to elucidate the black box, can AI ever replace the physician-patient relationship in delivering empathic care? Currently, it seems unlikely. One recent study demonstrated that healthcare chatbots delivering both empathetic and sympathetic responses in fact lowered patients' perception of their authenticity, whereas empathy and sympathy expressed by human physicians did not induce this negative effect (32). This lack of perceived authenticity may not only undermine patients' subjective satisfaction with AI providers but may also objectively worsen patient outcomes.

In one analysis, AI tools provided sound biomedical recommendations for diabetes management but overlooked psychosocial components that are also necessary for glycemic control (33). Algorithms that determine A1c goals, calculate medication dosages, and send prescriptions may certainly help optimize patient care. However, recommendations poorly tailored to psychosocial challenges disproportionately affect those with greater social barriers. Continuing with the example of diabetes, significant social barriers to care include the ability to afford healthy food, the free time for follow-up visits, and the literacy to understand health information (34). Now combine diabetes with a slew of other health conditions, medications, unemployment concerns, and an ailing family member. Physicians can manage such a patient in countless different ways; there is no one correct path. Regardless, it is imperative that health providers, human or AI-based, address these concerns with compassion.

Palliative care, which emphasizes relieving suffering and optimizing quality of life at the end of life, is a field in which compassion is key (35). While AI may assist in decision making, it risks depersonalizing cases and lacking empathy when patients and their families need it most. Death and dying are often rooted in culture, personal beliefs, and spirituality; the experience is deeply personal and unique to each family (35). Whereas some encourage open communication about death, others feel uncomfortable with it; whereas some value life-prolonging measures regardless of prognosis, others do not (36). Palliative care AI models risk imposing a "one-size-fits-all" model of care based on a Western training dataset (35). Once again, understudied populations and cultural minorities fall behind in AI's "understanding", or lack thereof, of their values.

Society at large, including regulators, policy makers, insurance companies, healthcare professionals, and patients, should carefully consider how AI is incorporated into the practice and business of medicine. Regulators have raised concerns over the need for regulation of clinical AI and its generalizability to different populations (37). Another area of concern relevant to several stakeholders, including healthcare providers and regulators, is legal responsibility for AI-based clinical decision making (37). Physicians fear liability both for medical errors made by AI and, conversely, for accusations of negligent care for not using AI. In one study, physicians were neither prepared for nor agreed with assuming responsibility for errors made by AI, while AI developers believed they should not be liable since they do not practice medicine (37). Each side felt it understood only "part of the whole" when it comes to AI, further highlighting the need for explainable AI. Appropriate oversight by policy makers and regulators is needed to ensure accountability and promote the development of explainable AI. These risks may be further mitigated by informing patients that AI was involved in decision making (38).

Certain narratives have cast AI as a rival to the skills and education of physicians, with claims that AI will one day replace them (38). AI remains solely a tool to assist in the clinical setting, with the final decision made by a human. Rhetoric that continues to pit AI against physicians will only hinder the incorporation of AI into clinical practice (38). Patients will not benefit from AI replacing their physicians, but neither will they benefit from avoiding AI altogether.
In discussing the role of AI in closing healthcare disparities, we must also consider low- and middle-income countries (LMICs). In areas where medical resources and personnel are scarce, AI can reduce the workload on healthcare personnel (39). AI can also improve access to medical care, especially where specialty care is not available (40). Disease outbreaks can be predicted earlier, allowing treatment to be mobilized to affected areas, and ML has been used to assess disease severity and predict treatment failure for illnesses such as malaria, tuberculosis, and dengue fever (39). However, LMICs face significant challenges in implementing AI. The lack of electronic health records and health data is a limiting factor, since these data are the primary input to AI algorithms (40). Moreover, most AI systems are developed in high-income countries (HICs), and their ML models reflect datasets drawn from those populations. When applying these technologies in LMICs, models must be updated to reflect the populations to which they are applied; failure to do so can reinforce and exacerbate existing health disparities (40).
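One minimal safeguard consistent with this point is to validate an externally trained model against locally representative data before it is allowed into clinical use. The sketch below illustrates the idea; the `predict` interface, the accuracy metric, and the 80% threshold are hypothetical placeholders, not a clinical standard.

```python
# Minimal sketch: gate deployment of an externally trained model on local
# validation performance. Metric and threshold are illustrative only; real
# deployments would use clinically appropriate metrics and calibration checks.
from typing import Callable, Sequence

def local_accuracy(predict: Callable, features: Sequence, labels: Sequence) -> float:
    """Fraction of local validation cases the model classifies correctly."""
    predictions = [predict(x) for x in features]
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)

def gate_deployment(predict: Callable, features: Sequence, labels: Sequence,
                    min_accuracy: float = 0.80) -> float:
    """Raise if the model underperforms on the local population."""
    acc = local_accuracy(predict, features, labels)
    if acc < min_accuracy:
        raise RuntimeError(
            f"Local validation accuracy {acc:.0%} is below {min_accuracy:.0%}; "
            "retrain or recalibrate on locally representative data before use."
        )
    return acc
```

The same gate can be re-run whenever the local case mix shifts, turning "update the model for the population" from a principle into a repeatable check.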
Gaining both physician and patient trust in the integration of AI into healthcare remains an outstanding challenge. In a small study interviewing patients about their perspectives on AI in general practice, subjects had mixed feelings about its implementation (41). A common concern among participants related to the sharing of, and access to, their medical information; patients wanted assurance that appropriate consent would be obtained before their data were shared and that the data would be anonymized (41). A survey of 203 participants on public opinion of AI in medicine also yielded mixed results, with a nearly 50/50 split when asked whether they trust AI as a physician's tool for diagnosing medical conditions (42). In the same study, a majority of participants trusted a human physician over AI when a decision was at risk of cultural bias. There was a more positive outlook toward the future: over 25% of respondents believed AI will improve medical treatment within the next 10 years, and nearly half within the next 50 years (42).

Furthermore, what AI lacks that physicians have is not intelligence but wisdom: the sense of intuition that a human being accumulates only over time (43). Can AI develop this intuition? Can an AI model mimic the human brain in synthesizing decades' worth of information to analyze a unique case and provide appropriate medical decision making? For simple cases, it likely can. Complex cases are a different story: risks and benefits of intervention must be weighed and complications predicted, all while delivering this information to the patient in an easily digestible manner. Yet another layer of nuance is added when shared decision making is introduced: now AI must understand patients' desires and uncertainties on a human level and incorporate them into its recommendations. Moreover, patients believe their physician should remain the primary decision maker, with AI used as a support tool (41). Used in this way, AI may decrease the time physicians spend on mundane tasks and leave more time for meaningful conversations with their patients, facilitating the delivery of compassionate care (37,44).

Once again, compassion and trust are key components of care for all patients. While AI has the potential to increase access to care for vulnerable populations and help bridge gaps in healthcare, we must ensure that data and algorithms are inclusive of patients to avoid worsening existing disparities. The healthcare system hinges on trust to maintain patient confidentiality, recommend the optimal course of action, and execute the plan appropriately. Particularly in marginalized communities, the critical process of building and maintaining this trust has proved difficult even in the absence of AI and continues to pose a significant obstacle to the success of AI in improving healthcare delivery. Neither physicians nor patients wish to see AI replace the standard physician-patient interaction. Instead, AI can serve as an adjunct that improves quality of care by reducing the chance of human error. Collaboration among patients, physicians, and AI developers is essential to achieve this goal in an equitable manner.
Keywords: artificial intelligence, ethics, social determinants of health, SDH, health inequalities, technology in healthcare, social disparities in health
Received: 16 Dec 2024; Accepted: 21 Feb 2025.
Copyright: © 2025 Li, Parikh and Costa. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Ana Costa, Department of Anesthesiology, Renaissance School of Medicine, Stony Brook University, Stony Brook, New York 11794, United States
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.