
OPINION article
Front. Artif. Intell., 06 March 2025
Sec. Medicine and Public Health
Volume 8 - 2025 | https://doi.org/10.3389/frai.2025.1545869
Artificial intelligence (AI) has permeated many aspects of daily life, including medicine, in recent years. As of 2021, 343 AI-enabled medical devices had been approved by the United States (US) Food and Drug Administration, with many more in development (Badal et al., 2023). Most notable thus far has been AI's ability to assist with every step of the radiology workflow: it can determine the appropriateness of imaging, recommend the most suitable imaging exam, predict wait times or appointment delays, and interpret imaging, with many more potential applications (Syed and Zoga, 2018). The World Health Organization has proposed that AI tools be integrated into healthcare to improve efficiency and achieve sustainable health-related development (World Health Organization, 2021). AI in healthcare can reduce costs and administrative burdens, shorten waiting times for patients to receive care, improve diagnostic abilities and patient care, facilitate data management, and expedite discovery (Botha et al., 2024a,b).
However, the advancement of AI in healthcare comes with unique drawbacks. For example, data security and privacy are at risk and must be strengthened, as patients may more readily and unknowingly provide consent for covert data collection methods (Khan et al., 2023; He et al., 2019). Use of AI must be seriously reconsidered if it poses a risk to patient confidentiality, a non-negotiable in healthcare. With the ability of AI to rapidly gather and analyze large amounts of patient data, controlling the scope of its use becomes a challenge: these tools may progress to collect and disclose data without patient consent or direct investigator oversight (Botha et al., 2024a). In addition, because most healthcare-based AI research has been conducted in non-clinical settings, rolling out AI in certain clinical settings may result in non-evidence-based practice (Khan et al., 2023; Botha et al., 2024a). For example, clinicians may feel tempted to use AI for tasks beyond those for which it has been validated, and training data may not adequately represent the scenarios clinicians encounter (Nilsen et al., 2024).
That is not to say AI should not be used in healthcare. It does, however, require careful consideration of how it is designed and why it is used. Some have contended that a goal of developing AI for healthcare should be to minimize health disparities and make the healthcare system more equitable (Badal et al., 2023; Campbell et al., 2021). Yet many characteristics of AI make this goal difficult to achieve. As such, there is a growing body of literature that discusses AI's role in both closing and perpetuating these inequalities (Celi et al., 2022; d'Elia et al., 2022; Ali et al., 2023). Because the performance of AI depends directly on the quality of its training data, authors have raised concerns that bias in training datasets and a lack of diversity in development teams may ultimately result in AI-driven disparities in care (Botha et al., 2024a; Green et al., 2024; Ferrara, 2024; Haider et al., 2024).
This article draws from existing literature to add to the ongoing conversation about the implications of AI in healthcare disparities. Specifically, we discuss economic implications, the explainability of AI systems, and the importance of compassionate care. Ultimately, while AI may indeed confer benefits to the healthcare system, it remains far from the goal of closing healthcare disparities and may, instead, backfire.
One essential consideration in any kind of social disparity is economics. The US is notorious for having the highest healthcare expenditure globally, at $3.5 trillion annually, or 17.9% of the Gross Domestic Product (Khanna et al., 2022). Any measure to decrease this economic burden, whether in the US or internationally, may be attractive, and AI has the potential to save billions in annual healthcare costs (Zhu et al., 2024). AI may greatly streamline workflow, even in non-clinical tasks. An automated system may alleviate administrative burdens such as scheduling patients, estimating wait times, and billing insurance companies (Syed and Zoga, 2018; Zhu et al., 2024; Knight et al., 2023). Such workflow optimization may reduce the cost of healthcare delivery by cutting out the intermediaries that typically handle these mundane tasks. In turn, patients' financial responsibility for their care may be reduced.
On the clinical side, AI may be used to screen for and diagnose conditions, stratify disease risk, and devise treatment plans (Khanna et al., 2022). It may significantly reduce medical errors and factors associated with adverse outcomes (Botha et al., 2024a). Eventually, as technology advances, it may even perform procedures, provided it is deemed ethical, safe, and evidence-based. While these benefits may seem like a mere perk to those practicing in physician-rich areas, they could become indispensable in areas affected by shortages of medical professionals (Lamem et al., 2025). Rural and underserved urban communities bear the brunt of this inequity, with many struggling to access both primary and specialty care (Kirch and Petelle, 2017). It has been estimated that by 2030 there may be a shortage of up to 104,900 physicians in the US (Kirch and Petelle, 2017). As such, AI implementation in these underserved populations may help alleviate these challenges and reduce disparities in access to care (Lamem et al., 2025). Furthermore, AI assistants may help decrease physician burnout and thereby improve quality of care (Lin et al., 2019).
These advantages are conferred only with the proper development, installation, and maintenance of AI systems, which require immense investment. One model of an AI glaucoma screening tool in Changjiang county, China, estimated that the fifteen-year accumulated incremental cost of using this tool was $434,903.20 for ~2,000 patients (Xiao et al., 2021). While the costs of this screening tool are arguably justified by early detection and reduced disease progression, it may be impractical to roll out to larger populations. Health institutions in wealthy countries may easily make this investment. But what about institutions in developing countries? Community hospitals with limited government funding? Practices in rural areas with less purchasing power? Even if analyses demonstrate that costs are saved in the long run, the upfront investment may be too large an obstacle (Khanna et al., 2022).
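For rough intuition only, the figures cited above can be reduced to a per-patient scale and, under the assumption of purely linear scaling (which is ours, not that of Xiao et al., 2021), extrapolated to a larger screening population; in practice, infrastructure, staffing, and maintenance costs do not scale linearly, so actual totals would differ.

% Illustrative arithmetic using the cited figures; linear scaling is an assumption, not a study result.
\[
\frac{\$434{,}903.20}{2{,}000 \text{ patients} \times 15 \text{ years}} \approx \$14.50 \text{ per patient per year}
\]
\[
\$14.50 \times 100{,}000 \text{ patients} \times 15 \text{ years} \approx \$21.7 \text{ million (hypothetical linear extrapolation)}
\]

Even a modest per-patient figure, multiplied across a regional population, quickly becomes a substantial upfront commitment for an under-resourced health system.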
Once a system is developed, purchased, and installed, maintenance becomes another issue. Software updates, advanced computing technologies, and ever-increasing cloud storage requirements add to costs (Badal et al., 2023). Evolving cybersecurity needs to protect patient health information may create further barriers to widespread application of AI (Shah et al., 2023). These cost barriers are more nuanced than a simple ability or inability to implement AI in practice. Inevitably, there are AI algorithms of higher and lower sophistication, infrastructures that are more and less robust, and security measures that are stronger and weaker. The AI system an institution chooses will be closely tied to its financial status, and AI development will then leave under-resourced communities behind.
Currently, there is a lack of “explainable” AI: the algorithms and datasets that drive decision making remain opaque (Amann et al., 2020). In other words, exactly how do these technologies work? How are they making these decisions? These are questions that even developers themselves cannot fully answer; we know the systems work, yet nobody can fully explain how. This “black box” of AI has important implications for healthcare disparities worldwide. Machine learning (ML), a component of AI, involves automated decision making learned from datasets (Tack, 2019). Detecting and correcting biases arising from limited training sets is an ethical prerequisite of justice in AI- and ML-based clinical decision-making (Kirch and Petelle, 2017). In other words, explainable AI enables developers to identify and correct training set-based biases that currently skew algorithms (Celi et al., 2022; Green et al., 2024).
The discussion of justice behind explainable AI requires additional considerations. Explainable AI models keep developers accountable for their work; lack of accountability precedes error (Amann et al., 2020). This concern is compounded by the fact that patients with lower health literacy are less likely to ask questions or seek more information about their care (Katz et al., 2007). Since these patients may be less prepared to participate in shared decision making, they may not challenge questionable decisions (Keij et al., 2021).
AI should be treated as a tool to support decision-making, not one to make decisions independently. For example, AI prescription systems have been developed to aid physician workflow and prevent human error (Tantray et al., 2024; Tully, 2012). Inevitably, physicians will encounter scenarios in which the AI recommendation conflicts with their clinical judgement. Some of these scenarios may arise if AI systems are not trained on datasets that adequately represent the populations they treat, thereby generating recommendations poorly aligned with the realities of patients' needs (Botha et al., 2024a). This challenge is particularly relevant to minority communities that have historically been under-studied (Haider et al., 2024). Healthcare providers should critically assess the AI recommendation in the context of their clinical experience and patient preferences. Institutions should establish clear policies on how to accept or reject AI suggestions to maintain quality patient care.
Justice in explainable AI systems also matters because more transparent technologies will foster patient trust in providers and the healthcare system. Unexplainable, opaque models, on the other hand, may exacerbate the mistrust that already pervades the healthcare system. This mistrust is particularly prevalent in socially and economically marginalized communities (Jaiswal and Halkitis, 2019). A key component of trust in underprivileged populations is the patient's comfort with the physician and the physician's personal involvement in patient care (Gopichandran and Chetlapalli, 2013). As such, the unexplainable black box of AI and ML, if not handled correctly, may well exacerbate these concerns. Lack of explanation for these impersonal, automated algorithms may further alienate this vulnerable population and widen health disparities.
Even if we are to elucidate the black box, can AI ever replace the physician-patient relationship in delivering empathic care? Currently, it seems unlikely: one recent study demonstrated that healthcare chatbots delivering both empathetic and sympathetic responses to patients in fact lowered patients' perception of their authenticity (Seitz, 2024). In contrast, empathy and sympathy expressed by human physicians did not induce this negative effect (Seitz, 2024). This lack of perceived authenticity may not only undermine patients' subjective satisfaction with their AI providers but may also objectively worsen patient outcomes. While some AI tools provided sound biomedical recommendations for diabetes management, they overlooked psychosocial components that are also necessary for glycemic control (Romero-Brufau et al., 2020). Algorithms that determine A1c goals, calculate medication dosages, and send prescriptions may certainly help optimize patient care. However, recommendations poorly tailored to psychosocial challenges disproportionately affect those with greater social barriers. Staying with diabetes as an example, significant social barriers to care include the ability to afford healthy food, the free time for follow-up visits, and the literacy to understand health information (Paduch et al., 2017). Now combine diabetes with a slew of other health conditions, medications, unemployment concerns, and an ailing family member. Surely, physicians can manage this patient in countless different ways; there is no one correct path. Regardless, it is imperative that health providers, human or AI-based, address these concerns with compassion.
Palliative care, which emphasizes relieving suffering and optimizing quality of life at the end of life, is a field in which compassion is key (Adegbesan et al., 2024). While AI may assist in decision-making, it risks depersonalizing cases and lacking empathy when patients and their families need it the most. Death and dying are often rooted in culture, personal beliefs, and spirituality, and the experience is deeply personal and unique to each family (Adegbesan et al., 2024). Whereas some encourage open communication about death, others feel uncomfortable with it; whereas some value life-prolonging measures regardless of prognosis, others value them less (Ohr et al., 2017). Palliative care AI models risk imposing a “one-size-fits-all” model of care based on a Western training dataset (Adegbesan et al., 2024). Once again, understudied populations and cultural minorities fall behind in AI's “understanding”, or lack thereof, of their values.
Society at large, including regulators, policy makers, insurance companies, healthcare professionals, and patients, should carefully consider how AI is incorporated into the practice and business of medicine. Regulators have raised concerns over the need for regulation of clinical AI as well as its generalizability to different populations (Hogg et al., 2023). Another area of concern relevant to several stakeholders, including healthcare providers and regulators, is legal responsibility for AI clinical decision making (Hogg et al., 2023). Physicians fear liability both for medical errors made by AI and, conversely, for accusations of negligent care for not using AI. Physicians were neither prepared for nor agreed with assuming responsibility for errors made by AI, while AI developers believed they should not be liable since they do not practice medicine (Hogg et al., 2023). Each side felt they understood only “part of the whole” when it comes to AI, further highlighting the need for explainable AI. Appropriate oversight by policy makers and regulators is needed to ensure accountability and promote the development of explainable AI. These risks may be further mitigated by informing patients that AI was involved in decision making (Lorenzini et al., 2023). Certain narratives have cast AI as a rival to the skills and education of physicians, with claims that AI will one day replace physicians (Lorenzini et al., 2023). AI remains solely a tool to assist in the clinical setting, with the final decision made by a human. Rhetoric that continues to pit AI against physicians will only hinder its incorporation into clinical practice (Lorenzini et al., 2023). Patients will not benefit from AI replacing their physicians, but neither will they benefit from avoiding AI altogether.
In discussing the role of AI in closing healthcare disparities, we must consider its role in low- and middle-income countries (LMICs). In areas where medical resources and personnel are scarce, AI can reduce the workload on healthcare personnel (Schwalbe and Wahl, 2020). AI can also improve access to medical care, especially in areas where specialty care is not available (López et al., 2022). Disease outbreaks can be predicted earlier, allowing treatment to be mobilized to affected areas. ML has been used to assess disease severity and predict treatment failure for illnesses such as malaria, tuberculosis, and dengue fever (Schwalbe and Wahl, 2020). However, LMICs face significant challenges in implementing AI. The lack of electronic health records and health data is a limiting factor, since these data are the primary input to AI algorithms (López et al., 2022). Most AI systems are developed in high-income countries (HICs), and their ML models reflect datasets from those populations. When applying these technologies to LMICs, models must be updated to reflect the populations to which they are applied; failure to do so can reinforce and exacerbate existing health disparities (López et al., 2022).
Gaining both physician and patient trust in the integration of AI in healthcare remains a problem that needs to be addressed. In a small study interviewing patients on their perspectives on AI in general medical practice, subjects had mixed feelings about its implementation (Mikkelsen et al., 2023). A common concern among participants related to the sharing of and access to their medical information (Mikkelsen et al., 2023). Patients wanted assurance that appropriate consent would be obtained before their data were shared and that the data would be anonymized. A survey of 203 participants on public opinion of AI in medicine also yielded mixed results, with a near 50/50 split when asked whether they trust AI as a physician's tool for diagnosing medical conditions (Rojahn et al., 2023). In the same study, a majority of participants trusted a human physician over AI in decisions susceptible to cultural bias. The outlook toward the future was more positive: over 25% of respondents believed AI will improve medical treatment within the next 10 years, and nearly half believed it will do so within the next 50 years (Rojahn et al., 2023).
Similarly, what AI lacks and physicians possess is not intelligence but wisdom: the sense of intuition that a human being can accumulate only over time (Powell, 2019). Can AI develop this intuition over time? Can an AI model mimic the human brain in synthesizing decades' worth of information to analyze a unique case and provide appropriate medical decision making? For simple cases, it likely can. Complex cases are a different story: risks and benefits of intervention must be weighed and complications predicted, all while delivering this information to the patient in an easily digestible manner. Yet another layer of nuance is added when shared decision making is introduced; now AI must understand patients' desires and uncertainties on a human level and incorporate them into its recommendations. Moreover, patients believe their physician should remain the primary decision maker, with AI used as a support tool (Mikkelsen et al., 2023). The use of AI in this setting may decrease the time physicians spend on mundane tasks, leaving more time for meaningful conversations with their patients and facilitating the delivery of compassionate care (Hogg et al., 2023; Sauerbrei et al., 2023). Once again, compassion and trust are key components of care for all patients alike.
While AI has the potential to increase access to care for vulnerable populations and help bridge gaps in healthcare, we must ensure that data and algorithms are inclusive of all patients to avoid worsening existing disparities. The healthcare system hinges on trust to maintain patient confidentiality, recommend the optimal course of action, and execute the plan appropriately. Particularly in marginalized communities, the critical process of building and maintaining this trust has proved difficult even in the absence of AI and continues to pose a significant obstacle to the success of AI in improving healthcare delivery. Physicians and patients alike do not wish to see AI replace the standard physician-patient interaction. Instead, AI can serve as an adjunct that improves quality of care by reducing the chance of human error. Collaboration among patients, physicians, and AI developers is essential to achieve this goal in an equitable manner.
DL: Conceptualization, Investigation, Methodology, Writing – original draft, Writing – review & editing. SP: Conceptualization, Supervision, Writing – review & editing. AC: Conceptualization, Supervision, Writing – review & editing.
The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
The author(s) declare that no Gen AI was used in the creation of this manuscript.
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Adegbesan, A., Akingbola, A., Ojo, O., Jessica, O. U., Alao, U. H., Shagaya, U., et al. (2024). Ethical challenges in the integration of artificial intelligence in palliative care. J. Med. Surg. Public Health 4:100158. doi: 10.1016/j.glmedi.2024.100158
Ali, M. R., Lawson, C. A., Wood, A. M., and Khunti, K. (2023). Addressing ethnic and global health inequalities in the era of artificial intelligence healthcare models: a call for responsible implementation. J. R. Soc. Med. 116, 260–262. doi: 10.1177/01410768231187734
Amann, J., Blasimme, A., Vayena, E., Frey, D., Madai, V. I., and the Precise4Q Consortium (2020). Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Med. Inform. Decis. Mak. 20:310. doi: 10.1186/s12911-020-01332-6
Badal, K., Lee, C. M., and Esserman, L. J. (2023). Guiding principles for the responsible development of artificial intelligence tools for healthcare. Commun. Med. 3:47. doi: 10.1038/s43856-023-00279-9
Botha, N. N., Ansah, E. W., Segbedzi, C. E., Dumahasi, V. K., Maneen, S., Kodom, R. V., et al. (2024a). Artificial intelligent tools: evidence-mapping on the perceived positive effects on patient-care and confidentiality. BMC Digit. Health 2:33. doi: 10.1186/s44247-024-00091-y
Botha, N. N., Segbedzi, C. E., Dumahasi, V. K., Maneen, S., Kodom, R. V., Tsedze, I. S., et al. (2024b). Artificial intelligence in healthcare: a scoping review of perceived threats to patient rights and safety. Arch. Public Health 82:188. doi: 10.1186/s13690-024-01414-1
Campbell, J. P., Mathenge, C., Cherwek, H., Balaskas, K., Pasquale, L. R., Keane, P. A., et al. (2021). Artificial intelligence to reduce ocular health disparities: moving from concept to implementation. Transl. Vis. Sci. Technol. 10:19. doi: 10.1167/tvst.10.3.19
Celi, L. A., Cellini, J., Charpignon, M.-L., Dee, E. C., Dernoncourt, F., Eber, R., et al. (2022). Sources of bias in artificial intelligence that perpetuate healthcare disparities—A global review. PLOS Digit. Health 1:e0000022. doi: 10.1371/journal.pdig.0000022
d'Elia, A., Gabbay, M., Rodgers, S., Kierans, C., Jones, E., Durrani, I., et al. (2022). Artificial intelligence and health inequities in primary care: a systematic scoping review and framework. Fam. Med. Community Health. 10:1670. doi: 10.1136/fmch-2022-001670
Ferrara, E. (2024). Fairness and bias in artificial intelligence: a brief survey of sources, impacts, and mitigation strategies. Sci. 6:3. doi: 10.3390/sci6010003
Gopichandran, V., and Chetlapalli, S. K. (2013). Dimensions and determinants of trust in health care in resource poor settings–a qualitative exploration. PLoS ONE 8:e69170. doi: 10.1371/journal.pone.0069170
Green, B. L., Murphy, A., and Robinson, E. (2024). Accelerating health disparities research with artificial intelligence. Front. Digit. Health 6:1330160. doi: 10.3389/fdgth.2024.1330160
Haider, S. A., Borna, S., Gomez-Cabello, C. A., Pressman, S. M., Haider, C. R., Forte, A. J., et al. (2024). The algorithmic divide: a systematic review on AI-driven racial disparities in healthcare. J. Racial Ethn. Health Disparit. doi: 10.1007/s40615-024-02237-0. [Epub ahead of print].
He, J., Baxter, S. L., Xu, J., Xu, J., Zhou, X., Zhang, K., et al. (2019). The practical implementation of artificial intelligence technologies in medicine. Nat. Med. 25, 30–36. doi: 10.1038/s41591-018-0307-0
Hogg, H. D. J., Al-Zubaidy, M., Talks, J., Denniston, A. K., Kelly, C. J., Malawana, J., et al. (2023). Stakeholder perspectives of clinical artificial intelligence implementation: systematic review of qualitative evidence. J. Med. Internet Res. 25:e39742. doi: 10.2196/39742
Jaiswal, J., and Halkitis, P. N. (2019). Towards a more inclusive and dynamic understanding of medical mistrust informed by science. Behav. Med. 45, 79–85. doi: 10.1080/08964289.2019.1619511
Katz, M. G., Jacobson, T. A., Veledar, E., and Kripalani, S. (2007). Patient literacy and question-asking behavior during the medical encounter: a mixed-methods analysis. J. Gen. Intern. Med. 22, 782–786. doi: 10.1007/s11606-007-0184-6
Keij, S. M., van Duijn-Bakker, N., Stiggelbout, A. M., and Pieterse, A. H. (2021). What makes a patient ready for Shared Decision Making? A qualitative study. Patient Educ. Couns. 104, 571–577. doi: 10.1016/j.pec.2020.08.031
Khan, B., Fatima, H., Qureshi, A., Kumar, S., Hanan, A., Hussain, J., et al. (2023). Drawbacks of artificial intelligence and their potential solutions in the healthcare sector. Biomed. Mater. Dev. 1, 731–738. doi: 10.1007/s44174-023-00063-2
Khanna, N. N., Maindarkar, M. A., Viswanathan, V., Fernandes, J. F. E., Paul, S., Bhagawati, M., et al. (2022). Economics of artificial intelligence in healthcare: diagnosis vs. treatment. Healthcare 10:2493. doi: 10.3390/healthcare10122493
Kirch, D. G., and Petelle, K. (2017). Addressing the physician shortage: the peril of ignoring demography. JAMA. 317, 1947–1948. doi: 10.1001/jama.2017.2714
Knight, D. R. T., Aakre, C. A., Anstine, C. V., Munipalli, B., Biazar, P., Mitri, G., et al. (2023). Artificial intelligence for patient scheduling in the real-world health care setting: a metanarrative review. Health Policy Technol. 12:100824. doi: 10.1016/j.hlpt.2023.100824
Lamem, M. F. H., Sahid, M. I., and Ahmed, A. (2025). Artificial intelligence for access to primary healthcare in rural settings. J. Med. Surg. Public Health 5:100173. doi: 10.1016/j.glmedi.2024.100173
Lin, S. Y., Mahoney, M. R., and Sinsky, C. A. (2019). Ten ways artificial intelligence will transform primary care. J. Gen. Intern. Med. 34, 1626–1630. doi: 10.1007/s11606-019-05035-1
López, D. M., Rico-Olarte, C., Blobel, B., and Hullin, C. (2022). Challenges and solutions for transforming health ecosystems in low- and middle-income countries through artificial intelligence. Front. Med. 9:958097. doi: 10.3389/fmed.2022.958097
Lorenzini, G., Arbelaez Ossa, L., Shaw, D. M., and Elger, B. S. (2023). Artificial intelligence and the doctor-patient relationship expanding the paradigm of shared decision making. Bioethics 37, 424–429. doi: 10.1111/bioe.13158
Mikkelsen, J. G., Sørensen, N. L., Merrild, C. H., Jensen, M. B., and Thomsen, J. L. (2023). Patient perspectives on data sharing regarding implementing and using artificial intelligence in general practice - a qualitative study. BMC Health Serv. Res. 23:335. doi: 10.1186/s12913-023-09324-8
Nilsen, P., Sundemo, D., Heintz, F., Neher, M., Nygren, J., Svedberg, P., et al. (2024). Towards evidence-based practice 2.0: leveraging artificial intelligence in healthcare. Front. Health Serv. 4:1368030. doi: 10.3389/frhs.2024.1368030
Ohr, S., Jeong, S., and Saul, P. (2017). Cultural and religious beliefs and values, and their impact on preferences for end-of-life care among four ethnic groups of community-dwelling older persons. J. Clin. Nurs. 26, 1681–1689. doi: 10.1111/jocn.13572
Paduch, A., Kuske, S., Schiereck, T., Droste, S., Loerbroks, A., Sørensen, M., et al. (2017). Psychosocial barriers to healthcare use among individuals with diabetes mellitus: a systematic review. Prim. Care Diabetes. 11, 495–514. doi: 10.1016/j.pcd.2017.07.009
Powell, J. (2019). Trust me, I'm a chatbot: how artificial intelligence in health care fails the Turing test. J. Med. Internet Res. 21:e16222. doi: 10.2196/16222
Rojahn, J., Palu, A., Skiena, S., and Jones, J. J. (2023). American public opinion on artificial intelligence in healthcare. PLoS ONE 18:e0294028. doi: 10.1371/journal.pone.0294028
Romero-Brufau, S., Wyatt, K. D., Boyum, P., Mickelson, M., Moore, M., Cognetta-Rieke, C. A., et al. (2020). A lesson in implementation: a pre-post study of providers' experience with artificial intelligence-based clinical decision support. Int. J. Med. Inform. 137:104072. doi: 10.1016/j.ijmedinf.2019.104072
Sauerbrei, A., Kerasidou, A., Lucivero, F., and Hallowell, N. (2023). The impact of artificial intelligence on the person-centred, doctor-patient relationship: some problems and solutions. BMC Med. Inform. Decis. Mak. 23:73. doi: 10.1186/s12911-023-02162-y
Schwalbe, N., and Wahl, B. (2020). Artificial intelligence and the future of global health. Lancet 395, 1579–1586. doi: 10.1016/S0140-6736(20)30226-9
Seitz, L. (2024). Artificial empathy in healthcare chatbots: Does it feel authentic? Comp. Human Behav. 2:100067. doi: 10.1016/j.chbah.2024.100067
Shah, C., Nachand, D., Wald, C., and Chen, P.-H. (2023). Keeping patient data secure in the age of radiology artificial intelligence: cybersecurity considerations and future directions. J. Am. Coll. Radiol. 20, 828–835. doi: 10.1016/j.jacr.2023.06.023
Syed, A. B., and Zoga, A. C. (2018). Artificial intelligence in radiology: current technology and future directions. Semin. Musculoskelet. Radiol. 22, 540–545. doi: 10.1055/s-0038-1673383
Tack, C. (2019). Artificial intelligence and machine learning | applications in musculoskeletal physiotherapy. Musculoskelet Sci Pract. 39, 164–169. doi: 10.1016/j.msksp.2018.11.012
Tantray, J., Patel, A., Wani, S. N., Kosey, S., and Prajapati, B. G. (2024). Prescription precision: a comprehensive review of intelligent prescription systems. Curr. Pharm. Des. 30, 2671–2684. doi: 10.2174/0113816128321623240719104337
Tully, M. P. (2012). Prescribing errors in hospital practice. Br. J. Clin. Pharmacol. 74, 668–675. doi: 10.1111/j.1365-2125.2012.04313.x
World Health Organization (2021). The Importance of Ethics in Artificial Intelligence. Geneva: World Health Organization.
Xiao, X., Xue, L., Ye, L., Li, H., and He, Y. (2021). Health care cost and benefits of artificial intelligence-assisted population-based glaucoma screening for the elderly in remote areas of China: a cost-offset analysis. BMC Public Health 21:1065. doi: 10.1186/s12889-021-11097-w
Keywords: artificial intelligence, ethics, social determinants of health, SDH, health inequalities, technology in healthcare, social disparities in health
Citation: Li DM, Parikh S and Costa A (2025) A critical look into artificial intelligence and healthcare disparities. Front. Artif. Intell. 8:1545869. doi: 10.3389/frai.2025.1545869
Received: 16 December 2024; Accepted: 21 February 2025;
Published: 06 March 2025.
Edited by: Orsolya Edit Varga, University of Debrecen, Hungary
Reviewed by: Muhammad Shahid Iqbal, Anhui University, China
Copyright © 2025 Li, Parikh and Costa. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Ana Costa, ana.costa@stonybrookmedicine.edu