The final, formatted version of the article will be published soon.
POLICY AND PRACTICE REVIEWS article
Front. Artif. Intell.
Sec. Medicine and Public Health
Volume 7 - 2024
doi: 10.3389/frai.2024.1400732
Navigating the Unseen Peril: Safeguarding Medical Imaging in the Age of AI
Provisionally accepted
- 1 Bloomberg School of Public Health, Johns Hopkins University, Baltimore, Maryland, United States
- 2 GYN Studios, Boston, United States
- 3 University of Konstanz, Konstanz, Baden-Württemberg, Germany
- 4 Johns Hopkins Medicine, Johns Hopkins University, Baltimore, Maryland, United States
- 5 Department of Radiology and Radiological Sciences, The Johns Hopkins Hospital, Johns Hopkins Medicine, Baltimore, Maryland, United States
- 6 SANS Institute, Rockville, United States
- 7 Department of Oncology, Johns Hopkins Medicine, Johns Hopkins University, Baltimore, Maryland, United States
In response to the increasing significance of artificial intelligence (AI) in healthcare, there has been growing attention, including a Presidential executive order to create an AI Safety Institute, to the potential threats posed by AI. While much of that attention has been given to the conventional risks AI poses to cybersecurity and critical infrastructure, here we provide an overview of some challenges of AI unique to the medical community. Beyond the obvious concerns about vetting algorithms that impact patient care, there are additional subtle yet equally important considerations: the potential harm AI poses to its own integrity and to the broader medical information ecosystem. Recognizing the role of healthcare professionals as both consumers of and contributors to AI training data, this article advocates a proactive approach to understanding and shaping the data that underpins AI systems, emphasizing the need for informed engagement to maximize the benefits of AI while mitigating its risks.

Just days after President Biden signed an executive order to protect against threats posed by AI, Vice President Harris announced the formation of an AI Safety Institute, noting that AI also has the potential to cause profound harm. So far, most of the discussion around AI safety has focused on the most obvious risks in the areas of biotechnology, cybersecurity, and critical infrastructure (Biden, 2023). However, several risks of AI are unique to the medical community. Cybersecurity threats are already an issue for healthcare delivery and have already negatively impacted patient care (Neprash et al., 2023). Even cyberattacks that do not adversely impact patient care are costly: for example, the Irish National Orthopedic Register was able to avoid impacting patient care by reverting to a paper-only system, but at a cost of 2850 additional person-hours for data reconciliation (Russell et al., 2023). The frequency and
Keywords: medical imaging, artificial intelligence, data quality, Precision Medicine, Data bias
Received: 14 Mar 2024; Accepted: 13 Nov 2024.
Copyright: © 2024 Maertens, Brykman, Hartung, Gafita, Bai, Hoelzer, Skoudis and Paller. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Channing Judith Paller, Department of Oncology, Johns Hopkins Medicine, Johns Hopkins University, Baltimore, Maryland 21218, United States
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.