GENERAL COMMENTARY article

Front. Med., 24 November 2021
Sec. Regulatory Science

Commentary: Diagnosing Diabetic Retinopathy With Artificial Intelligence: What Information Should Be Included to Ensure Ethical Informed Consent?

Michael D. Abramoff1,2*, Zachary Mortensen1 and Chris Tava2
  • 1Department of Ophthalmology and Visual Sciences, University of Iowa, Iowa City, IA, United States
  • 2Digital Diagnostics Inc., Coralville, IA, United States

A Commentary on
Diagnosing Diabetic Retinopathy With Artificial Intelligence: What Information Should Be Included to Ensure Ethical Informed Consent?

by Ursin, F., Timmermann, C., Orzechowski, M., and Steger, F. (2021). Front. Med. 8:695217. doi: 10.3389/fmed.2021.695217

Introduction

The recent paper by Ursin et al. (1) brings up crucial issues about the ethics of healthcare AI, and more specifically autonomous AI. These issues include the responsibilities and liabilities of disclosing information to patients. We appreciate that the authors illustrate these issues with IDx-DR, which, as the first autonomous AI approved by the US Food and Drug Administration (FDA), crystallized so many issues around “a computer making a medical decision,” as the authors carefully point out.

Ethics in Healthcare AI

During the development, validation, and implementation of IDx-DR (Digital Diagnostics Inc.), we started with an ethical framework built on the principles of non-maleficence, autonomy, and justice, which continues to be developed in various publications (2–4). This framework made it possible to track metrics around safety, equity, efficiency, transparency, validability, and accountability, allowing AI to be done the right way. This has led to validation of this biomarker-based AI under FDA oversight and its inclusion in standards of care. An important milestone was reimbursement at the $55 level by publicly funded insurance in the United States. This required an understanding of the value of “autonomous AI work” by all stakeholders, and has led to rapid and increasingly widespread implementation (5–7). The ethical framework thus continues to serve all stakeholders well, as we continue to jointly develop considerations and requirements for healthcare AI.

In this context, it is interesting to contrast healthcare autonomous AI with another type of digital technology: social media. Healthcare autonomous AI was grounded from the start in an ethical framework, and the technology stack was then built according to this framework. Social media, by contrast, started with the technology, and only now, almost two decades later, are we starting to grapple with its ethical consequences.

Clarifying Several Issues of Note

a. IDx-DR is a fully autonomous AI system. While the authors refer to IDx-DR as “AI-aided DR diagnosis,” it is in fact a fully autonomous AI system, as explained above. As a consequence, Digital Diagnostics assumes liability for the performance of the AI, as is now also required by the American Medical Association's AI Policy (8). We remain convinced that clarifying this liability issue helps foster acceptance by physicians and other stakeholders.

b. AI bias. The authors rightfully bring up the problem of undesirable bias, including racial and ethnic bias. In Digital Diagnostics' ethical framework, which includes metrics for equity, we recognize that the bias problem applies to the entire AI lifecycle: the choice of disease and disease severity to be diagnosed; AI algorithm design, including the use of priors such as biomarkers instead of prior-less, blank-slate black-box algorithms; the distribution of the training sets; rigorous validation for improved outcome metrics, including equity; and the choice of where the AI is implemented after regulatory approval (2, 9). As illustrated in these studies, IDx-DR is a biomarker-based AI system, and explicitly not a black-box system, avoiding the latter's many risks, including catastrophic failure and risk of bias (10–12). One way such an equity metric can be made concrete is sketched below.

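As a concrete illustration of the equity metric described above, the following minimal sketch (in Python; a hypothetical illustration, not Digital Diagnostics' actual code) compares the AI's sensitivity and specificity across demographic subgroups, and flags any subgroup whose sensitivity falls more than a chosen tolerance below the best-performing one.

    from collections import defaultdict

    def subgroup_performance(records):
        """Per-subgroup sensitivity and specificity.

        `records` is a list of dicts with hypothetical keys: 'group'
        (e.g., self-reported race/ethnicity), 'truth' (1 = disease present
        per the reference standard), and 'ai_output' (1 = AI detects disease).
        """
        counts = defaultdict(lambda: {"tp": 0, "fn": 0, "tn": 0, "fp": 0})
        for r in records:
            c = counts[r["group"]]
            if r["truth"] == 1:
                # Disease present: AI output is either a true positive or a false negative.
                c["tp" if r["ai_output"] == 1 else "fn"] += 1
            else:
                # Disease absent: AI output is either a true negative or a false positive.
                c["tn" if r["ai_output"] == 0 else "fp"] += 1
        return {
            group: {
                "sensitivity": c["tp"] / (c["tp"] + c["fn"]) if c["tp"] + c["fn"] else None,
                "specificity": c["tn"] / (c["tn"] + c["fp"]) if c["tn"] + c["fp"] else None,
            }
            for group, c in counts.items()
        }

    def equity_gaps(results, tolerance=0.05):
        """Flag subgroups whose sensitivity trails the best subgroup by more than `tolerance`."""
        sens = {g: m["sensitivity"] for g, m in results.items() if m["sensitivity"] is not None}
        best = max(sens.values())
        return {g: round(best - s, 3) for g, s in sens.items() if best - s > tolerance}
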
c. Patient informed consent. The authors are correct that informed consent of patients, notifying them that an AI will be used, should be considered. For IDx-DR, both the operators of the AI system and the physicians ordering it are trained in how to discuss the use of IDx-DR with patients. In fact, Digital Diagnostics has developed an AI facts label as part of the diagnostic output, so as to maximize transparency about which AI algorithms are used, their accuracy, and the relevant scientific evidence of their use and benefit; a sketch of what such a label could look like follows.

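To make the idea of an AI facts label concrete, the following minimal sketch shows one way such a label could be represented in code. The field names are hypothetical illustrations, not Digital Diagnostics' actual label format, and the performance figures are placeholders in the spirit of the pivotal trial (9).

    from dataclasses import dataclass, field

    @dataclass
    class AIFactsLabel:
        """Hypothetical transparency label attached to each diagnostic output."""
        system_name: str       # e.g., "IDx-DR"
        intended_use: str      # the indication the AI is validated for
        algorithm_type: str    # e.g., "biomarker-based" (not a black box)
        sensitivity: float     # from the pivotal validation trial
        specificity: float
        evidence: list = field(default_factory=list)  # citations to the trials

    label = AIFactsLabel(
        system_name="IDx-DR",
        intended_use="Detection of more than mild diabetic retinopathy in adults with diabetes",
        algorithm_type="biomarker-based",
        sensitivity=0.87,   # placeholder values; see the pivotal trial (9)
        specificity=0.90,
        evidence=["doi: 10.1038/s41746-018-0040-6"],
    )
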
d. CE Certification. Finally, IDx-DR is certified for autonomous use in the European Economic Area per its CE Certificate (13) and complies with GDPR Article 22.

Author Contributions

The Ethics in Healthcare AI section was written by MDA. Section A: IDx-DR is a fully autonomous AI system was written by CT and ZM. Section B: AI bias was written by MDA and CT. Section C: Patient informed consent was written by MDA and ZM. Section D: CE certification was written by MDA. All authors contributed to the article and approved the submitted version.

Funding

This work was supported in part by the Robert C. Watzke MD Professorship (to MDA) and by Research to Prevent Blindness, Inc., New York, New York (unrestricted grant to the Department of Ophthalmology and Visual Sciences, University of Iowa).

Conflict of Interest

MDA is a founder, executive chairman, consultant, investor, and shareholder of Digital Diagnostics. CT is a shareholder of Digital Diagnostics.

The remaining author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

1. Ursin F, Timmermann C, Orzechowski M, Steger F. Diagnosing diabetic retinopathy with artificial intelligence: what information should be included to ensure ethical informed consent? Front Med. (2021) 8:695217. doi: 10.3389/fmed.2021.695217

2. Abramoff MD, Tobey D, Char DS. Lessons learned about autonomous AI: finding a safe, efficacious, and ethical path through the development process. Am J Ophthalmol. (2020) 214:134–42. doi: 10.1016/j.ajo.2020.02.022

3. Char DS, Abràmoff MD, Feudtner C. Identifying ethical considerations for machine learning healthcare applications. Am J Bioethics. (2020) 20:7–17. doi: 10.1080/15265161.2020.1819469

4. Abramoff MD, Cunningham B, Patel B, Eydelman MB, Leng T, Sakamoto T, et al. Foundational considerations for artificial intelligence. Ophthalmology. (2021). doi: 10.1016/j.ophtha.2021.08.023. [Epub ahead of print].

5. American Diabetes Association. 11. Microvascular complications and foot care: standards of medical care in diabetes-2020. Diabetes Care. (2020) 43(Suppl. 1):S135–51. doi: 10.2337/dc20-S011

6. U.S. Food & Drug Administration (FDA). FDA Permits Marketing of Artificial Intelligence-Based Device to Detect Certain Diabetes-Related Eye Problems. (2018). Available online at: https://www.fda.gov/newsevents/newsroom/pressannouncements/ucm604357.htm

7. Centers for Medicare & Medicaid Services. Proposal to Establish Values for Remote Retinal Imaging (CPT Code 92229). (2021). 56ff. Available online at: https://public-inspection.federalregister.gov/2021-14973.pdf

8. American Medical Association (AMA) Board of Trustees Policy Summary. Augmented Intelligence in Healthcare. (2019). Available online at: https://www.ama-assn.org/system/files/2019-08/ai-2018-board-policy-summary.pdf

9. Abràmoff MD, Lavin PT, Birch M, Shah N, Folk JC. Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. NPJ Digit Med. (2018) 1:39. doi: 10.1038/s41746-018-0040-6

10. Finlayson SG, Bowers JD, Ito J, Zittrain JL, Beam AL, Kohane IS. Adversarial attacks on medical machine learning. Science. (2019) 363:1287–9. doi: 10.1126/science.aaw4399

11. Shah A, Lynch S, Niemeijer M, Amelon R, Clarida W, Folk J, et al. Susceptibility to misdiagnosis of adversarial images by deep learning based retinal image analysis algorithms. In: Proceedings – International Symposium on Biomedical Imaging (2018).

12. Lynch S, Abramoff MD. Catastrophic failure in image-based convolutional neural network algorithms for detecting diabetic retinopathy. Invest Ophthalmol Vis Sci. (2017) 58:3776.

13. UL CE Mark Certification. Certificate of Registration. (2018). Available online at: https://database.ul.com/certs/PDWS.A18142.pdf

Keywords: healthcare, autonomous, artificial intelligence, informed consent, patient outcome, equity, bias

Citation: Abramoff MD, Mortensen Z and Tava C (2021) Commentary: Diagnosing Diabetic Retinopathy With Artificial Intelligence: What Information Should Be Included to Ensure Ethical Informed Consent? Front. Med. 8:765936. doi: 10.3389/fmed.2021.765936

Received: 27 August 2021; Accepted: 03 November 2021;
Published: 24 November 2021.

Edited by:

Hossein Rabbani, Isfahan University of Medical Sciences, Iran

Reviewed by:

Zahra Amini, Isfahan University of Medical Sciences, Iran
Cristian Timmermann, University of Ulm, Germany
Frank Ursin, Hannover Medical School, Germany

Copyright © 2021 Abramoff, Mortensen and Tava. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Michael D. Abramoff, michael-abramoff@uiowa.edu
