MINI REVIEW article

Front. Digit. Health, 30 May 2024
Sec. Human Factors and Digital Health

Digital pathology implementation in cancer diagnostics: towards informed decision-making

Oksana Sulaieva1,2*, Oleksandr Dudin1, Olena Koshyk1, Mariia Panko1 and Nazarii Kobyliak1,2
  • 1Medical Laboratory CSD, Kyiv, Ukraine
  • 2Endocrinology Department, Bogomolets National Medical University, Kyiv, Ukraine

Digital pathology (DP) has become part of the cancer healthcare system, creating additional value for cancer patients. DP implementation in clinical practice provides many benefits but also harbors hidden ethical challenges affecting physician-patient relationships. This paper addresses the ethical obligation to transform the physician-patient relationship for informed and responsible decision-making when using artificial intelligence (AI)-based tools for cancer diagnostics. DP application improves the performance of the Human-AI team, shifting the focus from the challenges of AI towards the benefits of Augmented Human Intelligence (AHI). AHI enhances analytical sensitivity and empowers pathologists to deliver accurate diagnoses and assess predictive biomarkers for further personalized treatment of cancer patients. At the same time, patients have the right to know about the use of AI tools, their accuracy, strengths and limitations, the measures taken to protect privacy, and the legal protections in place. This right defines the duty of physicians to provide relevant information about AHI-based solutions to patients and the community, building transparency, understanding and trust, respecting patients' autonomy, and empowering informed decision-making in oncology.

1 Introduction

Pathology is essential in cancer diagnostics and guides further clinical decisions and patient management (1). The complexity and multimodality of cancer at the individual level complicate clinical decision-making, which must integrate clinical and laboratory data, demographic and anthropometric features, lifestyle, personal and family history, tumor stage, the histological and molecular type of the tumor, and its heterogeneity and evolution during treatment. From this perspective, artificial intelligence (AI) was recognized to play an essential role in supporting health system management, especially in cancer care, where AI-based solutions facilitate the assessment of public health threats, improve diagnostic access, shorten turnaround time and enhance the accuracy of cancerous lesion detection (2). Electronic health records, technological advancements and AI-based tools help minimize the existing constraints, fostering accurate patient-specific data analysis and enabling precision medicine approaches for individualizing cancer care and improving patient outcomes (3, 4). The implementation of precision medicine and technological advances in pathology and genetics drove the shift from physical histological slide microscopy toward a digital format, facilitating the evolution of digital pathology (DP) (4).

DP is rooted in obtaining high-resolution whole slide images (WSI) for further observation, sharing, storage, and advanced analysis. The implementation of digital scanners in the workflow set the stage for the next step toward pathology laboratory automation and digitalization: AI-based approaches that recognize particular patterns in WSI for diagnostic, prognostic and predictive purposes (5). AI, the Internet of Things (IoT) concept and other emerging technologies transformed routine pathology practice by enabling assessment of WSI, their sharing for "second opinion" consultations, reducing the human and infrastructural resource burden (6), and mining multiple visual features for more accurate diagnostics, prognosis, prediction of response to various treatments and better patient outcomes (7). With the high-speed progress in cancer multi-omics studies, AI-based solutions have been facilitating the discovery of novel biomarkers and the implementation of precision medicine (1). AI-based tools in pathology were also essential for developing the molecular classification of various cancers (the TCGA project), discovering novel approaches for predicting genetic alterations, and drug discovery for better treatment outcomes (7). Thus, the application of DP in convergence with AI-based tools for WSI analysis provides multiple benefits for both healthcare professionals and patients by empowering pathologists to deliver accurate diagnoses and assess predictive biomarkers for more personalized treatment (4). Although recent policy strategies in many developed countries have directed funding to support the deployment of digital pathology, there are still gaps in the legal dimension and ethical concerns about ensuring patients' autonomy, protection and fair access to novel technologies (8).

The digital transformation of pathology impacts physician-patient relationships. The ethical dimensions of these relationships are shaped by Beauchamp and Childress' bioethical principles: respect for autonomy, beneficence, non-maleficence, and justice (9). A growing body of studies addresses the safety and quality of AI-based tools in relation to physicians' duties to promote patients' best interests (aligning with beneficence) and to minimize harm (according to the non-maleficence imperative) (10, 11). Challenges related to physician use of AI-based tools include automation bias, technical limitations, data governance and resource allocation, fair access to novel technologies, and transparency related to patients' autonomy and further decision-making (12–14). Many ethical concerns about patient autonomy when using AI have been identified, including the disclosure and informed consent process and privacy and security issues, which raise the need for transparency and building trust in AI-powered algorithm applications (4, 10). Nevertheless, patient autonomy is often beyond the scope of ethical analysis, even though the incorporation of AI algorithms has changed not only the workflow but also the essence of the diagnostic process in pathology laboratories. Only a few studies address a patient-centric approach to implementing AI-based tools in pathology (15). AI has been becoming an additional "participant" in diagnostic and therapeutic pathways, affecting physician-patient relationships, accessing patient data and providing recommendations concerning sensitivity to various treatments. Implementing DP affects the patient's right to know about diagnostic and treatment pathways. Questions arise about whether patients should be informed about the use of AI for diagnostics, what information they should know, and whether patients should have a choice between traditional pathology and DP-supported diagnostics. How these questions are answered has an impact on patients' rights, protection, and autonomy.

This paper argues that physicians using digital and computational pathology tools for cancer diagnostics have an ethical obligation to transform the physician-patient relationship to enable informed and responsible decisions.

2 Ethical imperative for transparency of using DP tools

The importance of transparency in the use of AI-supported solutions in pathology, given their effect on clinical decisions, has reached global convergence (15). Most biases concerning AI are related to low literacy and unfounded fears, so transparency can foster trust and define the responsible and relevant use of the innovation. Trust in novel technologies relies on proficiency and rises with experience. For instance, a Swedish report on the use of WSI in 2006–2017 demonstrated that only 38% of cases were diagnosed digitally, while in the remaining cases pathologists switched between digital and glass slides, reflecting a lack of confidence when signing out reports with DP (16, 17). By 2017, however, some studies demonstrated >90% acceptance of WSI by pathologists for diagnostic purposes (18). Moreover, the COVID-19 pandemic-driven transition toward virtual activities pushed pathologists toward networked education and consultations using shared WSI. This established the acceptance of virtual meetings and digital slides as a substitute for glass specimens in regular practice (19). Similarly, the first implementations of AI-based computational pathology tools for image analysis relied on explainability and a causative approach (20). In 1995, the FDA approved the first AI-supported medical device for assisting cytologists in recognizing abnormal cells in PAP smears, and since then, AI-driven solutions have played a pivotal role in cervical screening. In 2023, an ML algorithm was reported to outperform the standard clinical risk model for assessing the 5-year risk of breast cancer development, underscoring AI's role in cancer management (21).

Nevertheless, transparency is essential for DP tool developers, policymakers, technology users (pathologists and clinicians), and patients whose data are handled by AI-powered solutions. Guaranteeing that AI-based solutions are transparent, unbiased, and ethically justifiable is paramount to maintaining the trust of various stakeholders (4). The achievable depth of transparency depends, at least partly, on the type of machine learning (ML) method applied (supervised, unsupervised or reinforcement learning) (22). The choice among the wide range of algorithms within various ML platforms depends on the goal and tasks, the type and volume of data to be analyzed, the size of the dataset, the learning approach, the required level of accuracy, and the need for clustering, hierarchical output or speed of data assessment. Many ML algorithms do not reveal the principles and logic underlying their decisions (23, 24). This "black box" paradox, the obscure and non-explainable nature of unsupervised AI-based decisions, hinders trust and impedes the widespread integration of computational pathology tools in diagnostic settings. These challenges drove the development of explainable AI (XAI) approaches for enhancing the interpretability, explainability, justifiability, contestability and transparency of applied AI models (25).
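
To make the XAI idea concrete, the following minimal Python sketch illustrates one simple model-agnostic explanation technique, occlusion sensitivity: regions whose masking most degrades the prediction are highlighted for the pathologist. The `predict_tumor_prob` stub and the tile dimensions are hypothetical placeholders, not any tool discussed in this review.

```python
# A minimal, model-agnostic explainability sketch (occlusion sensitivity).
# Assumption: `predict_tumor_prob` stands in for any trained tile classifier;
# it is a hypothetical placeholder, not a published DP tool.
import numpy as np

def predict_tumor_prob(tile: np.ndarray) -> float:
    """Placeholder classifier: returns a pseudo-probability for demo purposes."""
    return float(tile.mean() / 255.0)

def occlusion_saliency(tile: np.ndarray, patch: int = 32, stride: int = 32) -> np.ndarray:
    """Slide a gray patch over the tile and record the drop in predicted
    probability: large drops mark regions the model relied on."""
    base = predict_tumor_prob(tile)
    h, w = tile.shape[:2]
    saliency = np.zeros((h // stride, w // stride))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = tile.copy()
            occluded[y:y + patch, x:x + patch] = 128  # neutral gray occluder
            saliency[i, j] = base - predict_tumor_prob(occluded)
    return saliency

tile = np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8)
heatmap = occlusion_saliency(tile)
print(heatmap.shape)  # (8, 8) coarse map that could be overlaid on the tile
```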

Despite the exciting results of AI-based solution development and testing under controlled conditions, real-world application often diverges from the initial findings. The causes include the variety and size of the training set, the depth of data incorporated in the analysis, dependence on technical parameters (scanner, processing, staining), and the quality of primary data annotation (26). ML/AI applications can also be undermined by common technical artifacts, including blur, tissue folds, tears, and color variations. This has created the need for normalization algorithms and color augmentation, as well as the requirement for histopathology professionals to revalidate AI-based applications whenever the preanalytical stage of the workflow is updated. A possible "replication crisis" in DP and the clinical harm of using unreliable AI-based solutions in practice dictate the need for proper oversight (27, 28).
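
As an illustration of the normalization step mentioned above, the sketch below implements Reinhard-style color normalization, a widely used approach for reducing stain variability across scanners and laboratories, by matching per-channel LAB statistics to a reference slide. It assumes scikit-image and NumPy are available; the reference statistics shown are illustrative values, not measured ones.

```python
# Minimal sketch of Reinhard-style color normalization for histology tiles:
# match per-channel LAB statistics of a tile to those of a reference slide.
import numpy as np
from skimage import color

def reinhard_normalize(tile_rgb: np.ndarray,
                       ref_mean: np.ndarray,
                       ref_std: np.ndarray) -> np.ndarray:
    """tile_rgb: float RGB image in [0, 1]; ref_mean/ref_std: per-channel
    LAB statistics taken from a reference slide."""
    lab = color.rgb2lab(tile_rgb)
    mean = lab.reshape(-1, 3).mean(axis=0)
    std = lab.reshape(-1, 3).std(axis=0) + 1e-8  # avoid division by zero
    lab = (lab - mean) / std * ref_std + ref_mean
    return np.clip(color.lab2rgb(lab), 0.0, 1.0)

# Illustrative reference statistics (in practice, measured once on a
# laboratory's reference H&E slide).
ref_mean = np.array([65.0, 15.0, -10.0])
ref_std = np.array([12.0, 8.0, 6.0])
tile = np.random.rand(256, 256, 3)
normalized = reinhard_normalize(tile, ref_mean, ref_std)
```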

To attain greater transparency, institutions that develop or deploy AI-based systems should enhance the disclosure of information about AI applications' benefits, confirmed by real-world data and healthcare providers' experience (10). The following issues are typically discussed with stakeholders: the use of AI (29), source code and data use, evidence about AI tools' performance and their limitations (30), legal regulations and oversight (10), data protection strategies (31), and communication with the community for building trust (10). In addition, the SPIRIT-AI and CONSORT-AI guidelines for clinical trials using AI addressed the existing challenges and were extended to include additional requirements: a clear description of the intended use of the AI, indications for how to use the AI intervention in the clinical setting, details on the data inputs used to train the AI tool and the outputs it produces, descriptions of how errors are identified and revised, and the human-computer interplay (28). Thus, transparency is a prerequisite for the responsible development and implementation of computational pathology in practice.

3 Ethical requirements for privacy protection when using DP tools

Despite the Ethically Aligned Design initiative led by the Institute of Electrical and Electronics Engineers (IEEE) for governing and standardizing AI-based technologies (32), and the Digital Pathology Association's activities to guide AI-based pathology practice ethically, gaps in DP's ethical status and transparency to patients persist. Good Clinical Laboratory Practice (GCLP), ISO 15189 and the Clinical Laboratory Improvement Amendments (CLIA) dictate the need to ensure the ethical integrity of all procedures related to data acquisition and management in laboratories, and WSI analysis falls under these regulations (33). Patients must provide informed consent for the use of their data (including digital slides and associated personal information); therefore, they must be informed about DP application (33, 34).

Using DP tools to share slides during "second opinion" consultations, which seek other experts' opinions on cases of high complexity, also requires privacy protection under the federal Health Insurance Portability and Accountability Act (HIPAA) or the European General Data Protection Regulation (GDPR), which safeguard data protection for individuals within the USA and the European Union (EU), respectively (35, 36). The EU's GDPR and the OECD Artificial Intelligence Papers have already articulated policies for increasing the transparency of AI use in the health sector. The most important recommendations include (1) public and provider engagement in discussing opportunities, risks, and concerns; (2) transparent reporting of AI performance and AI incidents, including impacts, lessons learned, and adjustments; (3) establishing rules about data control; and (4) incentivizing and overseeing adherence to responsible AI practices and codes of conduct (2). In addition, the recently updated EU AI Act (37) addresses the obligation of transparency for providers and users of AI. Although strict limitations on and oversight of health data sharing are justifiable from the perspective of protecting data subjects, they also restrain research into the development of life-saving treatments and personalized medicine. To cope with this intrinsic conflict, the European Commission proposed a regulation to establish the European Health Data Space. The Commission articulated rules focused on two main goals: (1) to put citizens at the center of healthcare, giving them full control over their data to obtain better healthcare across the EU; and (2) to open up data for research and public health (38).

Despite the regulations on protecting personal data and ensuring confidentiality in healthcare, the privacy risks related to WSI storage and sharing have not been completely clarified. WSI belongs to a specific data category: a digital slide is simply a high-resolution image, in gigapixels or tens of gigapixels, reflecting the detailed structure of tissue samples obtained, processed, cut, stained, and scanned in the laboratory. From this perspective, WSI could be categorized as low-risk data (39). However, the scan of the histological specimen can be labeled with a patient identifier and linked to clinical or laboratory data, so WSI should be considered as sensitive as other clinical data. These factors justify guidelines for releasing WSI as personal data: regulating data transfer and/or processing legally, pseudonymizing and minimizing the data set, safeguarding data via technical and organizational measures, and considering information leakage from AI models (40). Thus, digital pathology systems that handle patient data associated with WSI should align with regulations to safeguard personal information when storing, transmitting, and allowing access to third parties, which requires corresponding security measures to protect the integrity of patient data. However, there are still no articulated requirements for the list of data to be collected for proper analysis with respect to the clinical context of an individual's case, for the characteristics of data storage capacity, or for regulating WSI sharing with standardized annotations. Without common legal and operational standards for data input and protection, computational pathology algorithms will vary in quality, timeliness, cost, and outcomes. Standardizing the development, validation and use of computational pathology solutions will therefore strengthen the transparency and trustworthiness of responsible AI application.
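
A minimal sketch of the pseudonymization and data-minimization safeguards listed above: a keyed hash (HMAC) yields a stable pseudonym that only the key-holding laboratory can re-link to the patient, while an allow-list drops direct identifiers before sharing. The field names and key handling are hypothetical, not a standard schema.

```python
# Minimal sketch of pseudonymizing and minimizing WSI metadata before sharing.
# A keyed hash (HMAC) lets only the key holder re-link the pseudonym to the
# patient; field names here are hypothetical, not a standard schema.
import hmac
import hashlib

SECRET_KEY = b"laboratory-held-secret"  # stored separately from shared data

def pseudonymize(patient_id: str) -> str:
    """Derive a stable, non-reversible pseudonym from a patient identifier."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

ALLOWED_FIELDS = {"stain", "organ", "scanner_model", "magnification"}

def minimize(metadata: dict) -> dict:
    """Keep only fields needed for analysis; drop direct identifiers."""
    return {k: v for k, v in metadata.items() if k in ALLOWED_FIELDS}

record = {
    "patient_id": "P-000123", "patient_name": "REDACTED",
    "stain": "H&E", "organ": "breast", "scanner_model": "X",
    "magnification": "40x",
}
shared = {"subject": pseudonymize(record["patient_id"]), **minimize(record)}
print(shared)  # pseudonymized, minimized record safe(r) to share
```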

On the other hand, the GDPR requires that data subjects have a right to "meaningful information about the logic involved" in data analysis (15). This requires disclosure not only of the use of AI-based tools but also some clarification of how AI decision-making works (41). The evolving nature of AI entails continuous learning and emerging new properties, which bring both benefits (enhanced diagnostic accuracy and wider predictive potential) and risks (for instance, errors and discrepancies in recognizing rare features); these require proper risk assessment, post-implementation monitoring, oversight, and corresponding disclosure to patients, clinicians and the community. The US Food and Drug Administration (FDA) and the European In Vitro Diagnostic Medical Device Directive (CE-IVD) approved several algorithms for the assessment of predictive biomarkers (such as ER, PR, HER-2 and Ki-67 expression in breast cancer, Gleason grading in prostate cancer, etc.) (42), regulating the clinical utility of DP. At the same time, the FDA proposes a regulatory amendment for AI/ML-based software to be considered a medical device, transforming the perception of AI (43). Similarly, in the European Union (EU), any AI-based devices and software declared to be used for diagnostics, disease prevention, monitoring and treatment are considered medical devices (44). To guide the use of software in clinical practice, the FDA also proposed Good Machine Learning Practices (GMLPs), providing 10 basic principles to promote the safe and effective application of "medical devices that use artificial intelligence and machine learning (AI/ML)" (43). At the same time, the GMLPs also address the performance of the Human-AI team. Such an approach drives the transformation of users' perception from artificial intelligence challenges toward the Augmented Human Intelligence (AHI) paradigm.

4 From artificial intelligence to Augmented Human Intelligence

There is a trend away from a "technology-centric" approach toward humanizing the use of ML-based tools in pathology, mirrored in the concept of AHI. While some AI algorithms operate autonomously and independently, AHI uses ML to enhance human intelligence by providing actionable data. AHI's efficiency relies on the synergy between human experience and ML's ability to learn, facilitating continuous improvement without additional programming. The transformation of DP's core concept from AI toward AHI is based on a realistic assessment of the procedures and responsibilities in the diagnostic process in pathology (45). AHI relies on computers and AI-based algorithms to facilitate and accelerate pathologists' work in image and data analysis (13). For example, convolutional neural networks allow decomposition of a slide image and extraction of visual and subvisual features typical of particular histological and molecular subtypes of tumors, which can easily be overlooked during human-based microscopy (46). Optimized AI-based biomarker assessment can save time and provide pathologists with accurate quantification of various protein expressions, increasing analytical sensitivity and helping to identify high-risk patients and pre-select individuals for different therapies in line with patients' best interests (47). Thus, AI application provides additional value for histopathological assessment.
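
For illustration, the sketch below shows the kind of CNN-based tile "decomposition" described here: cropped WSI tiles are mapped to fixed-length embeddings that downstream models or pathologist-facing tools could analyze. It uses an untrained ResNet-18 backbone from torchvision so the example stays self-contained; a real pipeline would load clinically validated weights.

```python
# Minimal sketch of CNN-based feature extraction from slide tiles, the kind of
# visual/subvisual "decomposition" described above. Uses an untrained ResNet-18
# backbone for self-containment; a real pipeline would load validated weights.
import torch
import torchvision.models as models

backbone = models.resnet18(weights=None)   # assumption: no pretrained download
backbone.fc = torch.nn.Identity()          # drop classifier -> 512-d embeddings
backbone.eval()

# A batch of 224x224 RGB tiles cropped from a WSI (random here for the demo).
tiles = torch.rand(8, 3, 224, 224)
with torch.no_grad():
    embeddings = backbone(tiles)           # shape: (8, 512)
print(embeddings.shape)
```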

AI-powered algorithms, however, do not make uncontrolled autonomous decisions within their specific applications. Fears that AI can replace pathologists and/or physicians are unsubstantiated. AHI's central question is how AI can support and improve pathologists' work, providing additional benefits for precise diagnostics and predicting patient responses to therapy. Overall, AHI promises to enhance diagnostic accuracy in pathology and the performance of healthcare (45). Human-AI teams seem to be more efficient in adjusting to rapidly changing treatment guidelines in oncology, which incorporate various novel biomarkers and therapeutic agents. However, AHI applications should rely on verified, validated models demonstrating high performance and reproducibility not only of the machine learning algorithms alone but also of the AI-pathologist interplay. In this context, having a "human in the loop" addresses the need to assess the performance of the Human-AI team, considering the interpretability of the model's output and the responsibility of human experts (15). Clear roles for AI and pathologists must be established to guide, assess, validate, and interpret the results of slide analysis from the perspective of responsibility and accountability to patients (45). Healthcare professional oversight is crucial for responsible decision-making and preventing harm, particularly in oncology.

5 Informed consent transformation in the era of digital pathology

A patient-centered approach to incorporating DP tools in the pathology laboratory transforms patient-physician relations and stimulates conversation about the use of AI-based algorithms for both clinicians and patients, leading to fully informed decision-making (7). The use of AI-based tools and their impact on patients' health should be disclosed before decision-making concerning diagnostics and treatment. The question arises as to how much pathologists should disclose about AI-powered tools during the diagnostic process and whether patients should be given the choice between traditional and AHI-based pathology.

Using AI-based technologies raises three aspects of informed consent: the patient's right to know about the use of AI tools for further decision-making, privacy concerns, and the legal protection of patients. In general, the patient's right to accept or reject medical diagnostics and treatment correlates with the duty of physicians to provide information. This remains true for the use of AI-powered tools for diagnostics and treatment recommendations. Patients must understand the novel approaches used and determine their alignment with their own values. Ploug argues that patients should be provided with all of the information needed for decision-making: how the AI systems use the data about the patient and their histological slides, the potential biases of the AI-based system, how the labor is distributed between AI and healthcare professionals, and the performance of the AI tool both on its own and within the AI-Human team (15).

AI can provide deep insights into patients' data, which may exacerbate privacy concerns (48, 49). Privacy has a dual ethical meaning: as an obligation to uphold and as a right to be protected (10). Applying the power of DP to the complex analysis of medical, histological, and molecular features allows patients to be stratified according to their risk of various cancer predispositions, predicts the probability of various mutations affecting patients' prognosis, and supports healthcare professionals in defining the appropriate management options for every patient (46). However, the flip side of the issue is the access to and reliability of patients' data. Providing AI tools with access to personal data can lead to privacy breaches and undesirable use of sensitive data. Individuals have a right to privacy and protection against harm. This implies a right to contest the use of health and other personal data during AI-supported diagnostics (50), which requires that patients be informed about the types of personal data used in AI diagnostics (for instance, clinical data, laboratory tests, scans, WSI, etc.).

On the other hand, health data can be of varying quality: some information may be outdated, one-sided, incomplete or erroneous (51). In such cases, AI-based diagnostics relying on inaccurate personal health data can themselves be inaccurate and harmful. This raises additional concerns about data sources, quality and reliability. Thus, as data sensitivity and quality depend on the source of the information, patients should be given the opportunity to contest the use of personal health data in AI-supported diagnostics with regard to data sources and types. Alternatively, patients could be given the opportunity to check and verify the data before they are fed into AI-based tools for diagnostics and prediction.
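
One way such contestation could be operationalized is sketched below: a hypothetical gate in a laboratory information system that releases to the AI tool only data types the patient has not contested. All class, field, and source names are illustrative, not part of any existing system.

```python
# Hypothetical sketch: gate which data types are released to an AI tool,
# honoring patient-contested sources. All field and source names are
# illustrative, not part of any existing laboratory system.
from dataclasses import dataclass, field

@dataclass
class DataItem:
    kind: str        # e.g. "WSI", "lab_test", "clinical_note"
    source: str      # originating system, so patients can contest by source
    value: object

@dataclass
class PatientDataPolicy:
    contested_sources: set = field(default_factory=set)
    permitted_kinds: set = field(default_factory=lambda: {"WSI", "lab_test"})

    def releasable(self, item: DataItem) -> bool:
        return (item.kind in self.permitted_kinds
                and item.source not in self.contested_sources)

policy = PatientDataPolicy(contested_sources={"legacy_ehr"})
items = [
    DataItem("WSI", "scanner_a", "slide_001.svs"),
    DataItem("lab_test", "legacy_ehr", {"Ki-67": "possibly outdated"}),  # contested
    DataItem("clinical_note", "clinic_b", "free text"),                  # not permitted
]
to_ai = [i for i in items if policy.releasable(i)]
print([i.kind for i in to_ai])  # only the WSI passes the gate
```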

Offering patients the choice of AI-supported diagnostics rather than traditional pathology entails the duty of healthcare institutions and laboratories to purchase and implement such systems in practice, enabling patient access to innovative technologies and increasing the quality of diagnostic services provided. In addition, initial and regular staff training is essential for the responsible implementation and use of emerging AI-based technologies. Both developers and users of AI-based systems should provide all relevant information about the logic, benefits and limitations if a patient raises concerns about the AI's advice. Another ethical obligation of healthcare providers is to train employees to provide information about AI systems to patients, for a better understanding of AI-supported data analysis and decisions. Thus, the informed consent process should incorporate disclosure of the use of DP tools for diagnostics and treatment recommendations, the benefits and limitations of the technology, considerations of the data sources used, and the data protection measures in place. The next element of the informed consent process is cognition and understanding of the information, which relies on the patient's competence and comprehension but also depends on the relevance of the information provided. Standardized, clear disclosure of information in plain language enhances understanding of the technologies used (9). Furthermore, patient education and public disclosure of technological advances empower all stakeholders and improve public acceptance of innovative technologies.

Assessment of various stakeholders' perspectives on using AI-based tools in medical practice highlights the importance of human involvement in the AI system's decisions and of strategies for guarding against errors (45). AI can be involved in the diagnostic process in different ways. Some systems are used for initial screening prior to diagnosis; others are directly involved in the diagnostic process, providing a preliminary report that must be validated by pathologists. The distribution of duties between AI and pathologists can improve the quality of diagnostics and the overall performance of AI-Human teams, while at the same time giving healthcare professionals the opportunity to make their own diagnostic decisions and articulating the responsibility of pathologists for the conclusions made. Such an approach is essential to maintain patient trust and confidence in partially automated solutions in DP. Thus, respecting an individual's right to fully informed consent, patients should be notified about the use of AI, the types of data utilized and their sources, the potential biases of the AI-based system related to training sets and models, the diagnostic performance of the tool both on its own and in combination with healthcare professionals, the variables used for AI decision-making, and possible alternatives. Information must also be provided about the division of labor between AI and pathologists, to clarify the roles of AI and humans in making a decision and to define responsibility for the pathology report.
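
The human-in-the-loop sign-out described here could, for illustration, be enforced in software roughly as follows: an AI draft report cannot be released until a named pathologist reviews it, keeping final responsibility with the human expert. This is a hypothetical sketch; the class and field names are invented for the example.

```python
# Hypothetical sketch of a human-in-the-loop sign-out: an AI draft report is
# never released until a pathologist reviews it, keeping responsibility for
# the final conclusion with the human expert. Names are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DraftReport:
    case_id: str
    ai_findings: str
    ai_model_version: str
    pathologist: Optional[str] = None
    signed_out: bool = False

    def sign_out(self, pathologist: str, approve: bool, revision: str = "") -> str:
        """Only a named pathologist can release the report; AI output alone
        is never final."""
        self.pathologist = pathologist
        self.signed_out = approve
        final = revision or self.ai_findings  # pathologist's wording takes precedence
        status = "RELEASED" if approve else "RETURNED FOR REVIEW"
        return (f"{self.case_id}: {status} by {pathologist} "
                f"(AI model {self.ai_model_version}): {final}")

draft = DraftReport("CASE-42", "Invasive carcinoma, ER 90%", "v1.3")
print(draft.sign_out("Dr. A", approve=True, revision="Invasive carcinoma, ER 85%"))
```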

Patient choice between digital and traditional pathology raises legal and ethical issues. Patients have rights to bodily integrity and autonomy, but sample processing procedures and the use of various diagnostic tools rely on professional standards, so healthcare providers have the right to apply the most appropriate tests for professional decision-making (45). Several questions need to be answered to determine how best to integrate AI, including whether AHI is ethically and legally so distinct from traditional pathology determinations, in terms of its risk/benefit ratio, that patients should have options regarding the degree of automation applied in their care. Even while the use of AI continues to develop, patients must be informed about the use of AI-based diagnostic tools that may serve as a basis for further treatment decisions. In full appreciation of shared decision-making, patients should have a right to refuse AI-guided treatment recommendations if they distrust AI technologies (45). More information about the use of AHI solutions and their benefits should be provided to patients and the public in general.

6 Conclusion

The application of DP provides multiple benefits for both healthcare professionals and patients. The AHI approach empowers pathologists to deliver accurate diagnoses and assess predictive biomarkers for further personalized treatment of cancer patients, improving their outcomes. However, the ethical implementation of DP requires revising physician-patient relationships. Such a transformation dictates the duty of healthcare institutions and laboratories to provide relevant information about AHI-based tools to cancer patients and the community, building transparency, understanding and trust, respecting patients' autonomy, and empowering informed decision-making in oncology.

Author contributions

OS: Conceptualization, Funding acquisition, Writing – original draft, Writing – review & editing. OD: Writing – review & editing. OK: Writing – review & editing. MP: Writing – review & editing. NK: Writing – review & editing.

Funding

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article.

OS is supported by the National Institutes of Health Fogarty International Center (Grant Award D43TW011506). The content of the manuscript is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.

Acknowledgments

The authors would like to thank Dr. Nanette Elster, Associate Professor at the Neiswanger Institute for Bioethics and Health Policy, Loyola University Chicago Stritch School of Medicine, for her professional support and editing of the text of the paper. The authors are also grateful to Professor Emily Anderson (Neiswanger Institute for Bioethics and Health Policy, Loyola University Chicago Stritch School of Medicine) for her professional support in the field of bioethics.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

1. Baxi V, Edwards R, Montalto M, Saha S. Digital pathology and artificial intelligence in translational medicine and clinical practice. Mod Pathol. (2022) 35:23–32. doi: 10.1038/S41379-021-00919-2

2. OECD iLibrary. Collective action for responsible AI in health. OECD Artificial Intelligence Papers. Available online at: https://www.oecd-ilibrary.org/science-and-technology/collective-action-for-responsible-ai-in-health_f2050177-en (accessed May 6, 2024).

3. Ahmed Z, Mohamed K, Zeeshan S, Dong XQ. Artificial intelligence with multi-functional machine learning platform development for better healthcare and precision medicine. Database. (2020) 2020:baaa010. doi: 10.1093/DATABASE/BAAA010

4. Kiran N, Sapna F, Kiran F, Kumar D, Raja F, Shiwlani S, et al. Digital pathology: transforming diagnosis in the digital age. Cureus. (2023) 15:e44620. doi: 10.7759/CUREUS.44620

5. Jiang Y, Yang M, Wang S, Li X, Sun Y. Emerging role of deep learning-based artificial intelligence in tumor pathology. Cancer Commun (London, England). (2020) 40:154–66. doi: 10.1002/CAC2.12012

6. Nabi J. Artificial intelligence can augment global pathology initiatives. Lancet (London, England). (2018) 392:2351–2. doi: 10.1016/S0140-6736(18)32209-8

7. Bera K, Schalper KA, Rimm DL, Velcheti V, Madabhushi A. Artificial intelligence in digital pathology—new tools for diagnosis and precision oncology. Nat Rev Clin Oncol. (2019) 16:703–15. doi: 10.1038/S41571-019-0252-Y

8. Betmouni S. Diagnostic digital pathology implementation: learning from the digital health experience. Digital Health. (2021) 7:20552076211020240. doi: 10.1177/20552076211020240

9. Beauchamp T, Childress J. Principles of biomedical ethics: marking its fortieth anniversary. Am J Bioeth. (2019) 19:9–12. doi: 10.1080/15265161.2019.1665402

10. Jobin A, Ienca M, Vayena E. The global landscape of AI ethics guidelines. Nat Mach Intell. (2019) 1:389–99. doi: 10.1038/s42256-019-0088-2

11. Liu X, Glocker B, McCradden MM, Ghassemi M, Denniston AK, Oakden-Rayner L. The medical algorithmic audit. Lancet Digit Health. (2022) 4:e384–97. doi: 10.1016/S2589-7500(22)00003-6

12. Morley J, Machado CCV, Burr C, Cowls J, Joshi I, Taddeo M, et al. The ethics of AI in health care: a mapping review. Soc Sci Med (1982). (2020) 260:113172. doi: 10.1016/J.SOCSCIMED.2020.113172

13. Char DS, Shah NH, Magnus D. Implementing machine learning in health care—addressing ethical challenges. N Engl J Med. (2018) 378:981–3. doi: 10.1056/NEJMP1714229

14. Sulaieva O, Falalyeyeva T, Kobyliak N, Pellicano R, Dudin O. Precision oncology: ethical challenges and justification. Minerva Med. (2022) 113:603–5. doi: 10.23736/S0026-4806.22.08063-6

15. Ploug T, Holm S. The four dimensions of contestable AI diagnostics—a patient-centric approach to explainable AI. Artif Intell Med. (2020) 107:101901. doi: 10.1016/J.ARTMED.2020.101901

16. Tizhoosh HR, Pantanowitz L. Artificial intelligence and digital pathology: challenges and opportunities. J Pathol Inform. (2018) 9:38. doi: 10.4103/JPI.JPI_53_18

17. Thorstenson S, Molin J, Lundström C. Implementation of large-scale routine diagnostics using whole slide imaging in Sweden: digital pathology experiences 2006–2013. J Pathol Inform. (2014) 5:14. doi: 10.4103/2153-3539.129452

18. Evans AJ, Salama ME, Henricks WH, Pantanowitz L. Implementation of whole slide imaging for clinical purposes: issues to consider from the perspective of early adopters. Arch Pathol Lab Med. (2017) 141:944–59. doi: 10.5858/ARPA.2016-0074-OA

19. Laohawetwanit T, Gonzalez RS, Bychkov A. Learning at a distance: results of an international survey on the adoption of virtual conferences and whole slide imaging by pathologists. J Clin Pathol. (2023):jcp-2023-208912. doi: 10.1136/JCP-2023-208912

20. Holzinger A, Malle B, Kieseberg P, Roth PM, Müller H, Reihs R, Zatloukal K (2017). Towards the Augmented Pathologist: Challenges of Explainable-AI in Digital Pathology. Available online at: https://arxiv.org/abs/1712.06657v1 (accessed May 6, 2024).

21. Arasu VA, Habel LA, Achacoso NS, Buist DSM, Cord JB, Esserman LJ, et al. Comparison of mammography AI algorithms with a clinical risk model for 5-year breast cancer risk prediction: an observational study. Radiology. (2023) 307:e222733. doi: 10.1148/RADIOL.222733

22. Rashidi HH, Tran NK, Betts EV, Howell LP, Green R. Artificial intelligence and machine learning in pathology: the present landscape of supervised methods. Acad Pathol. (2019) 6:2374289519873088. doi: 10.1177/2374289519873088

23. Jarrahi MH, Davoudi V, Haeri M. The key to an effective AI-powered digital pathology: establishing a symbiotic workflow between pathologists and machine. J Pathol Inform. (2022) 13:100156. doi: 10.1016/J.JPI.2022.100156

24. Acs B, Rantalainen M, Hartman J. Artificial intelligence as the next step towards precision pathology. J Intern Med. (2020) 288:62–81. doi: 10.1111/JOIM.13030

25. Yang G, Ye Q, Xia J. Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: a mini-review, two showcases and beyond. Int J Inf Fusion. (2022) 77:29–52. doi: 10.1016/J.INFFUS.2021.07.016

26. Kelly CJ, Karthikesalingam A, Suleyman M, Corrado G, King D. Key challenges for delivering clinical impact with artificial intelligence. BMC Med. (2019) 17:195. doi: 10.1186/S12916-019-1426-2

27. Komura D, Ishikawa S. Machine learning methods for histopathological image analysis. Comput Struct Biotechnol J. (2018) 16:34–42. doi: 10.1016/J.CSBJ.2018.01.001

28. McGenity C, Treanor D. Guidelines for clinical trials using artificial intelligence—SPIRIT-AI and CONSORT-AI†. J Pathol. (2021) 253:14–6. doi: 10.1002/PATH.5565

29. Human Rights Watch. The Toronto Declaration: Protecting the rights to equality and non-discrimination in machine learning systems. Available online at: https://www.hrw.org/news/2018/07/03/toronto-declaration-protecting-rights-equality-and-non-discrimination-machine (accessed November 21, 2023).

30. Floridi L, Cowls J, Beltrametti M, Chatila R, Chazerand P, Dignum V, et al. AI4People-an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Minds Mach. (2018) 28:689–707. doi: 10.1007/S11023-018-9482-5

31. Sage Advice Australia. The Ethics of Code Developing AI for Business with Five Core Principles. Available online at: https://www.sage.com/en-au/blog/the-ethics-of-code-developing-ai-for-business-with-five-core-principles-2/ (accessed November 24, 2023).

32. IEEE SA. The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Available online at: https://standards.ieee.org/industry-connections/ec/autonomous-systems/ (accessed November 24, 2023).

33. García-Rojo M, De Mena D, Muriel-Cueto P, Atienza-Cuevas L, Domínguez-Gómez M, Bueno G. New European union regulations related to whole slide image scanners and image analysis software. J Pathol Inform. (2019) 10:2. doi: 10.4103/JPI.JPI_33_18

34. Kearney SJ, Lowe A, Lennerz JK, Parwani A, Bui MM, Wack K, et al. Bridging the gap: the critical role of regulatory affairs and clinical affairs in the total product life cycle of pathology imaging devices and software. Front Med (Lausanne). (2021) 8:765385. doi: 10.3389/FMED.2021.765385

35. HHS.gov. HIPAA for Professionals. Available online at: https://www.hhs.gov/hipaa/for-professionals/index.html (accessed November 24, 2023).

36. Official Legal Text. General Data Protection Regulation (GDPR). Available online at: https://gdpr-info.eu/ (accessed November 24, 2023).

37. EU Artificial Intelligence Act. Up-to-date developments and analyses of the EU AI Act. Available online at: https://artificialintelligenceact.eu/ (accessed May 6, 2024).

38. European Commission. Political agreement on European Health Data Space. Available online at: https://ec.europa.eu/commission/presscorner/detail/en/ip_24_1346 (accessed May 6, 2024).

39. Holub P, Müller H, Bíl T, Pireddu L, Plass M, Prasser F, et al. Privacy risks of whole-slide image sharing in digital pathology. Nat Commun. (2023) 14:2577. doi: 10.1038/S41467-023-37991-Y

40. Holub P, Kohlmayer F, Prasser F, Mayrhofer MT, Schlünder I, Martin GM, et al. Enhancing reuse of data and biological material in medical research: from FAIR to FAIR-health. Biopreserv Biobank. (2018) 16:97–105. doi: 10.1089/BIO.2017.0110

41. Goodman B, Flaxman S. European union regulations on algorithmic decision making and a “right to explanation”. AI Magazine. (2017) 38:50–7. doi: 10.1609/AIMAG.V38I3.2741

42. Pell R, Oien K, Robinson M, Pitman H, Rajpoot N, Rittscher J, et al. The use of digital pathology and image analysis in clinical trials. J Pathol Clin Res. (2019) 5:81–90. doi: 10.1002/CJP2.127

43. Federal Register. Medical Devices; Quality System Regulation Amendments. Available online at: https://www.federalregister.gov/documents/2022/02/23/2022-03227/medical-devices-quality-system-regulation-amendments (accessed November 24, 2023).

44. European Parliament, Directorate-General for Parliamentary Research Services, Lekadir K, Quaglio G, Tselioudis Garmendia A, Gallin C. Artificial intelligence in healthcare: applications, risks, and ethical and societal impacts. European Parliament (2022). doi: 10.2861/568473

45. Redrup Hill E, Mitchell C, Brigden T, Hall A. Ethical and legal considerations influencing human involvement in the implementation of artificial intelligence in a clinical pathway: a multi-stakeholder perspective. Front Digit Health. (2023) 5:1139210. doi: 10.3389/FDGTH.2023.1139210

46. Dudin O, Mintser O, Sulaieva O. Artificial intelligence and next-generation pathology: the path to personalized medicine [in Ukrainian]. Proc Shevchenko Sci Soc Med Sci. (2021) 65:68–87. doi: 10.25040/NTSH2021.02.07

47. Mohlman JS, Leventhal SD, Hansen T, Kohan J, Pascucci V, Salama ME. Improving augmented human intelligence to distinguish burkitt lymphoma from diffuse large B-cell lymphoma cases. Am J Clin Pathol. (2020) 153:743–59. doi: 10.1093/AJCP/AQAA001

48. Cheng X, Lin X, Shen XL, Zarifis A, Mou J. The dark sides of AI. Electron Markets. (2022) 32:11–5. doi: 10.1007/S12525-022-00531-5

49. Grewal D, Guha A, Satornino CB, Schweiger EB. Artificial intelligence: the light and the darkness. J Bus Res. (2021) 136:229–36. doi: 10.1016/J.JBUSRES.2021.07.043

50. Naik N, Hameed BMZ, Shetty DK, Swain D, Shah M, Paul R, et al. Legal and ethical consideration in artificial intelligence in healthcare: who takes responsibility? Front Surg. (2022) 9:862322. doi: 10.3389/FSURG.2022.862322

51. Zhang J, Zhang Z. Ethics and governance of trustworthy medical artificial intelligence. BMC Med Inform Decis Mak. (2023) 23:7. doi: 10.1186/S12911-023-02103-9

Keywords: digital pathology, Augmented Human Intelligence, informed consent, physician-patient relationship, bioethics

Citation: Sulaieva O, Dudin O, Koshyk O, Panko M and Kobyliak N (2024) Digital pathology implementation in cancer diagnostics: towards informed decision-making. Front. Digit. Health 6:1358305. doi: 10.3389/fdgth.2024.1358305

Received: 19 December 2023; Accepted: 16 May 2024;
Published: 30 May 2024.

Edited by:

Andrea Barucci, National Research Council (CNR), Italy

Reviewed by:

Ilaria Romagnuolo, Careggi University Hospital, Italy
Maria Carmela Leo, Azienda Ospedaliera Universitaria Meyer IRCCS—Firenze, Italy
Valentina Colcelli, National Research Council (CNR), Italy

© 2024 Sulaieva, Dudin, Koshyk, Panko and Kobyliak. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Oksana Sulaieva, o.sulaieva@csd.com.ua
