- Legal Medicine and Toxicology, Department of Cardiac, Thoracic, Vascular Sciences and Public Health, University of Padua, Padua, Italy
The adoption of advanced artificial intelligence (AI) systems in healthcare is transforming the healthcare-delivery landscape. Artificial intelligence may enhance patient safety and improve healthcare outcomes, but it presents notable ethical and legal dilemmas. Moreover, as AI streamlines the analysis of the multitude of factors relevant to malpractice claims, including informed consent, adherence to standards of care, and causation, the evaluation of professional liability might also benefit from its use. Beginning with an analysis of the basic steps in assessing professional liability, this article examines the potential new medical-legal issues that an expert witness may encounter when analyzing malpractice cases and the potential integration of AI in this context. These changes, related to the use of integrated AI, will necessitate efforts on the part of judges, experts, and clinicians and may require new legislative regulation. A new type of expert witness will likely be necessary in the evaluation of professional liability cases. On the one hand, artificial intelligence will support the expert witness; on the other hand, it will introduce specific elements into the activities of healthcare workers. These elements will necessitate an expert witness with a specialized cultural background. Examining the steps of professional liability assessment indicates that the likely path for AI in legal medicine involves its role as a collaborative and integrated tool. Combining AI with human judgment in these assessments can enhance comprehensiveness and fairness. However, it is imperative to adopt a cautious and balanced approach to prevent complete automation in this field.
1 Introduction
Artificial intelligence (AI) has multiple applications in the medical-surgical context, both in scientific research and clinical care. Artificial intelligence has been and is used, for example, for the following purposes: (1) the diagnostic interpretation of images in ophthalmology (1), dermatology (2), gastroenterology (3), anatomic pathology (4), and radiology (5); (2) the interpretation of signals derived from electronic devices (6) and molecular data (genetics, tumor markers, protein structures, and medical records with medical history collection) (7); (3) the development of vaccines and drugs (8); (4) the prediction of access volume for healthcare facilities, the risk of complications in hospitalized patients, potential hospitalization, potentially serious complications, and prognosis (9–11); (5) the more precise classification of diseases (12); and (6) robotic surgery (13, 14).
Although AI promises to improve the quality of care and patient safety, potential adverse events attributable to errors can still occur (15). Errors occurring when healthcare professionals use AI, or even, in the future, adverse events attributable to autonomous AI applied in the healthcare field, will need to be evaluated differently than they are today and will likely require judges, lawyers, and expert witnesses to acquire new skills (16, 17).
Although the use of AI in the forensic medical field for assessing professional responsibility has not yet been developed, AI’s application has already expanded within the legal context, and discussions about its potential use as evidence in civil and criminal cases have taken place (18–20).
In view of the spread of medical litigation and the information presented above, it is worthwhile to delve into the current steps involved in the assessment of medical-professional liability cases and examine how AI can be integrated into these steps, as is already happening in specific contexts. This analysis will explore both how AI impacts the behavior of healthcare professionals and how it can assist expert witnesses in evaluating such cases. The authors will, therefore, first explain the differences between autonomous AI and AI that is integrated with the activities of healthcare professionals, describe the steps required today to assess a case involving professional liability, and then analyze how AI influences the process of evaluating the individual phases of professional-liability analysis.
2 Autonomous AI and working together with AI
Autonomous AI and integrated AI in clinical healthcare represent distinct approaches to the use of AI in medical settings (21). The term “autonomous AI” refers to AI systems that can operate independently and make decisions without human intervention. On the other hand, integrated AI plays a supportive role by combining AI insights with human expertise (22). Currently, autonomous AI is not sufficiently advanced or trusted for full clinical application, while integrated AI is gaining increasing acceptance (23). For example, AI systems can now analyze medical images to suggest potential diagnoses (24), but physicians still review the AI output to make final diagnostic and treatment decisions (25).
In considering the concepts of “human in the loop” and “human out of the loop,” which describe the degree of human involvement in decision-making and the collaboration between humans and machines (26, 27), it becomes evident that enhancing human interaction and control is paramount for the responsible and effective use of AI. The human-in-the-loop approach allows the results of machine learning (ML) methods to be modified by incorporating human skills and expertise, enabling human interaction at every step of the ML process (28). These processes are widely acknowledged to improve medical workflows and enhance patient safety. For instance, in the context of coma prognosis, a model has been studied that operates through a response loop: the human’s intention is derived by collecting biological signals and context data, and the decision is then interpreted into a human-recognizable system action, completing the loop (29).
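As a purely illustrative sketch of this pattern (the dataset, model, threshold, and the `clinician_review` stub below are hypothetical and are not drawn from the cited systems), a human-in-the-loop workflow might route a classifier’s low-confidence outputs to a clinician, whose decision is final:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for clinical data; 400 cases for training, 100 new cases.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, y_train, X_new = X[:400], y[:400], X[400:]

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

REVIEW_THRESHOLD = 0.80  # hypothetical cut-off for mandatory human review

def clinician_review(case):
    """Stand-in for the human step: in deployment, a clinician inspects
    the case and returns the final, authoritative decision."""
    return 1  # placeholder decision

final_labels = []
for case in X_new:
    proba = model.predict_proba(case.reshape(1, -1))[0]
    if proba.max() < REVIEW_THRESHOLD:
        final_labels.append(clinician_review(case))  # human in the loop
    else:
        final_labels.append(int(proba.argmax()))     # confident AI output
```

The design choice worth noting is that the threshold, not the model, encodes the degree of human involvement: lowering it moves the system toward autonomy, while raising it keeps more decisions with the clinician.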
The specific design of interfaces emerges as a key factor in achieving the goal of enhancing human interaction and control. Tailoring user interfaces to facilitate transparent communication between humans and AI systems is crucial for better understanding and interpretation of AI-generated insights (30). Intuitive and user-friendly interfaces could enhance the understanding of an algorithmic decision with a high level of causal understanding, empowering individuals to effectively engage with and influence the decision-making process (31). Moreover, the incorporation of feedback mechanisms and alerts in interfaces can keep humans informed about AI-generated decisions, enabling them to intervene when necessary. This iterative feedback loop promotes a collaborative relationship between humans and AI, leveraging the strengths of both for optimal outcomes (32).
Considering the above, it seems unlikely that AI will completely replace human clinicians in the near term (33). While AI is becoming more capable, the practice of medicine involves intricate reasoning, interpersonal skills, and ethical considerations that AI currently lacks (34). Nevertheless, AI will play an increasingly integral role in clinical practice as an augmenting technology (35). The most probable trajectory involves physicians leveraging AI as a collaborative tool with which to enhance their abilities (36, 37).
This review will focus on the medico-legal aspects of integrated AI in clinical healthcare.
3 Description of the literature search
In June 2023, one of the authors (CT) conducted a comprehensive literature review by searching MEDLINE/PubMed. Temporal constraints were applied, limiting the scope to articles published within the last 5 years (2018–2023). Only full-text publications in English were deemed suitable for inclusion in the study. The search employed the following phrases: “artificial intelligence medical malpractice,” “artificial intelligence legal medicine,” “artificial intelligence professional liability assessment,” and “artificial intelligence informed consent.”
The articles were meticulously reviewed by CC and the other authors, with a specific focus on elements pertaining to legal medicine and professional liability assessment. Additionally, relevant articles cited within the analyzed papers were taken into consideration, particularly those addressing key aspects of professional liability assessment using AI, including informed consent, damage objectification, conduct evaluation, and causal relationship assessment.
4 Medical malpractice liability assessment
While there may be variations in civil and tort law across countries (38), there is a consensus regarding the assessment of medical malpractice and compensation for damages: civil liability requires proof of an error that is causally linked to harm suffered by the patient (5, 39). First, to assess professional liability, it is necessary to demonstrate the existence of patient harm due to medical malpractice. This harm can manifest as the transition from a healthy condition to a disease, result in death, or exacerbate a pre-existing condition (39). It represents a physical or psychological injury suffered by the plaintiff under tort and/or civil law (38, 40).
Demonstrating harm requires collecting clinical documentation, conducting a direct examination of the patient, and investigating the clinical situation through additional diagnostic tests (38). The reconstruction of the physiopathological course, which encompasses the actual sequence of events that occurred, is part of this phase. When evaluating a professional-liability case, it is also necessary to analyze how informed consent was collected from the patient.
Errors on the part of the healthcare professional are then identified to establish professional-malpractice liability. An error occurs when the physician deviates from the standard of care. The standard of care is typically determined by comparing the physician’s conduct to that of a competent physician with a similar level of specialization and available resources (5, 15). Guidelines, consensus documents, and evidence-based publications should guide the actions of a competent physician.
The last step in the analysis of a professional-liability case is the evaluation of the causal link between the error on the part of the physician and the event (damage) involving the patient (38).
4.1 Damage identification, reconstruction of physiopathological pathways, and AI
The initial stage of assessing professional-liability cases involves the objective determination of the harm incurred and the subsequent reconstruction of the pathophysiological events that transpired. Artificial intelligence possesses the ability to thoroughly analyze extensive datasets derived from medical records and the scientific literature, facilitating the identification of pathophysiological patterns that may elude human observation. It can also analyze all the factors contributing to the causation of the damage.
The objectification of damage can be carried out by employing AI, with the specific approach varying depending on the medical specialization relevant to the professional-liability case at hand. One technique for objectifying damage is the application of AI to diagnostic classification, as previously mentioned. In other fields, such as psychiatry, the nature of the subject matter means that AI’s contribution to the objectification of damage could be more limited.
4.2 Informed consent and AI
Valid informed consent from patients is crucial to healthcare professionals carrying out medical treatments legally and ethically (41). For consent to be genuinely informed and competent, the patient must be able to willingly receive and comprehend the relevant information; process the details; evaluate the situation and consequences; understand the benefits, risks, and alternatives; and communicate their decision (42). Thus, the information presented to patients to obtain valid consent must be clear, comprehensible, tailored to their level of understanding, and provided using language suited to their background (43).
Performing any medical procedure without acquiring valid consent is considered unethical and illegal in many places, even if no harm occurs, potentially leading to malpractice lawsuits and liability (44). When a patient is harmed, a lack of informed consent further weakens the physician’s legal position (45). Information exchange rests on communication with patients, which may be influenced by physiological and pathological conditions. Age, diseases, and medications may alter patients’ capacities. The elderly and children are populations with specific needs regarding informed consent. Truly informed consent is particularly difficult to obtain from elderly patients because of their physical conditions (46), their medical conditions (47), the effects of medications (48), and their attitudes of passive acceptance rather than active involvement in their care (49).
Although the responsibility for informed consent primarily lies with the healthcare professional, who frames it within the doctor-patient relationship, the use of AI systems in healthcare presents additional challenges related to informed consent, particularly concerning vulnerable populations.
The accuracy of information may be inadvertently compromised by AI if the system provides erroneous recommendations that lead to harm for patients. This can result from biases in the representativeness of the data used to train an algorithm, leading to poor performance for certain patient populations (50), which could suggest the necessity of tailoring informed consent differently for these groups. Additional factors contributing to this problem include poor design choices and healthcare professionals’ failure to correctly interpret the AI system’s output (14). Informed consent may also be compromised by the complexity of advanced statistical and machine-learning techniques, which are not easily explained to patients. The inner workings of the AI system can function as a “black box,” making it difficult to precisely describe its operations (51). Patients, especially those with specific difficulties or unrealistic expectations regarding the accuracy and objectivity of AI, may give consent that is neither fully conscious nor unambiguous.
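As a purely illustrative sketch of why representativeness matters (the simulated cohort, subgroup labels, and distribution shift below are hypothetical), a per-subgroup audit can expose the performance gaps that would need to be disclosed under the transparency principle discussed next:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

# Simulated cohort in which subgroup "B" is under-represented (10%) and has
# a shifted feature distribution, a stand-in for the representativeness
# problem described in the text.
X, y = make_classification(n_samples=2000, n_features=8, random_state=1)
group = np.random.default_rng(1).choice(["A", "B"], size=2000, p=[0.9, 0.1])
X[group == "B"] += 1.5  # hypothetical distribution shift for the minority

model = LogisticRegression(max_iter=1000).fit(X[:1500], y[:1500])

# Per-subgroup sensitivity on held-out cases: large gaps flag the kind of
# bias that should be communicated to patients during consent.
for g in ("A", "B"):
    mask = group[1500:] == g
    sens = recall_score(y[1500:][mask], model.predict(X[1500:][mask]))
    print(f"subgroup {g}: sensitivity = {sens:.2f}")
```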
Adhering to the principle of transparency, any known biases or limitations inherent in healthcare-based AI systems should be clearly communicated (52). However, it is also questionable whether a physician would be able to assess whether an AI system has been trained on a dataset representative of a particular patient population (53).
Lastly, the information provided by AI may shift the responsibility for any adverse events onto patients (16).
In summary, when evaluating the medical-legal aspects of informed consent, informational deficiencies (Table 1) may be attributed to the following:
(1) Healthcare professionals who fail to comprehend the information provided by the AI, do not communicate the limitations of the AI to the patient, or do not consider the patient’s limited understanding due to physiological or pathological reasons.
(2) Patients who, even after being adequately informed, underestimate the recommendations provided by the AI system, for example, in the monitoring of a chronic condition.
(3) Artificial intelligence systems that have not been adequately trained by their developers or are affected by biases.
4.3 Analysis of the conduct of the physician and AI
A physician’s conduct is deemed appropriate when it aligns with the standard of care expected from a skilled physician practicing within the same medical specialty while making use of the available resources (15). The examination of a physician’s actions in a malpractice case by employing AI, with its capacity to analyze extensive medical data more efficiently and consistently than humans (54), will encompass an assessment of the various types of errors that may occur when a physician integrates AI into their practice (Table 2) (17, 40). Before discussing the specific types of errors that may be encountered when analyzing the conduct of a healthcare professional using AI systems, it is necessary to clarify the concept of bias in the field of artificial intelligence and distinguish it from error. Biases can affect every stage of AI model development, including data collection (e.g., generation bias), data annotation (e.g., missing data bias), model development (e.g., training data bias), model deployment (e.g., user interaction bias), and model evaluation (e.g., statistical bias) (55). Identifying the sources of bias in AI algorithms is often impossible, so a legal framework has been advocated that balances interests, responsibilities, and liability risks among stakeholders and ensures the reduction of biases in the design process before release on the market (56).
4.3.1 Error by healthcare professionals independent of AI’s role
This type of error is classically associated with practices that are discordant with evidence-based medicine and unrelated to AI (57). Such errors are the hypothetical responsibility of the physician.
4.3.2 Incorrect AI recommendation
Inadequate training data or poor design choices can suggest incorrect conduct to the physician, with harmful consequences (16, 58). In the clinical field, data quality is more important than the algorithm and often represents the main cause of incorrect indications (59). Insufficient data or overly complex algorithms can lead to overfitting, in which predictions are valid for one dataset but prove unreliable on additional data (16, 60, 61). The demand for extensive data, often referred to as “data hungriness” (62), poses medico-legal challenges, as single institutions may lack sufficient data for reliable predictions (16). Finally, data cleansing can enhance data usability in the context of artificial intelligence, but it must be implemented carefully to avoid introducing another source of error (59, 63). These errors are the hypothetical responsibility of trainers and programmers, with potential shared responsibility on the part of the purchasing company, considering the limitations of the assessment related to the concepts of bias and error.
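A minimal, purely illustrative demonstration of the overfitting problem described above (the dataset and model are stand-ins, not taken from the cited works): an unconstrained model fitted to a small, noisy sample scores almost perfectly on its training data yet markedly worse on held-out cases:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Small, noisy dataset: the setting in which overfitting is most likely.
X, y = make_classification(n_samples=60, n_features=20, n_informative=3,
                           flip_y=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

deep = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)  # unconstrained

print("training accuracy:", deep.score(X_tr, y_tr))  # near 1.0: memorization
print("test accuracy:    ", deep.score(X_te, y_te))  # markedly lower
```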
4.3.3 Failure of the device
Hardware failures may contribute to errors (16, 58). Such failures are the hypothetical responsibility of producers, with potential shared responsibility on the part of the purchasing company.
4.3.4 Failure to utilize AI when possible and necessary
Misconduct arising from the failure to use AI when necessary may be attributed to the worker’s inadequate training in AI utilization. Insufficient training can be the fault of either the worker or the healthcare facility where they are employed. The choice of the most appropriate AI technology for the patient’s situation can represent a further source of error. The liability for such wrongful conduct will depend on the prevailing legislation in a given country and may require mandatory updates and training (64). It is the hypothetical responsibility of the physician and/or healthcare facility.
4.3.5 Failure when using AI and interpreting the results of AI
Errors in data input and in the interpretation of results are a further potential source of harm. A physician could improperly evaluate the results provided by AI without considering the possibility of errors. Automation bias occurs when the physician passively accepts AI outputs that are wrong due to an operating error or training on incorrect data (16, 17). In these cases, too, the worker’s training may be a contributing factor. This is the hypothetical responsibility of the physician and/or healthcare facility.
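One way to make automation bias assessable after the fact is to log each AI recommendation alongside the clinician’s final decision. The sketch below is a hypothetical illustration (the record fields and values are assumptions, not an established standard); such an audit trail would let an expert witness distinguish reasoned agreement from passive acceptance:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    case_id: str
    ai_output: str           # what the system recommended
    ai_confidence: float     # the model's reported confidence
    clinician_decision: str  # what was actually done
    rationale: str           # free-text reason, crucial when overriding the AI
    timestamp: str

# Hypothetical entry: the clinician documents why the AI output was overridden.
record = DecisionRecord(
    case_id="2024-0001",
    ai_output="no hemorrhage detected",
    ai_confidence=0.91,
    clinician_decision="repeat imaging ordered",
    rationale="clinical picture inconsistent with the AI finding",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(asdict(record), indent=2))  # persisted for later expert review
```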
Using AI to evaluate such conduct certainly allows extensive medical data and guidelines to be analyzed more efficiently and consistently than a human could, but AI lacks nuanced human judgment (65). Medicine involves complex decision-making with imperfect information. While AI can identify patterns in data, it struggles to account for unique patient circumstances and the subtle factors that justify deviations from guidelines (66). Consequently, AI assessments could over-rely on retrospective guidelines rather than evaluating the real-time context physicians face, which could lead to unfair conclusions about reasonable conduct. Additionally, biased data and algorithms could negatively affect the analysis of physician actions involving marginalized patient populations (67).
4.4 Causal relationship and AI
Medical malpractice occurs when a healthcare provider deviates from the accepted standard of care and causes harm to a patient. Establishing causation is essential in medical malpractice cases to show that the provider’s negligence directly caused the patient’s injury. However, determining causation can be complex, especially with multiple factors being involved in treatment and health outcomes.
Artificial intelligence and machine-learning tools have the potential to help establish causal relationships in medical-malpractice cases. These tools can analyze large datasets derived from medical records, treatment guidelines, and the scientific literature to identify statistically significant associations between interventions and outcomes (68). For example, an AI system could review thousands of cases involving a particular drug or procedure to determine whether it is linked to higher complication rates after controlling for other clinical factors (69). However, AI has limitations when determining causation. First, AI systems can operate as “black boxes,” making it a significant technical challenge to explain their predictions (7, 70). When utilizing AI to assess causal correlations, it is essential to consider the model’s predictive accuracy; this assessment should encompass the availability of reproducible studies (7).
Correlation does not necessarily imply causation: the fact that two factors are associated does not mean that one caused the other. The evaluation of causation must be contextualized on a case-by-case basis through the determination of the medico-legal criteria, including universal and statistical laws, as well as the criterion of rational credibility (38). On one hand, AI could identify the most consistent scientific laws and guidelines; on the other hand, rational credibility relies on the judgment of a professional, who adapts scientific laws to clinical reality by considering all the variables involved. This becomes even more complex in the analysis of omissive medical errors, in which it is necessary to reconstruct the hypothetical alternative clinical course to define the causation of the injury. Moreover, the most complex professional-liability cases often involve the opinions of different physicians with expertise in the field, bringing together this expertise to achieve the highest degree of credibility. Artificial intelligence systems may find spurious correlations that do not reflect true causal mechanisms (71).
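The distinction can be made concrete with a small simulation (entirely synthetic data; the variable names and effect sizes are assumptions chosen for illustration): a confounder such as disease severity makes an inert treatment appear harmful until the analysis is stratified:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
severity = rng.binomial(1, 0.3, n)               # 1 = severe presentation
treated = rng.binomial(1, 0.2 + 0.6 * severity)  # severe cases treated more often
death = rng.binomial(1, 0.05 + 0.30 * severity)  # outcome driven by severity only

# Crude comparison: the treatment looks harmful despite having no effect.
crude = death[treated == 1].mean() / death[treated == 0].mean()
print(f"crude risk ratio: {crude:.2f}")  # clearly > 1 (spurious)

# Stratifying on the confounder dissolves the association.
for s in (0, 1):
    m = severity == s
    rr = death[m & (treated == 1)].mean() / death[m & (treated == 0)].mean()
    print(f"severity={s}: risk ratio = {rr:.2f}")  # close to 1 in each stratum
```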
Human expertise is still required to contextualize AI insights and make sound judgments on causation that consider complex real-world clinical scenarios.
Overall, AI can be a useful tool for finding patterns in data to provide evidence for or against causation arguments in medical-malpractice cases (67). However, human analysis and discretion are still essential to determine whether negligence was the most probable cause of a patient’s injury (64). Artificial intelligence alone cannot definitively prove causation but should complement human evaluation (72). Handled responsibly, these technologies could improve the accuracy of determinations regarding liability and standards of care.
5 Time for a new expert witness in liability cases
The integration of AI into assessing professional liability is likely to necessitate a new type of expert witness. The qualifications to evaluate AI systems, the analysis of a process, the speed and scale of case analysis, communication skills, and perceived objectivity represent the differences between a traditional expert witness and an expert witness leveraging AI in medical-malpractice analysis. A new technical consultant must also be familiar with emerging legislation pertaining to artificial intelligence, especially in the context of healthcare applications. For instance, the European Union’s In Vitro Diagnostic Medical Devices Regulation (IVDR) exemplifies legislative efforts to regulate artificial intelligence in healthcare, with potential professional-liability implications. This regulation covers AI-based analysis software used for decision support in medical diagnostics. However, it mandates that developers design these tools in accordance with the state of the art, a concept encompassing the safety, verification, and validation of the underlying data. Essentially, the legislation requires explainable and understandable artificial intelligence systems so that healthcare professionals can make responsible clinical decisions as required by law (73).
As machine-learning algorithms can be applied to analyze conduct against standards of care, the validity, biases, and limitations of these models will require expert examination (74). Traditional expert witnesses may lack the skills needed to critically assess AI systems. Knowledge gaps regarding data preprocessing, model selection, training approaches, and algorithmic bias checks could hamper the evaluation of AI reliability and fairness (75). Without AI fluency, witnesses may struggle to explain inherent uncertainties or may be vulnerable to problematic assumptions (76). The “new” expert witness could review and compare many more cases using AI automation; when backed by data, they could be perceived as more impartial and fact-based.
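By way of a hedged illustration of the kind of reliability checks such an expert might run (the data and model below are stand-ins; a real examination would use the disputed system and cohort), out-of-sample discrimination and calibration can be summarized in a few lines:

```python
from sklearn.calibration import calibration_curve
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=12, random_state=3)
model = LogisticRegression(max_iter=1000)

# Discrimination, estimated out-of-sample rather than on the training data.
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"5-fold AUC: {auc.mean():.2f} +/- {auc.std():.2f}")

# Calibration: do predicted probabilities match observed event frequencies?
probs = model.fit(X[:800], y[:800]).predict_proba(X[800:])[:, 1]
frac_pos, mean_pred = calibration_curve(y[800:], probs, n_bins=5)
for fp, mp in zip(frac_pos, mean_pred):
    print(f"predicted {mp:.2f} -> observed {fp:.2f}")  # should track closely
```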
Ideal candidates should combine domain knowledge in their professional field with AI expertise to translate technical details for legal professionals (15). They can opine on the comprehensibility, generalizability, and potential discrimination of AI systems for liability analysis (65). Through testimony, they can provide an essential perspective on the appropriate role of AI vs. human judgment.
In summary, the introduction of AI to assist in determining professional liability necessitates impartial experts who can credibly evaluate the technology. Developing a cadre of qualified AI expert witnesses will be key to ensuring due process as algorithmic tools are adopted in legal realms. The optimal approach integrates the best of both worlds in the form of AI augmentation without full automation. However, the shifting balance toward data-driven analysis marks a potential evolution in expert practice.
6 Conclusion
The integration of AI into the assessment of professional liability represents a major evolution in legal medicine, albeit one with growing pains. As described, AI streamlines the analysis of the multitude of factors relevant to malpractice claims, including informed consent, standard-of-care adherence, and causation. However, relying solely on autonomous AI at this stage poses concerning risks regarding biased algorithms, a lack of nuance, and overall credibility. Therefore, the prudent path forward entails integrating AI tools with human experts’ input. The presence of bias is of significant relevance in determining professional liability, a determination that is influenced by the legal framework. In fault-based frameworks, the focus is usually on demonstrating negligence or misconduct, consistent with the conventional view of medical malpractice. In these instances, the existence of bias can impede the determination of culpability. Conversely, no-fault liability models prioritize compensating victims without requiring proof of fault. According to some scholars, this approach may be imperative if AI is deployed in healthcare (77).
Our analysis suggests that it is time to consider a new type of expert witness for liability cases. On the one hand, artificial intelligence will support the expert witness; on the other hand, it will introduce specific elements into the activities of healthcare workers. These elements will necessitate an expert witness with a specialized cultural background.
By combining AI’s high-volume data processing with human judgment, oversight, and explanation, professional liability assessments could become more comprehensive and equitable. Of course, the ideal balance of responsibilities between humans and machines remains unclear. We must take care to ensure AI amplification, not automation, in this high-stakes domain. If undertaken judiciously, AI integration offers legal medicine an unprecedented opportunity to improve the consistency, efficiency, and fairness of professional-liability determinations.
Author contributions
CT: Conceptualization, Writing – original draft, Writing – review and editing. CC: Supervision, Writing – review and editing. LF: Writing – review and editing. AC: Writing – review and editing.
Funding
The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
1. Chen X, Shen Y, Jiang Y, Cheng M, Lei Y, Li B, et al. Predicting vault and size of posterior chamber phakic intraocular lens using sulcus to sulcus-optimized artificial intelligence technology. Am J Ophthalmol. (2023) 255:87–97. doi: 10.1016/j.ajo.2023.06.024
2. Omiye JA, Gui H, Daneshjou R, Cai ZR, Muralidharan V. Principles, applications, and future of artificial intelligence in dermatology. Front Med. (2023) 10:1278232. doi: 10.3389/fmed.2023.1278232
3. Quek SX, Lee JW, Feng Z, Soh MM, Tokano M, Guan YK, et al. Comparing artificial intelligence to humans for endoscopic diagnosis of gastric neoplasia: an external validation study. J Gastroenterol Hepatol. (2023) 38:1587–91. doi: 10.1111/jgh.16274
4. Yousif M, Pantanowitz L. Artificial intelligence-enabled gastric cancer interpretations: are we there yet? Surg Pathol Clin. (2023) 16:673–86. doi: 10.1016/j.path.2023.05.005
5. Hedderich DM, Weisstanner C, Van Cauter S, Federau C, Edjlali M, Radbruch A, et al. Artificial intelligence tools in clinical neuroradiology: essential medico-legal aspects. Neuroradiology. (2023) 65:1091–9. doi: 10.1007/s00234-023-03152-7
6. Khurshid S. Clinical perspectives on the adoption of the artificial intelligence-enabled electrocardiogram. J Electrocardiol. (2023) 81:142–5. doi: 10.1016/j.jelectrocard.2023.08.014
7. Rajpurkar P, Chen E, Banerjee O, Topol EJ. AI in health and medicine. Nat Med. (2022) 28:31–8. doi: 10.1038/s41591-021-01614-0
8. Raza A, Chohan TA, Buabeid M, Arafa EA, Chohan TA, Fatima B, et al. Deep learning in drug discovery: a futuristic modality to materialize the large datasets for cheminformatics. J Biomol Struct Dyn. (2023) 41:9177–92. doi: 10.1080/07391102.2022.2136244
9. Yu KH, Kohane IS. Framing the challenges of artificial intelligence in medicine. BMJ Qual Saf. (2019) 28:238–41. doi: 10.1136/bmjqs-2018-008551
10. Ngiam KY, Khor IW. Big data and machine learning algorithms for health-care delivery. Lancet Oncol. (2019) 20:e262–73. doi: 10.1016/S1470-2045(19)30149-4
11. Wang Z, Zhang L, Chao Y, Xu M, Geng X, Hu X. Development of a machine learning model for predicting 28-day mortality of septic patients with atrial fibrillation. Shock. (2023) 59:400–8. doi: 10.1097/SHK.0000000000002078
12. Azizi S, Bayat S, Yan P, Tahmasebi A, Nir G, Kwak JT, et al. Detection and grading of prostate cancer using temporal enhanced ultrasound: combining deep neural networks and tissue mimicking simulations. Int J Comput Assist Radiol Surg. (2017) 12:1293–305. doi: 10.1007/s11548-017-1627-0
13. O’Sullivan S, Leonard S, Holzinger A, Allen C, Battaglia F, Nevejans N, et al. Operational framework and training standard requirements for AI-empowered robotic surgery. Int J Med Robot. (2020) 16:1–13. doi: 10.1002/rcs.2020
14. Morris MX, Song EY, Rajesh A, Asaad M, Phillips BT. Ethical, legal, and financial considerations of artificial intelligence in surgery. Am Surg. (2023) 89:55–60. doi: 10.1177/00031348221117042
15. Price WN II, Gerke S, Cohen IG. Potential liability for physicians using artificial intelligence. JAMA. (2019) 322:1765–6. doi: 10.1001/jama.2019.15064
16. Oliva A, Grassi S, Vetrugno G, Rossi R, Della Morte G, Pinchi V, et al. Management of medico-legal risks in digital health era: a scoping review. Front Med. (2022) 8:821756. doi: 10.3389/fmed.2021.821756
17. Naik N, Hameed BM, Shetty DK, Swain D, Shah M, Paul R, et al. Legal and ethical consideration in artificial intelligence in healthcare: who takes responsibility? Front Surg. (2022) 9:862322. doi: 10.3389/fsurg.2022.862322
18. Grimm P, Grossman MR, Cormack GV. Artificial intelligence as evidence. Northwestern J Technol Intellect Property. (2021) 19:8–106.
19. Gans-Combe C. Automated justice: issues, benefits and risks in the use of artificial intelligence and its algorithms in access to justice and law enforcement, 2022. In: O’Mathúna D, Iphofen R editors. Ethics, Integrity and Policymaking: The Value of the Case Study. Cham: Springer (2022). doi: 10.1007/978-3-031-15746-2_14
20. Jacob de Menezes-Neto E, Clementino MB. Using deep learning to predict outcomes of legal appeals better than human experts: a study with data from Brazilian federal courts. PLoS One. (2022) 17:e0272287. doi: 10.1371/journal.pone.0272287
21. Jiang F, Jiang Y, Zhi H, Dong Y, Li H, Ma S, et al. Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol. (2017) 2:230–43. doi: 10.1136/svn-2017-000101
22. Abramoff MD, Whitestone N, Patnaik JL, Rich E, Ahmed M, Husain L, et al. Autonomous artificial intelligence increases real-world specialist clinic productivity in a cluster-randomized trial. NPJ Digit Med. (2023) 6:184. doi: 10.1038/s41746-023-00931-7
23. Sendak MP, Ratliff W, Sarro D, Aldana R, Futoma J, Michaleas P, et al. Real-world integration of a sepsis deep learning technology into routine clinical care: implementation study. JMIR Med Inform. (2020) 8:e15182. doi: 10.2196/15182
24. Fallahpoor M, Chakraborty S, Pradhan B, Faust O, Barua PD, Chegeni H, et al. Deep learning techniques in PET/CT imaging: a comprehensive review from sinogram to image space. Comput Methods Programs Biomed. (2023) 243:107880. doi: 10.1016/j.cmpb.2023.107880
25. Bajwa J, Munir U, Nori A, Williams B. Artificial intelligence in healthcare: transforming the practice of medicine. Future Healthc J. (2021) 8:e188–94. doi: 10.7861/fhj.2021-0095
26. Mosqueira-Rey E, Hernández-Pereira E, Alonso-Ríos D, Bobes-Bascarán J, Fernández-Leal Á. Human-in-the-loop machine learning: a state of the art. Artif Intell Rev. (2023) 56:3005–54. doi: 10.1007/s10462-022-10246-w
27. Chandler C, Foltz PW, Elvevåg B. Improving the applicability of AI for psychiatric applications through human-in-the-loop methodologies. Schizophr Bull. (2022) 48:949–57. doi: 10.1093/schbul/sbac038
28. Maadi M, Akbarzadeh Khorshidi H, Aickelin U. A review on human-AI interaction in machine learning and insights for medical applications. Int J Environ Res Public Health. (2021) 18:2121. doi: 10.3390/ijerph18042121
29. Ganesan A, Paul A, Nagabushnam G, Gul MJ. Human-in-the-loop predictive analytics using statistical learning. J Healthc Eng. (2021) 2021:9955635. doi: 10.1155/2021/9955635
30. Staes CJ, Beck AC, Chalkidis G, Scheese CH, Taft T, Guo JW, et al. Design of an interface to communicate artificial intelligence-based prognosis for patients with advanced solid tumors: a user-centered approach. J Am Med Inform Assoc. (2024) 31:174–87. doi: 10.1093/jamia/ocad201
31. Plass M, Kargl M, Kiehl T, Regitnig P, Geißler C, Evans T, et al. Explainability and causability in digital pathology. J Pathol Clin Res. (2023) 9:251–60. doi: 10.1002/cjp2.322
32. Henry KE, Kornfield R, Sridharan A, Linton RC, Groh C, Wang T, et al. Human-machine teaming is key to AI adoption: clinicians’ experiences with a deployed machine learning system. NPJ Digit Med. (2022) 5:97. doi: 10.1038/s41746-022-00597-7
33. Sezgin E. Artificial intelligence in healthcare: complementing, not replacing, doctors and healthcare providers. Digit Health. (2023) 9:20552076231186520. doi: 10.1177/20552076231186520
34. Morley J, Floridi L, Kinsey L, Elhalal A. From what to how: an overview of AI ethics tools, methods and research to translate principles into practices. Sci Eng Ethics. (2021) 27:1–26. doi: 10.1007/s11948-019-00165-5
35. Yu KH, Beam AL, Kohane IS. Artificial intelligence in healthcare. Nat Biomed Eng. (2018) 2:719–31. doi: 10.1038/s41551-018-0305-z
36. Brigden T, Mitchell C, Redrup Hill E, Hall A. Ethical and legal implications of implementing risk algorithms for early detection and screening for oesophageal cancer, now and in the future. PLoS One. (2023) 18:e0293576. doi: 10.1371/journal.pone.0293576
37. Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J. (2019) 6(2):94–8. doi: 10.7861/futurehosp.6-2-94
38. Ferrara SD, Baccino E, Boscolo-Berto R, Comandè G, Domenici R, Hernandez-Cueto C, et al. Padova Charter on personal injury and damage under civil-tort law: medico-legal guidelines on methods of ascertainment and criteria of evaluation. Int J Legal Med. (2016) 130:1–12. doi: 10.1007/s00414-015-1244-9
39. Terranova C, Bruttocao A. The clinical management of diabetic foot in the elderly and medico-legal implications. Med Sci Law. (2013) 53:187–93. doi: 10.1177/0025802412473595
40. Mezrich JL. Demystifying medico-legal challenges of artificial intelligence applications in molecular imaging and therapy. PET Clin. (2022) 17:41–9. doi: 10.1016/j.cpet.2021.08.002
41. Terranova C, Sartore D, Snenghi R. Death after liposuction: case report and review of the literature. Med Sci Law. (2010) 50:161–3. doi: 10.1258/msl.2010.100010
42. Appelbaum PS. Assessment of patients’ competence to consent to treatment. N Engl J Med. (2007) 357:1834–40. doi: 10.1056/NEJMcp074045
43. Krumholz HM. Informed consent to promote patient-centered care. JAMA (2010) 303:1190–1. doi: 10.1001/jama.2010.309
44. Guerra F, La Rosa P, Guerra F, Raimondi L, Marinozzi S, Miatto I, et al. Risk management for a legally valid informed consent. Clin Ter. (2021) 172:484–8. doi: 10.7417/CT.2021.2361
45. Berg JW, Appelbaum PS, Lidz CW, Parker LS. Informed Consent: Legal Theory and Clinical Practice. 2nd ed. Fair Lawn, NJ: Oxford University Press (2001).
46. Karlawish JH, Kim SY, Knopman D, Van Dyck CH, James BD, Marson D. The views of Alzheimer disease patients and their study partners on proxy consent for clinical trial enrollment. Am J Geriatr Psychiatry. (2008) 16:240–7. doi: 10.1097/JGP.0b013e318162992d
47. Zonjee VJ, Slenders JP, de Beer F, Visser MC, Ter Meulen BC, Van den Berg-Vos RM, et al. Practice variation in the informed consent procedure for thrombolysis in acute ischemic stroke: a survey among neurologists and neurology residents. BMC Med Ethics. (2021) 22:114. doi: 10.1186/s12910-021-00684-6
48. Yin S, Yang Q, Xiong J, Li T, Zhu X. Social support and the incidence of cognitive impairment among older adults in China: findings from the Chinese longitudinal healthy longevity survey study. Front Psychiatry. (2020) 11:254. doi: 10.3389/fpsyt.2020.00254
49. Terranova C, Cardin F, Pietra LD, Zen M, Bruttocao A, Militello C. Ethical and medico-legal implications of capacity of patients in geriatric surgery. Med Sci Law. (2013) 53:166–71. doi: 10.1177/0025802412473963
50. Cohen GI. Informed consent and medical artificial intelligence: what to tell the patient? Georgetown Law J. (2020) 108:1425–69. doi: 10.2139/ssrn.3529576
51. Morley J, Machado CC, Burr C, Cowls J, Joshi I, Taddeo M, et al. The ethics of AI in health care: a mapping review. Soc Sci Med. (2020) 260:113172. doi: 10.1016/j.socscimed.2020.113172
52. Khanna S, Srivastava S. Patient-centric ethical frameworks for privacy, transparency, and bias awareness in deep learning-based medical systems. Appl Res Artif Intellig Cloud Comput. (2020) 3:16–35.
53. Schiff D, Borenstein J. How should clinicians communicate with patients about the roles of artificially intelligent team members? AMA J Ethics. (2019) 21:E138–45. doi: 10.1001/amajethics.2019.138
54. Bohr A, Memarzadeh K. The rise of artificial intelligence in healthcare applications. In: Bohr A, Memarzadeh K editors. Artificial Intelligence in Healthcare. New York, NY: Academic Press (2020). p. 25–60. doi: 10.1016/B978-0-12-818438-7.00002-2
55. Gichoya JW, Thomas K, Celi LA, Safdar N, Banerjee I, Banja JD, et al. AI pitfalls and what not to do: mitigating bias in AI. Br J Radiol. (2023) 96:20230023. doi: 10.1259/bjr.20230023
56. Shachar C, Gerke S. Prevention of bias and discrimination in clinical practice algorithms. JAMA. (2023) 329:283–4. doi: 10.1001/jama.2022.23867
57. Di Pietra L, Gardiman M, Terranova C. Postpartum maternal death associated with undiagnosed Hodgkin’s lymphoma. Med Sci Law. (2012) 52:174–7. doi: 10.1258/msl.2012.011137
58. Jassar S, Adams SJ, Zarzeczny A, Burbridge BE. The future of artificial intelligence in medicine: medical-legal considerations for health leaders. Health Manage Forum. (2022) 35:185–9. doi: 10.1177/08404704221082069
59. Stoeger K, Schneeberger D, Kieseberg P, Holzinger A. Legal aspects of data cleansing in medical AI. Comput Law Secur Rev. (2021) 42:105587. doi: 10.1016/j.clsr.2021.105587
60. Krittanawong C, Johnson KW, Rosenson RS, Wang Z, Aydar M, Baber U, et al. Deep learning for cardiovascular medicine: a practical primer. Eur Heart J. (2019) 40:2058–69. doi: 10.1093/eurheartj/ehz056
61. Adamson AS, Smith A. Machine learning and health care disparities in dermatology. JAMA Dermatol. (2018) 154:1247–8. doi: 10.1001/jamadermatol.2018.2348
62. Banegas-Luna AJ, Peña-García J, Iftene A, Guadagni F, Ferroni P, Scarpato N, et al. Towards the interpretability of machine learning predictions for medical applications targeting personalised therapies: a cancer case survey. Int J Mol Sci. (2021) 22:4394. doi: 10.3390/ijms22094394
63. Hosseinzadeh M, Azhir E, Ahmed OH, Ghafour MY, Ahmed SH, Rahmani AM, et al. Data cleansing mechanisms and approaches for big data analytics: a systematic study. J Ambient Intell Humaniz Comput. (2023) 14:99–111.
64. Gerke S, Babic B, Evgeniou T, Cohen IG. The need for a system view to regulate artificial intelligence/machine learning-based software as medical device. NPJ Digit Med. (2020) 3:53. doi: 10.1038/s41746-020-0262-2
65. Char DS, Shah NH, Magnus D. Implementing machine learning in health care—addressing ethical challenges. N Engl J Med. (2018) 378:981–3. doi: 10.1056/NEJMp1714229
66. Cabitza F, Rasoini R, Gensini GF. Unintended consequences of machine learning in medicine. JAMA. (2017) 318:517–8. doi: 10.1001/jama.2017.7797
67. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. (2019) 366:447–53. doi: 10.1126/science.aax2342
68. Obermeyer Z, Emanuel EJ. Predicting the future—big data, machine learning, and clinical medicine. N Engl J Med. (2016) 375:1216–9. doi: 10.1056/NEJMp1606181
69. Khosravi P, Huck NA, Shahraki K, Hunter SC, Danza CN, Kim SY, et al. Deep learning approach for differentiating etiologies of pediatric retinal hemorrhages: a multicenter study. Int J Mol Sci. (2023) 24:15105. doi: 10.3390/ijms242015105
70. Reddy S. Explainability and artificial intelligence in medicine. Lancet Digit Health. (2022) 4:e214–5. doi: 10.1016/S2589-7500(22)00029-2
71. Calude CS, Longo G. The deluge of spurious correlations in big data. Found Sci (2017) 22:595–612. doi: 10.1007/s10699-016-9489-4
72. Soliman A, Agvall B, Etminani K, Hamed O, Lingman M. The price of explainability in machine learning models for 100-day readmission prediction in heart failure: retrospective, comparative, machine learning study. J Med Internet Res. (2023) 25:e46934. doi: 10.2196/46934
73. Müller H, Holzinger A, Plass M, Brcic L, Stumptner C, Zatloukal K. Explainability and causability for artificial intelligence-supported medical image analysis in the context of the European In Vitro Diagnostic Regulation. N Biotechnol. (2022) 70:67–72. doi: 10.1016/j.nbt.2022.05.002
74. Katz DM, Bommarito MJ, Blackman J. A general approach for predicting the behavior of the Supreme Court of the United States. PLoS One. (2017) 12:e0174698. doi: 10.1371/journal.pone.0174698
75. Varsha PS. How can we manage biases in artificial intelligence systems – A systematic literature review. Int J Inf Manag Data Insights. (2023) 3:100165.
76. Rudin C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell. (2019) 1:206–15. doi: 10.1038/s42256-019-0048-x
Keywords: artificial intelligence, machine learning, legal medicine, professional liability, tort law, causal relationship
Citation: Terranova C, Cestonaro C, Fava L and Cinquetti A (2024) AI and professional liability assessment in healthcare. A revolution in legal medicine? Front. Med. 10:1337335. doi: 10.3389/fmed.2023.1337335
Received: 12 November 2023; Accepted: 18 December 2023;
Published: 08 January 2024.
Edited by:
Giovanna Ricci, University of Camerino, Italy
Reviewed by:
Simone Grassi, University of Florence, Italy
Antonina Argo, University of Palermo, Italy
Andreas Holzinger, Medical University of Graz, Austria
Copyright © 2024 Terranova, Cestonaro, Fava and Cinquetti. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Claudio Terranova, claudio.terranova@gmail.com