- 1 Department of Plastic, Hand, and Reconstructive Surgery, University Hospital Regensburg, Regensburg, Germany
- 2 Corporate/M&A Department, Dentons Europe (Germany) GmbH & Co. KG, Munich, Germany
- 3 UC Law San Francisco (Formerly UC Hastings), San Francisco, CA, United States
- 4 Division of Hand, Plastic and Aesthetic Surgery, Ludwig-Maximilians-University Munich, Munich, Germany
- 5 College of Medicine, Drexel University, Philadelphia, PA, United States
- 6 Carle Illinois College of Medicine, University of Illinois Urbana-Champaign, Urbana, IL, United States
- 7 Department of Cardiothoracic and Vascular Surgery, Deutsches Herzzentrum der Charité (DHZC), Berlin, Germany
- 8 School of Medicine, University of Pittsburgh, Pittsburgh, PA, United States
- 9 Morgan, Lewis & Bockius LLP, Munich, Germany
- 10 Faculty of Applied Social and Health Sciences, Regensburg University of Applied Sciences, Regensburg, Germany
Large Language Models (LLMs) like ChatGPT 4 (OpenAI), Claude 2 (Anthropic), and Llama 2 (Meta AI) have emerged as novel technologies for integrating artificial intelligence (AI) into everyday work. LLMs in particular, and AI in general, carry substantial potential to streamline clinical workflows, outsource resource-intensive tasks, and relieve the healthcare system. While a plethora of trials is elucidating the untapped capabilities of this technology, the sheer pace of scientific progress also takes its toll. Legal guidelines play a key role in regulating emerging technologies, safeguarding patients, and determining individual and institutional liabilities. To date, there is a paucity of research delineating the legal regulations governing the use of LLMs and AI in clinical scenarios in plastic and reconstructive surgery (PRS). This knowledge gap exposes plastic surgeons to the risk of lawsuits and penalties. Thus, we aim to provide the first overview of the legal guidelines and pitfalls of LLMs and AI for plastic surgeons. Our analysis encompasses models like ChatGPT, Claude 2, and Llama 2, among others, regardless of their closed- or open-source nature. Ultimately, this line of research may help clarify the legal responsibilities of plastic surgeons and support the seamless integration of such cutting-edge technologies into the field of PRS.
Introduction
Artificial Intelligence (AI) has witnessed remarkable advancements in recent years, revolutionizing various sectors, including medicine. The integration of AI into healthcare holds immense potential to enhance diagnostic capacity, treatment, and overall patient care (1). AI, encompassing machine learning, natural language processing, and data analytics, has emerged as a valuable tool in medicine. AI algorithms can analyze vast amounts of medical data, detect patterns, and generate insights that assist in diagnosis, treatment planning, and patient monitoring (2–4).
Large Language Models (LLMs) like ChatGPT 4 (OpenAI), Claude 2 (Anthropic), and Llama 2 (Meta AI) represent the most recent application of AI, leveraging natural language processing to autonomously respond to questions and complete tasks (5). For healthcare providers, LLMs have been proposed as valuable tools for interpreting laboratory values, generating novel research ideas, and advancing patient education (6, 7). Overall, AI in general, and LLMs in particular, carry the potential to relieve the healthcare system and improve patient care. Yet, while numerous trials are elucidating the untapped potential of AI tools for use cases in plastic and reconstructive surgery (PRS), the sheer speed of scientific progress also takes its toll.
To date, there is a paucity of studies clarifying the legal guidelines for using Large Language Models and AI-based tools in PRS scenarios. This knowledge gap poses the risk of lawsuits and penalties against PRS institutions (academic and non-academic hospitals, medical healthcare centers, and outpatient surgical centers) and against plastic surgeons, who already face a 15% risk per year of being sued (8, 9). This scarcity of studies, however, affects the entire field of healthcare.
Herein, we aim to provide the first summary of US legal guidelines for implementing Large Language Models like ChatGPT and other AI-based tools into everyday PRS work. Ultimately, this line of research may provide a robust legal foundation that facilitates clinical AI use and places PRS at the forefront of future AI research.
Prescription of drugs or treatments using large language models and other AI-based tools: legal considerations for surgeons
In cases of PRS malpractice, surgeons breach their legal obligations when they fail to meet the standard of care. As of now, there have been no specific court cases directly addressing liability related to the use of LLMs in PRS, primarily due to the novelty of the technology and its ongoing implementation. Consequently, the subsequent analysis is based on the broader application of medical malpractice law.
The elements of a PRS malpractice claim are duty, breach, causation, and damages. At base, physicians have a duty to treat their patients according to the standard of care. The standard of care is understood as the care that would be provided by a competent physician of the same specialty, taking into consideration the resources that are available at the time of patient treatment and/or consultation (10–13). The interplay between artificial intelligence and the standard of care is intricate and expected to evolve over time. Plastic surgeons may face legal consequences when recommending surgical procedures or treatments based on ChatGPT or comparable models. A significant concern in using LLMs in healthcare is not only the potential for professionals to misinterpret the model's output but also the risk of LLMs suggesting and disseminating misguided and unreliable surgical or treatment recommendations for patients. Under the current legal framework, a plastic surgeon can be held fully liable in medical malpractice cases when relying on suggestions from a language model.
As a concrete example, consider a plastic surgeon treating a patient for a facial reconstructive procedure. The standard of care is procedure X, a surgical method with known moderate side effects. Another procedure, procedure Y, is approved for non-reconstructive use in trauma patients, but observational studies have shown that it may enhance facial reconstruction significantly. However, procedure Y displays a potentially higher complication profile and is therefore discouraged for use in facial reconstruction. The surgeon will choose one procedure or the other. The surgeon enters their patient's information into the electronic health record, and an embedded AI system makes a recommendation for a given surgical procedure (10, 14, 15).
Since malpractice law commonly absolves physicians of liability for care that adheres to the standard of care, patients would generally not be justified in blaming surgeons for AI tool usage that aligns with the standard of care, even if the outcome is suboptimal. If the procedure yields positive results, there is clearly no liability because there is no injury. Even if the procedure proves ineffective, surgeons usually remain shielded from liability as long as their actions align with the accepted standard of care.
On the other hand, if a surgeon follows the AI's suggestion and performs a procedure that deviates from the standard of care, the likelihood of liability increases. If the chosen procedure is suitable for the patient and no harm occurs, there is no liability. However, if the chosen procedure is unsuitable for the patient and injury ensues, the surgeon is likely to be held liable for actions that fall below the standard of care, regardless of the AI's recommendations. The treating surgeon cannot then exculpate themselves by arguing that following the AI's recommendation amounted to compliance with the standard of care.
Another point to keep in mind is that the legal standard of care is not static and continually evolves. With the emergence of AI-powered technologies, these advancements may eventually become part of the “standard of care” in PRS. Therefore, it is conceivable that late adopters may risk violating the standard of care if they fail to adopt evidence-based, beneficial AI that most other doctors have already accepted and integrated into patient care (12).
Some observers in the field of PRS have highlighted that current medical malpractice law creates incentives for surgeons to downplay the potential benefits of AI. They argue that, to mitigate liability risks, the safest approach for surgeons is to utilize AI primarily as a “confirmatory tool” that supports existing decision-making processes, rather than as a means to enhance care (13). In fact, a recent study suggests that physicians may use AI technology most in “low uncertainty” cases, when they are already fairly confident of a prospective treatment plan, while avoiding it in higher-uncertainty cases (16).
As AI becomes an integral part of healthcare, it is crucial to address its role in patient consent processes. Discussions about whether patients want AI used in their assessment and treatment need to be incorporated into the routine informed consent process. Even though a patient has consented to a proposed treatment or operation, the failure of the physician to inform the patient adequately before obtaining such consent is negligence and renders the physician subject to liability for any injury resulting from the treatment or operation. Generally, the question of whether information must be disclosed centers on whether the information would be considered significant by a reasonable person in the patient's position when deciding to accept or reject a recommended medical procedure (17). Following this general guideline, a physician does have a duty to disclose the use of AI-based tools, and the extent of that use, if the information is material to the patient's decision-making. To be protected from liability and to fully respect patient autonomy, hospitals should explicitly state in their general consent-to-treatment forms whether and to what extent they use AI-assisted tools. More specifically, physicians should explain how these tools contribute to their recommendations.

Another important consideration is how much detail physicians need to disclose about using AI tools. Specifically, physicians may wonder whether they must explain how the AI tool arrived at its conclusions, the workings of the algorithm, and the data it was trained on. Without specific case law, this question can only be addressed based on current standards. Generally, doctors are not required to explain their entire thought process or the quality of the sources they consulted for their decisions. Similarly, detailed explanations of the AI model's inner workings and training data are not usually necessary. However, any known biases in the data that could affect the tool's recommendations should be communicated to the patient.
The utilization of LLMs can entail further legal risks. For instance, using ChatGPT may result in legal infringements related to the processing of personal data (18). Healthcare providers bear significant responsibility for ensuring that AI technologies are deployed ethically and in compliance with regulations protecting patient data confidentiality. In the US, the main federal law governing the privacy and protection of health-related personal data is the Health Insurance Portability and Accountability Act (HIPAA). Under HIPAA guidelines, if covered healthcare providers use AI tools for treatment and disclose patient information in the process, they are required to provide a Notice of Privacy Practices (NPP) to the patient, informing them of this potential use and disclosure of their information (19). Additionally, covered healthcare providers must obtain the individual's written authorization for any use or disclosure of protected health information that is not for treatment, payment, or health care operations, or otherwise permitted or required by the Privacy Rule (20). It is important to note that the transfer of patient information to ChatGPT or any other chatbot is not automatically exempt from this rule. That said, given that the definition of “treatment” includes the coordination and management of health care services, as well as consultations related to patient care (21), it is conceivable that the HIPAA exemption from obtaining authorization extends to the use of AI tools and chatbots for treatment purposes. However, in the absence of current legislative guidelines, it is advisable to obtain proper consent for the transfer of patient data and to be transparent about how AI is being employed in providing care.
An alternative approach to potentially bypass these regulations involves de-identifying the data before interacting with a language model (22). If the patient information is covered by HIPAA, de-identification of the protected health information (PHI) requires the removal of certain identifiers or an expert's determination that the data can be considered de-identified (23). Some observers, however, note that this method does not conclusively protect against subsequent re-identification (18). To ensure the highest level of safety, surgeons should refrain from inputting any content into such tools that might contain a patient's personal data, confidential information, or any data that is not meant to be disclosed to third parties.
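To make the de-identification step more concrete, the following minimal Python sketch illustrates a pattern-based scrub of a free-text note before it is submitted to a chatbot. It is purely illustrative: the patterns, placeholder labels, and the scrub function are assumptions of this sketch rather than any cited tool, and simple pattern matching does not satisfy the HIPAA Safe Harbor standard, which requires removal of all eighteen identifier categories or a formal expert determination.

```python
import re

# Illustrative patterns only (assumptions of this sketch): they catch a few
# obvious identifiers (dates, phone numbers, e-mail addresses, medical record
# numbers) and do NOT cover the 18 HIPAA identifier categories.
PATTERNS = {
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),
}

def scrub(note: str) -> str:
    """Replace matched identifiers with bracketed placeholders."""
    for label, pattern in PATTERNS.items():
        note = pattern.sub(f"[{label}]", note)
    return note

example = "Pt. Jane Doe, MRN: 4827193, seen 03/14/2024; callback 415-555-0199."
print(scrub(example))
# -> "Pt. Jane Doe, [MRN], seen [DATE]; callback [PHONE]."
# The patient's name is NOT caught: free-text names generally require a
# dedicated de-identification tool or manual review before any chatbot use.
```

As the example output shows, free-text names slip through such simple filters, which underscores why dedicated de-identification tools, expert review, or refraining from entering patient data altogether remain the safer options.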
In the broader scheme of liability, it is important to recognize that if ChatGPT provides misleading or inaccurate information, there may not be sufficient grounds for a claim against OpenAI. Its terms of service clearly state that liability is excluded to the fullest extent possible. This applies both to product liability claims from patients and to indemnification claims from physicians using the AI tool for treatment. Currently, there are no legal precedents on whether these liability limitations for AI use in medical treatment are enforceable. Therefore, in the absence of clear judicial guidance, physicians should assume that these limitations are enforceable.
Institutional liability for use of artificial intelligence in plastic and reconstructive surgery
Under which circumstances can hospitals be held liable for adverse events caused by the use of AI (e.g., ChatGPT) during a PRS procedure? One must distinguish two separate theories—derivative liability for the actions of plastic surgeons or others and direct liability for the institution itself.
Derivative liability first requires proving medical malpractice or another form of liability by the plastic surgeon or healthcare provider. Once malpractice is established, legal theories connect this liability to the hospital. Without proven malpractice, the hospital's liability is excluded. The conditions under which medical malpractice liability may be imposed vary based on whether the plastic surgeon is a hospital employee or an independent contractor. Under the doctrine of respondeat superior, employers are liable for employees' actions within the scope of their employment. If a hospital employee misuses AI, such as ChatGPT, resulting in malpractice, the hospital may be liable. This principle extends to those under significant hospital control, even if not formally employed (23).
For independent contractors, the respondeat superior theory does not apply. Instead, liability may arise under the “apparent authority” doctrine. Apparent authority occurs when a third party reasonably believes an individual has authority to act on behalf of another party, based on the principal's representation. To establish apparent authority, two conditions must be met: the principal must have represented the agent as having authority or knowingly allowed the agent to act on its behalf, and the plaintiff must have relied on these representations. In the context of AI in PRS, if an independent contractor plastic surgeon misuses AI and causes harm, and the hospital has presented the surgeon as its agent, leading the patient to reasonably rely on this representation, the hospital may be liable.
Hospitals also bear responsibilities towards their patients that can give rise to direct liability for the institution itself. These legal theories are relevant to the decisions hospitals make regarding the implementation of AI in PRS. Even though there have been no reported decisions yet specifically addressing such scenarios, there are two main theories of direct hospital liability that courts may apply to the use of AI in PRS in the future: (i) negligent selection/retention and (ii) negligent supervision.
The theory of negligent selection and retention imposes upon a hospital system a duty to review “surgeons' competency and performance history before admission to the medical staff and periodically thereafter” (24). In order to recover, a plaintiff must show that the hospital did not exercise reasonable care, meaning the care ordinarily exercised by the average hospital, to determine whether the surgeon was competent (25). From a plaintiff's standpoint, it could be argued that a hospital system is essentially engaging in a hiring process rather than a simple purchase when acquiring an AI system. Consequently, this would impose responsibilities on the hospital system to assess previous errors resulting in adverse events linked to the use of the AI, review the certification process, ascertain the individuals involved in the certification, evaluate the quality of certification, and potentially determine how the AI system will integrate with the existing hospital workforce, similar to the considerations made when hiring a human surgeon. Courts may deem this theory as going too far in attributing human-like characteristics to AI systems. Even if the theory is endorsed, the practical assessment of negligence relies, to some extent, on comparing the level of care employed in these determinations with that practiced by other hospital systems. This presents a complication, particularly during the initial stages of AI integration in healthcare, where establishing a standard of care becomes problematic (25).
The theory of negligent supervision, on the other hand, operates under the assumption that the duty lies in the contemporaneous supervision of surgical decisions “as they are being made” (24). In the future, courts may impose a duty upon hospitals to supervise each AI recommendation, and/or a plastic surgeon's reliance thereon, “as they are made,” in addition to the negligent selection/retention duties and whatever derivative liability exists. Although there have been references to such a duty in several rulings, it has predominantly been imposed in cases involving “gross negligence, in which the departure from medical standards is so blatant that it is possible to attribute to hospital administrators constructive knowledge of the error in progress” (24). Some observers believe that courts will be more skeptical of negligent supervision theories, especially when it comes to more opaque forms of AI in PRS (26).
Supplementary Material S1 shows a checklist to be used by plastic surgeons and medical academic institutions when implementing AI and chatbots.
Conclusion
In conclusion, the current liability framework for physicians and hospitals regarding the use of AI systems such as ChatGPT in PRS is based on general principles of malpractice liability (Figure 1). However, there is significant uncertainty surrounding how these factors will be interpreted as cases begin to reach the courts. Additionally, legislative and regulatory interventions could potentially bring about substantial changes in this landscape. Notably, the standard of care is expected to evolve as the use of AI becomes more widely accepted in PRS. However, the pace of adoption is likely to vary across different areas of surgical practice. As we navigate the dynamic intersection of AI and plastic surgery, it is crucial to closely monitor legal developments and anticipate the ongoing evolution of standards and regulations in PRS. Ultimately, these research efforts may place PRS at the forefront of evidence-based and law-compliant AI research.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.
Author contributions
LK: Writing – review & editing, Writing – original draft. AV: Writing – original draft. MA: Writing – original draft. JC: Writing – original draft. DN: Writing – original draft. AK: Writing – original draft. LP: Writing – original draft. JI: Writing – original draft. JD: Writing – review & editing, Writing – original draft. SH: Writing – original draft. CK: Writing – review & editing, Writing – original draft. SK: Writing – original draft.
Funding
The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.
Conflict of interest
Author AV was employed by Dentons Europe (Germany) GmbH & Co. KG. Author SH was employed by Morgan, Lewis & Bockius LLP.
The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Supplementary material
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fsurg.2024.1390684/full#supplementary-material
Supplementary Material S1
Checklist for the law-compliant use of artificial intelligence and chatbots.
References
1. Knoedler L, Knoedler S, Kauke-Navarro M, Knoedler C, Hoefer S, Baecher H, et al. Three-dimensional medical printing and associated legal issues in plastic surgery: a scoping review. Plast Reconstr Surg Glob Open. (2023) 11:e4965. doi: 10.1097/GOX.0000000000004965
2. Chartier C, Gfrerer L, Knoedler L, Austen WG Jr. Artificial intelligence-enabled evaluation of pain sketches to predict outcomes in headache surgery. Plast Reconstr Surg. (2023) 151:405–11. doi: 10.1097/PRS.0000000000009855
3. Knoedler L, Miragall M, Kauke-Navarro M, Obed D, Bauer M, Tißler P, et al. A ready-to-use grading tool for facial palsy examiners-automated grading system in facial palsy patients made easy. J Pers Med. (2022) 12(10):1739. doi: 10.3390/jpm12101739
4. Knoedler L, Baecher H, Kauke-Navarro M, Prantl L, Machens H-G, Scheuermann P, et al. Towards a reliable and rapid automated grading system in facial palsy patients: facial palsy surgery meets computer science. J Clin Med. (2022) 11(17):4998. doi: 10.3390/jcm11174998
5. Hoch CC, Wollenberg B, Lüers J-C, Knoedler S, Knoedler L, Frank K, et al. ChatGPT’s quiz skills in different otolaryngology subspecialties: an analysis of 2576 single-choice and multiple-choice board certification preparation questions. Eur Arch Oto-Rhino-Laryngol. (2023) 280(9):4271–8. doi: 10.1007/s00405-023-08051-4
6. Cadamuro J, Cabitza F, Debeljak Z, De Bruyne S, Frans G, Perez SM, et al. Potentials and pitfalls of ChatGPT and natural-language artificial intelligence models for the understanding of laboratory medicine test results. An assessment by the European federation of clinical chemistry and laboratory medicine (EFLM) working group on artificial intelligence (WG-AI). Clin Chem Lab Med. (2023) 61:1158–66. doi: 10.1515/cclm-2023-0355
7. Ge J, Lai JC. Artificial intelligence-based text generators in hepatology: ChatGPT is just the beginning. Hepatol Commun. (2023) 7(4):e0097. doi: 10.1097/HC9.0000000000000097
8. Gibstein AR, Jabori SK, Watane A, Slavin BR, Elabd R, Singh D. Do plastic surgery residents get sued? An analysis of malpractice lawsuits. Plast Reconstr Surg Glob Open. (2023) 11:e4721. doi: 10.1097/GOX.0000000000004721
9. Jarvis T, Thornburg D, Rebecca AM, Teven CM. Artificial intelligence in plastic surgery: current applications, future directions, and ethical implications. Plast Reconstr Surg Glob Open. (2020) 8:e3200. doi: 10.1097/GOX.0000000000003200
10. Martinez v. United States, No. 1:16-cv-01556-LJO-SKO, 2019 WL 266213, at *5 (E.D. Cal. January 18, 2019).
11. Griffin F. Artificial intelligence and liability in health care. Health Matrix. (2019) 31:65–98.
12. Paterick ZR, Patel NJ, Ngo E, Chandrasekaran K, Jamil Tajik A, Paterick TE. Medical liability in the electronic medical records era. Proc (Bayl Univ Med Centr). (2018) 31(4):558–61. doi: 10.1080/08998280.2018.1471899
13. Price WN, Gerke S, Cohen IG. Liability for use of artificial intelligence in medicine. Law Econ Work Pap. (2022) 241:4. doi: 10.2139/ssrn.4115538
14. Roe M. Who’s driving that car?: an analysis of regulatory and potential liability frameworks for driverless cars. Boston Coll Law Rev. (2019) 60:317–30.
16. Dai T, Singh S. Artificial intelligence on call: the physician's decision of whether to use AI in clinical practice. Johns Hopkins Carey Business School Research Paper No. 22-02 (2023). doi: 10.2139/ssrn.3987454
18. Rickert J. On patient safety: the lure of artificial intelligence-are we jeopardizing our Patients’ privacy? Clin Orthop Relat Res. (2020) 478(4):712–4. doi: 10.1097/CORR.0000000000001189
22. Malek LA, Jain P, Johnson J. Data privacy and artificial intelligence in health care. Reuters (2022). Available online at: https://www.reuters.com/legal/litigation/data-privacy-artificial-intelligence-health-care-2022-03-17/ (Accessed June 17, 2023).
23. Scott v SSM Healthcare St Louis, 70 SW 3d 560, 566 (Mo: Court of Appeals, Eastern Dist, 3 Div, 2002).
Keywords: ChatGPT, artificial intelligence, AI, chatbots, law, legal, lawsuits, litigation
Citation: Knoedler L, Vogt A, Alfertshofer M, Camacho JM, Najafali D, Kehrer A, Prantl L, Iske J, Dean J, Hoefer S, Knoedler C and Knoedler S (2024) The law code of ChatGPT and artificial intelligence—how to shield plastic surgeons and reconstructive surgeons against Justitia's sword. Front. Surg. 11: 1390684. doi: 10.3389/fsurg.2024.1390684
Received: 23 February 2024; Accepted: 2 July 2024;
Published: 26 July 2024.
Edited by:
Kavit R. Amin, Manchester University NHS Foundation Trust (MFT), United Kingdom
Reviewed by:
Sergio M. Navarro, Mayo Clinic, United States
© 2024 Knoedler, Vogt, Alfertshofer, Camacho, Najafali, Kehrer, Prantl, Iske, Dean, Hoefer, Knoedler and Knoedler. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Leonard Knoedler, leonard.knoedler@ur.de