
REVIEW article

Front. Med., 08 January 2024
Sec. Regulatory Science
This article is part of the Research Topic: Healthcare in the age of sapient machines: physician decision-making autonomy faced with artificial intelligence. Ethical, deontological and compensatory aspects.

The ménage à trois of healthcare: the actors in after-AI era under patient consent

  • Department of Diagnostics and Public Health, Section of Forensic Medicine, University of Verona, Verona, Italy

Introduction: Artificial intelligence has become an increasingly powerful technological instrument in recent years, revolutionizing many sectors, including public health. Its use in this field will inevitably change clinical practice, the patient-caregiver relationship and the concept of the diagnosis and treatment pathway, affecting the balance between the patient’s right to self-determination and health, and thus leading to an evolution of the concept of informed consent. The aim was to characterize the guidelines for the use of artificial intelligence, its areas of application and the relevant legislation, to propose guiding principles for the design of optimal informed consent for its use.

Materials and methods: A classic keyword-based review was conducted on the main search engines. Guidelines and regulations issued by scientific authorities and legal bodies on the use of artificial intelligence in public health were also analyzed.

Results: The current areas of application of this technology were identified and grouped by sector, together with its impact on each, and a summary of current guidelines and legislation was compiled.

Discussion: The ethical implications of artificial intelligence in the health care system were assessed, particularly regarding the therapeutic alliance between doctor and patient, and the balance between the right to self-determination and health. Finally, given the evolution of informed consent in relation to the use of this new technology, seven guiding principles were proposed to guarantee the right to the most informed consent or dissent.

Introduction

The authors of this article believe it is necessary to pose an initial axiomatic consideration from which the subsequent reasoning can then be developed: “artificial intelligence is already a current reality, destined to become an integral part of the care process for doctor and patient, so it cannot be scotomized.”

Due to the multiple fields of application and underlying methods, it has not been possible, to date, to give an unambiguous definition of artificial intelligence (A.I.) (1). In generic terms, A.I. is an iterative learning model based on the acquisition of big data that leads to the development of interpretations, predictive models and decision-making processes, not based on a priori mechanisms or dependent on third-party intervention (1). A specific definition of AI in a recommendation of the Council on Artificial Intelligence of the OECD states, “An AI system is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy” (2). Machine learning is defined as the ability of the machine to learn without being programmed in advance (3). This process is the basis for the development of so-called A.I.-based prediction models (AIPM), which are models that provide probabilistic predictions and outcomes after certain inputs have been provided (4).
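As a purely illustrative sketch (not taken from the cited literature), the probabilistic character of an AIPM can be conveyed with a minimal learned model: given fabricated inputs, it outputs a probability of an outcome rather than following hand-coded, a priori rules.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(xs, ys, lr=0.5, epochs=2000):
    """Fit y ~ sigmoid(w*x + b) by per-example gradient descent on log-loss."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)
            w -= lr * (p - y) * x   # gradient of log-loss w.r.t. w
            b -= lr * (p - y)       # gradient of log-loss w.r.t. b
    return w, b

# Fabricated data: low feature values -> outcome 0, high values -> outcome 1.
xs = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(xs, ys)

def predict(x):
    """Probabilistic prediction, in [0, 1], as an AIPM would return."""
    return sigmoid(w * x + b)

print(predict(0.1))  # close to 0
print(predict(0.9))  # close to 1
```

The model's "knowledge" exists only as the learned parameters w and b, which were never programmed in advance: the essence of the machine-learning definition above.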

As a testament to A.I. being an established reality rather than a mere future possibility, suffice it to consider that ChatGPT, since its release in November 2022, reached 100 million users within 2 months, an unprecedented rate of adoption in the technology world (5).

The areas of A.I. use, in both the public and private sectors, are many and rapidly increasing, to the point that, partly because of the scope and implications of this tool, the High-Level Expert Group on Artificial Intelligence was set up by the European Commission in 2020 (1).

This unprecedented technological tool has already seen its reflection in multiple fields (e.g., finance, data processing, word processing and document analysis), including healthcare, laying the groundwork for a revolution in the system of care. Suffice it to say that several studies have tested AI’s diagnostic-interpretive capabilities in radiology, finding them equal to, if not superior to, those of experts in the field (6–9).

Moreover, the current era is experiencing a phase of innovation and implementation that is not limited to the process of care itself, but also involves the tools used within it.

In fact, the technological revolution is also being reflected in the pharmaceutical and healthcare device sectors. In the former, AI is already being applied in new drug discovery and development, clinical and nonclinical research processes, and post-marketing safety monitoring (10). In the latter, there is evidence that more and more devices using machine learning-based technology are being approved by the U.S. Food and Drug Administration (FDA) (11).

The relationship of care between physician and patient has undergone profound change over time, partly secondary to technological development and to the social and cultural changes of the centuries. From the earliest days of medicine, this relationship was characterized by a hierarchical setting, which saw the act of care, and consequently the figure of the one who provided it (priest, shaman and later doctor), as imposed on the patient, regardless of his or her will or opinion on the matter, in order to ensure the patient’s health or the good of the community. In this context, knowledge and choices regarding treatment were not accessible to the sick person, who, according to a paternalistic approach to the care relationship, passively followed the course of care according to a pattern of vertical imposition (12). A pivotal example of this system is Hippocratic medicine, in which the explicit consent of the patient was not required: it was sufficient to establish, by implication, a fiduciary relationship with the caregiver and trust in his or her presumed ability to provide the necessary care, in the belief that the principle of beneficence took priority over that of autonomy. Over the centuries, the doctor-patient relationship has evolved beyond this inherent subservience to care, with the end of paternalism and the emergence of the concept of “therapeutic alliance” (13), in which the value of autonomous decision-making on the part of the patient is affirmed, thus integrating the right to health with the right to self-determination.

In all the historical phases just described, in any case, the relationship between patient and caregiver was a bipolar, uni- or bidirectional interaction between two human beings or sentient entities. With the introduction of AI, however, this relationship is bound to change with the addition of a third actor: the AI. This inevitably results in a paradigm shift, as the process of patient caretaking and the care pathway will be characterized by a triangulated interactive dynamic.

The advent of this third actor in the era of the therapeutic and post-paternalistic alliance implies inevitable repercussions on the foundational elements of the contemporary doctor-patient relationship: access to care and informed consent/dissent to care by the patient.

The concept of informed consent was introduced in the early twentieth century, when Judge Cardozo, ruling on the patient’s right to self-determination, affirmed the right of every adult of sound mind to dispose of his or her own body (14). In the years that followed, the jurisprudential-ethical institution of informed consent took further root in the Nuremberg Code (1947), which expresses the principles underlying the lawfulness of health treatments and clinical trials, and later in the Declaration of Helsinki (1964), concerning medical research. This historical development reached its synthesis with the drafting of the well-known Oviedo Convention (Convention on Human Rights and Biomedicine, ETS No. 164), whose Art. 5 states: “An intervention in the health field may only be carried out after the person concerned has given free and informed consent to it. This person shall beforehand be given appropriate information as to the purpose and nature of the intervention as well as on its consequences and risks. The person concerned may freely withdraw consent at any time” (15).

The issue that therefore already appears necessary to address is how, and whether, the patient will consent to AI co-participation in his or her course of care.

The aim of the classic review was to characterize the current role of AI in public health, as well as its future implications, by analyzing current areas of application, regulatory guidelines for use and current relevant legislation, to understand the actual interactions of AI with the doctor and the patient. This served as a basis for proposing key principles for informed patient consent to the use of AI.

Materials and methods

A classic review of the scientific literature was conducted using the main search engines, such as PubMed and Google Scholar. Keywords included: “A.I.,” “informed consent,” “guidelines,” “machine learning,” “healthcare,” “medical devices,” “therapeutic alliance.” Subsequently, documents issued by national and international institutional control bodies were analyzed as sources, in order to study the current guidelines and regulations on the use of AI in public health released by: the FDA, the World Health Organization (WHO), Health Canada, the United Kingdom’s Medicines and Healthcare products Regulatory Agency (MHRA), the American Medical Informatics Association (AMIA), the International Coalition of Medicines Regulatory Authorities (ICMRA) and the European Parliament.

Results

The classic literature review conducted on PubMed and Google Scholar highlighted the areas of current application of AI, studies evaluating the efficacy and impact of its use, and the ethical implications of its presence in public health. The analysis of institutional sources (FDA, WHO, Health Canada, MHRA, AMIA, ICMRA and the European Parliament) identified current guidelines on the design, use and monitoring of AI in public health, as well as the current regulatory legislation.

The following are the results obtained from the review of the relevant scientific literature, together with what emerged from the analysis of the main guidelines identified and the relevant legislation.

As shown in Figure 1, analysis of FDA databases found that, to date, 521 AI-enabled biomedical devices have been approved across various application areas: 4 devices in anesthesiology; 1 in dentistry; 3 in general hospital use; 14 in neurology; 1 in orthopedics; 57 in cardiovascular medicine; 6 in gastroenterology and urology; 15 in hematology; 1 in obstetrics and gynecology; 4 in pathology; 6 in clinical chemistry; 5 in general and plastic surgery; 5 in microbiology; and 7 in ophthalmology. Radiology stands out in particular, with 392 approved devices (75% of the total) (16).


Figure 1. Enabled medical devices approved by FDA up to October 5, 2022.
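The per-specialty counts reported above can be sanity-checked with a few lines of code. The numbers are those quoted from the FDA data; the snippet itself is only an arithmetic check:

```python
# Figures quoted from the text above (FDA-approved AI/ML-enabled devices,
# up to October 5, 2022). The snippet only verifies the arithmetic.
counts = {
    "anesthesiology": 4, "dental": 1, "general hospital": 3,
    "neurology": 14, "orthopedic": 1, "cardiovascular": 57,
    "gastroenterology and urology": 6, "hematology": 15,
    "obstetrics and gynecology": 1, "pathology": 4,
    "clinical chemistry": 6, "general and plastic surgery": 5,
    "microbiology": 5, "ophthalmology": 7, "radiology": 392,
}
total = sum(counts.values())
radiology_share = counts["radiology"] / total
print(total)                      # 521
print(f"{radiology_share:.0%}")   # 75%
```

The per-specialty figures do indeed sum to 521, and radiology's 392 devices account for roughly three quarters of all approvals.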

From a study by de Hond et al. (4), existing guidelines and quality criteria regarding the development, evaluation and implementation phases of AIPMs were extrapolated and summarized in Table 1.


Table 1. Stages of AIPM development, evaluation and implementation.

These guidelines analyze best practices to be applied in the development of AIPMs in order to reduce the introduction of systematic bias, so as to optimize the yield and consequent benefits of applying these models in clinical practice.

Guiding principles proposed by the world’s leading institutional bodies for the development and use of AI in healthcare were also identified.

WHO compiled the first global report on AI in health (17), within which laws, policies and principles that apply to use of artificial intelligence for health are analyzed, as well as the ethical principles underlying its use, namely, “Protect autonomy,” “Promote human well-being, human safety and the public interest,” “Ensure transparency, explainability and intelligibility,” “Foster responsibility and accountability,” “Ensure inclusiveness and equity,” and “Promote artificial intelligence that is responsive and sustainable.”

The joint work of the FDA, Health Canada and the MHRA resulted in the identification of 10 guiding principles for the development of Good Machine Learning Practice (GMLP) (18), as shown in Table 2.


Table 2. Guiding principles for the development of GMLP.

Badal et al. (19) collected the regulatory guiding principles (Table 3) produced so far by the FDA and Health Canada (18), the WHO (17), and AMIA (20).


Table 3. Regulatory principles produced by FDA, Health Canada, WHO, AMIA.

ICMRA has also proposed general and specific recommendations for the EU (1) on how the monitoring of AI development and implementation should be exercised by specially created institutional bodies.

Specifically, the recommendations are synthesized in Tables 4, 5.


Table 4. ICMRA general recommendations.


Table 5. ICMRA recommendations for EU.

In June 2023, the European Parliament voted with a strong majority in favor of the Artificial Intelligence Act. The goal is to ensure compliance with the EU’s core values in the context of the use of AI, particularly the safety of users, respect for their privacy, and transparency and non-discrimination (21). It is clear from the AI Act that the EU’s position is to consider medical devices implemented with AI as high-risk, as set out in Title III, “High-Risk AI Systems,” a category encompassing all those technologies that may adversely affect fundamental human rights and therefore require stricter regulation by the relevant bodies.

The AI Act also provides, in Title IV, Art. 52, “Transparency obligations for certain AI systems”; specifically, the text states:

1. “Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. This obligation shall not apply to AI systems authorized by law to detect, prevent, investigate and prosecute criminal offenses, unless those systems are available for the public to report a criminal offense.”

2. “Users of an emotion recognition system or a biometric categorization system shall inform of the operation of the system the natural persons exposed thereto. This obligation shall not apply to AI systems used for biometric categorization, which are permitted by law to detect, prevent and investigate criminal offenses.”

3. “Users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful (‘deep fakes’), shall disclose that the content has been artificially generated or manipulated. However, the first subparagraph shall not apply where the use is authorized by law to detect, prevent, investigate and prosecute criminal offences or it is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, and subject to appropriate safeguards for the rights and freedoms of third parties.”

Discussion

The system of care nowadays has to consider, based on the highlighted elements, two factors: (1) the therapeutic alliance between patient and caregiver, which sees its foundation in informed consent; (2) the intervention in the process of a third actor, the AI.

Informed consent represents the synthesis of two fundamental human rights: the right to health and the right to self-determination. Over the years, as a result of technological and scientific innovations, as well as cultural and social changes, the intrinsic nature of the relationship of care between doctor and patient has changed radically, transitioning from a paternalistic relationship, in which the doctor stood as the sole holder of decision-making power in the care path of his patient, to a true therapeutic alliance between doctor and patient, in which the latter’s decision-making autonomy takes on a fundamental role, making the patient an active part of the care decision-making process.

The main characteristics informed consent must have in order to be effectively valid include:

• personalization of consent relative to the case in question;

• freedom on the part of the patient in accepting or rejecting the proposed treatment;

• completeness of the information provided, which must also be up to date;

• comprehensibility, making the information easy for the patient to understand;

• the possibility for the patient to withdraw consent at any time during the course of care.

Considering the above premises, for the advent of AI to actually materialize and integrate into the system of care, it must be accepted not only by the scientific community but also by the individual patient, as a new tool potentially employed in his or her care; its integration into the informed consent proposal is therefore essential.

Since informed consent is personal, and therefore customized to the specific patient’s care pathway, the intervention of AI in that pathway must also be considered at each proposed step. However, this inevitably introduces several critical issues and questions, secondary to the inherent characteristics of AI: it is a rapidly advancing technology and, above all, one whose mechanisms of operation cannot be characterized at every single stage. Indeed, one of the main problems in the use of AI in the context of informed consent is that of the “black box”: the opacity of the container, the AI, whose internal mechanisms, i.e., the learned patterns, are not definable a priori and not visible, and which, depending on the input provided and the phenomena experienced, can lead to different and unpredictable outcomes. A tool characterized by this inherent non-intelligibility of operation, together with an increasing level of autonomy, is different from any instrument used in health care to date: it can potentially raise the standard of care, yet it is not completely predictable, thus laying the groundwork for an ethical and medico-legal dilemma between ensuring adherence to better standards and, on the other hand, the impossibility of completely controlling the machine, with possible repercussions on users (22).

It should also be pointed out that, since the algorithms underlying AI are a product of human activity, they may have acquired biases (defined as systematic errors in outputs or processes (23, 24)) resulting from the socio-cultural heritage of the producer or from mere methodological errors during the design phase (25). While the use of AI can improve treatments and make them more accessible, it can also reinforce existing disparities, perpetuating, through the learning model, the biases inherent in the initial input provided by the source data (26). If a population subgroup, for heterogeneous reasons, rejects the use of AI in its care pathway, not only could the technology not be offered to those users, but that group would be scotomized from the distribution analysis of the study variables. This would lead to an exclusion/selection bias, subsequently potentially amplified by the autonomous learning mechanisms of the machine itself. Dissent to the use of AI by patients could thus reverberate into a systematic selection bias, distorting the true representation of the epidemiological characteristics of the population under study and ultimately resulting in potentially erroneous conclusions, even in the field of medical research (27).
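The selection-bias mechanism just described can be illustrated with a small simulation on fabricated data: if one subgroup largely dissents from AI use, the prevalence observed in the consenting sample drifts away from the true population value. All numbers here are hypothetical, chosen only to make the distortion visible:

```python
import random

random.seed(0)

# Fabricated population: outcome prevalence 30% in group A, 60% in group B,
# so the true overall prevalence is 45%.
population = (
    [{"group": "A", "outcome": 1}] * 300 + [{"group": "A", "outcome": 0}] * 700 +
    [{"group": "B", "outcome": 1}] * 600 + [{"group": "B", "outcome": 0}] * 400
)
true_prev = sum(p["outcome"] for p in population) / len(population)

# Suppose group B largely dissents from AI use: only a consenting
# minority (~10%) of B enters the data the model learns from.
consenting = [p for p in population
              if p["group"] == "A" or random.random() < 0.10]
observed_prev = sum(p["outcome"] for p in consenting) / len(consenting)

print(round(true_prev, 2))      # 0.45
print(round(observed_prev, 2))  # noticeably lower: skewed toward group A
```

Any model trained on the consenting sample would learn group A's epidemiology as if it were the whole population's, and autonomous retraining on its own outputs could amplify the distortion further.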

It seems appropriate to consider how AI can concretely integrate into the proposal for informed consent to treatment, responding to the dilemma between the patient’s self-determination and his or her right to health and to the best available care.

It will be necessary to integrate AI in every step of the care pathway, from history collection to objective assessment, as well as in the clinical, laboratory, and instrumental diagnostic pathway, in therapeutic procedures (pharmacological, interventional, and/or surgical), and in the definition of prognosis and follow-up pathways.

After an exhaustive explanation of the possible uses of AI and how it works, at every possible node of the pathway of care and treatment the patient must have the option to choose whether to avail himself or herself of it or to renounce it, even at the cost of forgoing the best standards of care. This choice should be posed on a per-act basis, so as to maintain the personalization of consent and avoid having to accept or reject tout court the presence of the third actor. In these terms, it seems useful to propose guiding principles for the optimal drafting of informed consent in the new model of care toward which we are inevitably moving.

Key proposals for informed consent in the era of triangular therapeutic alliance between physician, patient, and artificial intelligence:

1. The patient, consistent with the nature of “black boxes,” needs to understand what AI is and how it works.

2. The possibility of withdrawal of consent at any time and optimal privacy management must be guaranteed; the data used must not be traceable to the patient unless explicitly requested by the patient.

3. It must be defined in which nodes AI intervention is proposed, and the patient must be able to choose in which of these to accept or reject it.

4. The role of AI in each individual node must be identified, breaking it down into types of activities performed and level of autonomy in managing them.

5. The consequences of accepting or rejecting AI in each individual treatment step must be made explicit.

6. During each medical act, the patient should be accompanied by an explanation of which activities are performed by the AI and which by the physician, as well as their respective roles.

7. Adequately trained individuals should be provided to cooperate in drafting and administering consent, technical-procedural explanation, as well as lending assistance in case of ethical dilemmas.
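As a hypothetical sketch of how principles 2–5 might be operationalized in an electronic consent record, per-node consent with an explicit AI role, autonomy level and withdrawal option could look like the following. All class and field names are illustrative assumptions, not part of any existing system:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class NodeConsent:
    """One consent decision for one node of the care pathway."""
    node: str            # e.g., "radiological diagnosis" (principle 3)
    ai_role: str         # activities performed by the AI (principle 4)
    autonomy_level: str  # e.g., "decision support only" (principle 4)
    accepted: bool       # per-act acceptance or rejection (principles 3, 5)
    timestamp: datetime = field(default_factory=datetime.now)
    withdrawn: bool = False

    def withdraw(self):
        # Principle 2: consent may be withdrawn at any time.
        self.withdrawn = True

    @property
    def active(self) -> bool:
        return self.accepted and not self.withdrawn

consents = [
    NodeConsent("history collection", "summarizes the interview", "assistive", True),
    NodeConsent("radiological diagnosis", "flags suspect findings", "decision support", True),
]
consents[0].withdraw()
print([c.active for c in consents])  # [False, True]
```

Recording consent node by node, rather than as a single all-or-nothing signature, is what preserves the per-act personalization argued for above: withdrawing consent at one node leaves the others untouched.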

Table 6 shows examples for each of the above key points:


Table 6. Examples for key points.

Artificial intelligence is a current reality, destined to become an integral part of the treatment process for doctor and patient. Therefore, since it cannot be scotomized, the challenging goal will be to explain its nature and applications to the patient so as to ensure consent or dissent, which cannot be inherently informed as to the inner workings of AI but must instead consider its impact on the possibilities of treatment. The goal, then, to be pursued progressively and collectively alongside the inevitable technological development, while guaranteeing both the right to health and the right to self-determination, is the building of a therapeutic alliance between physician, patient and AI.

Finally, two questions arise spontaneously, only partially provocative:

• Will some physicians refuse the use of AI for ethical reasons?

• Will the physician ever risk being excluded from this new triangular therapeutic alliance?

Author contributions

RS: Conceptualization, Writing – original draft. RT: Conceptualization, Writing – original draft. FA: Writing – review & editing. ST: Writing – review & editing. DD: Writing – review & editing.

Funding

The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

1. ICMRA. Informal innovation network horizon scanning assessment report - artificial intelligence (2021).

2. OECD. Recommendation of the Council on Artificial Intelligence (OECD Legal Instruments, OECD/LEGAL/0449). Paris: Organisation for Economic Co-operation and Development (2019).

3. Samuel, AL. Some studies in machine learning using the game of checkers. IBM J Res Dev. (2000) 44:206–26. doi: 10.1147/rd.441.0206

4. de Hond, AAH, Leeuwenberg, AM, Hooft, L, Kant, IMJ, Nijman, SWJ, van Os, HJA, et al. Guidelines and quality criteria for artificial intelligence-based prediction models in healthcare: a scoping review. NPJ Digit Med. (2022) 5:2. doi: 10.1038/s41746-021-00549-7

5. Meskó, B, and Topol, EJ. The imperative for regulatory oversight of large language models (or generative AI) in healthcare. NPJ Digit Med. (2023) 6:120. doi: 10.1038/s41746-023-00873-0

6. Gulshan, V, Peng, L, Coram, M, Stumpe, MC, Wu, D, Narayanaswamy, A, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA. (2016) 316:2402–10. doi: 10.1001/jama.2016.17216

7. Esteva, A, Kuprel, B, Novoa, RA, Ko, J, Swetter, SM, Blau, HM, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. (2017) 542:115–8. doi: 10.1038/nature21056

8. Rajpurkar, P, Irvin, J, Ball, RL, Zhu, K, Yang, B, Mehta, H, et al. Deep learning for chest radiograph diagnosis: a retrospective comparison of the CheXNeXt algorithm to practicing radiologists. PLoS Med. (2018) 15:e1002686. doi: 10.1371/journal.pmed.1002686

9. Hannun, AY, Rajpurkar, P, Haghpanahi, M, Tison, GH, Bourn, C, Turakhia, MP, et al. Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network. Nat Med. (2019) 25:65–9. doi: 10.1038/s41591-018-0268-3

10. U.S. Food & Drug Administration. Using artificial intelligence & machine learning in the development of drug & biological products: discussion paper and request for feedback (2023).

11. Benjamens, S, Dhunnoo, P, and Meskó, B. The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database. NPJ Digit Med. (2020) 3:118. doi: 10.1038/s41746-020-00324-0

12. Magner, L, and Kim, O. A history of medicine. US: CRC Press (2018).

13. Baier, AL, Kline, AC, and Feeny, NC. Therapeutic alliance as a mediator of change: a systematic review and evaluation of research. Clin Psychol Rev. (2020) 82:101921. doi: 10.1016/j.cpr.2020.101921

14. Faden, RR, and Beauchamp, TL. A history and theory of informed consent. Oxford: Oxford University Press (1986).

15. Council of Europe. Convention for the protection of human rights and dignity of the human being with regard to the application of biology and medicine: Convention on human rights and biomedicine. Oviedo (1997).

17. World Health Organization. Ethics and governance of artificial intelligence for health: WHO guidance (2021).

18. U.S. Food & Drug Administration, Health Canada, Medicines and Healthcare products Regulatory Agency. Good machine learning practice for medical device development: guiding principles (2021).

19. Badal, K, Lee, CM, and Esserman, LJ. Guiding principles for the responsible development of artificial intelligence tools for healthcare. Commun Med. (2023) 3:47. doi: 10.1038/s43856-023-00279-9

20. Solomonides, AE, Koski, E, Atabaki, SM, Weinberg, S, McGreevey, JD, Kannry, JL, et al. Defining AMIA's artificial intelligence principles. J Am Med Inform Assoc. (2022) 29:585–91. doi: 10.1093/jamia/ocac006

22. Jassar, S, Adams, SJ, Zarzeczny, A, and Burbridge, BE. The future of artificial intelligence in medicine: medical-legal considerations for health leaders. Healthc Manage Forum. (2022) 35:185–9. doi: 10.1177/08404704221082069

23. Fletcher, RR, Nakeshimana, A, and Olubeko, O. Addressing fairness, bias, and appropriate use of artificial intelligence and machine learning in global health. Front Artif Intell. (2021) 3:561802. doi: 10.3389/frai.2020.561802

24. Mehrabi, N, Morstatter, F, Saxena, N, Lerman, K, and Galstyan, A. A survey on bias and fairness in machine learning. ACM Comput Surv. (2021) 54:1–35. doi: 10.1145/3457607

25. Vicente, L, and Matute, H. Humans inherit artificial intelligence biases. Sci Rep. (2023) 13:15737. doi: 10.1038/s41598-023-42384-8

26. Larrazabal, AJ, Nieto, N, Peterson, V, Milone, DH, and Ferrante, E. Gender imbalance in medical imaging datasets produces biased classifiers for computer-aided diagnosis. Proc Natl Acad Sci U S A. (2020) 117:12592–4. doi: 10.1073/pnas.1919012117

27. Astromskė, K, Peičius, E, and Astromskis, P. Ethical and legal challenges of informed consent applying artificial intelligence in medical diagnostic consultations. AI Soc. (2021) 36:509–20. doi: 10.1007/s00146-020-01008-9

Keywords: artificial intelligence, informed consent, therapeutic alliance, patient-caregiver relationship, medical ethics, patient autonomy

Citation: Saccà R, Turrini R, Ausania F, Turrina S and De Leo D (2024) The ménage à trois of healthcare: the actors in after-AI era under patient consent. Front. Med. 10:1329087. doi: 10.3389/fmed.2023.1329087

Received: 27 October 2023; Accepted: 27 December 2023;
Published: 08 January 2024.

Edited by:

Filippo Gibelli, University of Camerino, Italy

Reviewed by:

Andrea Verzeletti, University of Brescia, Italy
Antonina Argo, University of Palermo, Italy

Copyright © 2024 Saccà, Turrini, Ausania, Turrina and De Leo. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Riccardo Saccà, riccardo.sacca@univr.it; Rachele Turrini, rachele.turrini@univr.it

These authors have contributed equally to this work and share first authorship
