ORIGINAL RESEARCH article

Front. Public Health
Sec. Digital Public Health
Volume 12 - 2024 | doi: 10.3389/fpubh.2024.1428396
This article is part of the Research Topic: AI-Driven Healthcare Delivery, Ageism, and Implications for Older Adults: Emerging Trends and Challenges in Public Health

Physicians' Ethical Concerns About Artificial Intelligence in Medicine: A Qualitative Study: "The final decision should rest with a human"

Provisionally accepted
Fatma Kahraman 1, Aysenur Aktas 1, Serra Bayrakceken 1, Tuna Çakar 2*, Hande Serim Tarcan 1, Bugrahan Bayram 3, Berk Durak 1, Yesim Isil Ulman 1
  • 1 Acıbadem University, Istanbul, Türkiye
  • 2 MEF University, Istanbul, Türkiye
  • 3 Kessler Foundation, West Orange, New Jersey, United States

The final, formatted version of the article will be published soon.

    Background/aim: Artificial intelligence (AI) is the capability of computational systems to perform tasks that require human-like cognitive functions, such as reasoning, learning, and decision-making. Unlike human intelligence, AI does not involve sentience or consciousness; it relies on data processing, pattern recognition, and prediction through algorithms and learned experience. In healthcare, including neuroscience, AI is valuable for improving prevention, diagnosis, prognosis, and surveillance.

    Methods: This qualitative study aimed to investigate the acceptability of AI in medicine (AIIM) and to elucidate the technical and scientific, as well as social and ethical, issues involved. Twenty-five physicians from various specialties were interviewed in depth about their views, experience, knowledge, and attitudes regarding AI in healthcare.

    Results: Content analysis confirmed the key ethical principles involved: confidentiality, beneficence, and non-maleficence; honesty was the least frequently invoked principle. Thematic analysis established four salient topic areas: advantages, risks, restrictions, and precautions. Alongside the advantages, participants identified many limitations and risks, and the study revealed a perceived need for multi-dimensional precautions to be embedded in healthcare policies to counter those risks.

    Conclusions: The authors conclude that AI should be rationally guided, function transparently, and produce impartial results, assisting human healthcare professionals collaboratively. Such AI would permit fairer, more innovative healthcare that benefits patients and society while preserving human dignity. It can foster accuracy and precision in medical practice and reduce physicians' workload by assisting with clinical tasks. AIIM that functions transparently and respects the public interest can be an inspiring scientific innovation for humanity.

    Keywords: artificial intelligence, medicine, healthcare, ethics, decision-making, neuroscience, qualitative research

    Received: 06 May 2024; Accepted: 06 Nov 2024.

    Copyright: © 2024 Kahraman, Aktas, Bayrakceken, Çakar, Tarcan, Bayram, Durak and Ulman. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence: Tuna Çakar, MEF University, Istanbul, Türkiye

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.