Innovation and practice in medicine and healthcare increasingly rely on integrating and interpreting a wide variety of medical data (e.g. genomics and other omics data, imaging data) and other personal data. This trend of datafication offers significant potential for understanding health and illness, developing more personalized treatments, improving disease prevention and providing healthcare more efficiently. In this context, artificial intelligence (AI) using methods such as machine learning (ML) and deep learning (DL) plays an increasingly important role, e.g. in developing predictive risk scores, phenotyping cancers, diagnosing rare diseases or even designing molecular pharmaceuticals. In addition to these novel techniques, algorithmically controlled automated decision-making (ADM) or decision support systems, i.e. procedures that delegate decisions to other entities, which then act on the basis of automatically executed decision-making models, are increasingly used in medical and healthcare settings.
Scholarship on the ethical, legal and social issues of AI in data-intensive medicine and healthcare has highlighted numerous areas of contention, including transparency and explainability, privacy and data protection, trust, bias, and how AI might affect the patient-doctor relationship as well as support interdisciplinary expert teams in their decisions. Aiming to extend this perspective, this Research Topic focuses on AI applications, with or without ADM, in various areas of data-intensive innovative medicine, such as genomics, oncology, intensive care, elderly care, infectious disease management, neuroscience, psychiatry, allocation of care and reproductive medicine. We seek contributions that explore whether and how ethical and societal considerations can and should be part of AI and ADM, e.g. by considering diversity issues, the significance of datafication and automation, and public and patient participation; by developing deliberative or open science approaches (such as open code); and by ensuring interoperability among many developers and users while avoiding misuse, hacking and manipulation. Another goal is to examine the ethical challenges raised by extending ADM from diagnostics to treatment decision making, and how to bridge the gap between diagnosis and treatment. We invite scholars to submit theoretical ethical papers, case studies on engagement and diversity approaches, cultural reflections on misuse and hacking, as well as socio-empirical analyses of current developments from the perspective of professionals, patients, or citizens.