
EDITORIAL article

Front. Hum. Dyn., 22 November 2022
Sec. Digital Impacts
This article is part of the Research Topic Human and Artificial Collaboration for Medical Best Practices

Editorial: Human and artificial collaboration for medical best practices

M.-A. Grasso1† and R. Pareschi2*†

  • 1Naver Labs Europe, Meylan, France
  • 2Stake Lab, University of Molise, Campobasso, Italy

Healthcare is one of the sectors where Artificial Intelligence (AI) is creating great expectations, mainly through machine learning (ML) algorithms for diagnostic and prognostic purposes. The matter is highly topical: beyond the issues specific to adopting AI in the healthcare industry, it raises questions of general interest about the collaboration between human intelligence, embodied by the medical staff, and machine intelligence, provided by AI algorithms and methodologies. The insights thus gained could be transferred to other contexts in which this collaboration is underway. Building on these premises, this Frontiers in Human Dynamics Research Topic aims to deepen the understanding of the issues surrounding the use of AI in healthcare through two types of articles: one consisting of case studies and the other discussing methodologies suitable for the design of such systems.

The first case study illustrates a natural evolution, namely the integration of Artificial Intelligence with telemedicine, i.e., the remote management of patient care. One relevant aspect of telemedicine is the use of telecommunication to support tasks such as diagnosing and monitoring patient recovery and health conditions: biometric data from wearable or remote devices, such as pulse monitors or blood pressure cuffs, are transmitted so that medical staff can oversee patients in real time, whether in hospital facilities or at home. The decision-making processes associated with the availability of these data can now be enhanced through ML techniques. An example is provided by the system described in the article ATTICUS: Ambient-Intelligent Tele-monitoring and Telemetry for Incepting and Catering Over Human Sustainability, whose design and implementation took place in the context of the ATTICUS project funded by the Italian Ministry of University and Research (Laudato et al.). ATTICUS hinges on two components: (i) an intelligent wearable device, in the form of a short vest made of innovative fabric, that acquires body signals in real time (electrocardiogram, respiratory waves, temperature); and (ii) a distributed Decision Support System (DSS) that integrates advanced machine learning methods to automate the detection of anomalies. As data from the wearable device are pipelined into the DSS, the system provides customized and specialized check-ups, as well as standard monitoring functions at home and in the hospital, thus showing that feeding biometric information into data-intensive AI techniques such as ML is, as expected, highly effective.
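The article describes ATTICUS's ML-based anomaly detection in full; as a minimal illustrative sketch of the underlying idea, the Python below flags outliers in a biometric stream with a rolling z-score before, say, alerting a DSS. The class name, window size, threshold, and heart-rate data are all assumptions for illustration, not the project's actual models.

```python
from collections import deque
from statistics import mean, stdev

class VitalSignMonitor:
    """Rolling z-score detector over a stream of biometric readings.

    An illustrative stand-in for the anomaly detection a DSS such as
    ATTICUS's might run; the real system uses advanced ML models.
    """

    def __init__(self, window=60, threshold=3.0):
        self.readings = deque(maxlen=window)  # sliding window of recent values
        self.threshold = threshold            # z-score above which we raise an alert

    def update(self, value):
        """Ingest one reading; return True if it looks anomalous."""
        anomalous = False
        if len(self.readings) >= 2:
            mu, sigma = mean(self.readings), stdev(self.readings)
            anomalous = sigma > 0 and abs(value - mu) / sigma > self.threshold
        self.readings.append(value)
        return anomalous

# A heart-rate stream (beats per minute) with a sudden spike at the end
monitor = VitalSignMonitor(window=30)
stream = [72, 74, 71, 73, 75, 72, 70, 74, 73, 72] * 3 + [140]
print([(i, v) for i, v in enumerate(stream) if monitor.update(v)])  # -> [(30, 140)]
```

A production system would replace the z-score with learned models and add per-signal calibration; the sliding-window structure, however, is the common denominator of streaming anomaly detection.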

However, ML is only one of the many contributions of AI to the healthcare sector. Another example is offered by virtual assistants, the subject of the second case study. Specialized virtual assistants have been experimented with and applied in the tourism and culture industry, for example in the context of museum visits. Nor is the use of virtual assistants for therapeutic purposes new if we include psychotherapy: there are effective and practical artificial assistants that employ advanced natural language processing techniques, such as sentiment analysis and topic modeling, to help diagnose patients with psychological issues such as post-traumatic stress disorder and bipolar disorder. The extensibility of virtual assistants in the healthcare sector beyond psychological conditions is explored in the article 2Vita-B Physical: An Intelligent Home Rehabilitation System Based on Microsoft Azure Kinect, which illustrates 2Vita-B Physical, a virtual assistant for rehabilitation from orthopedic traumas designed to support both patients, by guiding them in the correct execution of exercises, and physical therapists, by letting them monitor their patients' progress remotely (Antico et al.). As opposed to assistants operating in a psychological setting, the interaction with users here takes place on a visual rather than a conversational level. To this end, 2Vita-B acquires the movements to be performed for therapeutic purposes through the Microsoft Azure Kinect DK device, providing both a visual guide to the actions to be completed and a measure of how far their execution deviates from the correct one. The article highlights the significant benefits that would derive from adopting the system on a large scale, in part already supported by the experiments carried out, and further strengthens the expectation that virtual assistants will be used in the medical field.
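To give a concrete flavor of this visual, non-conversational interaction, the sketch below shows how 3D joint positions of the kind produced by the Azure Kinect's body tracking can be reduced to joint angles and compared against a reference execution. The joint_angle and mean_deviation helpers and the sample coordinates are hypothetical and greatly simplified with respect to 2Vita-B's actual processing.

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by the 3D points a-b-c,
    e.g., knee flexion from the hip, knee, and ankle positions."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    norm = math.dist(a, b) * math.dist(c, b)
    return math.degrees(math.acos(dot / norm))

def mean_deviation(reference, performed):
    """Mean absolute difference (degrees) between the therapist's
    reference angles and the patient's angles, frame by frame."""
    return sum(abs(r - p) for r, p in zip(reference, performed)) / len(reference)

# Hypothetical single frame: hip, knee, and ankle positions in meters
hip, knee, ankle = (0.0, 1.0, 0.0), (0.0, 0.5, 0.1), (0.0, 0.0, 0.0)
print(round(joint_angle(hip, knee, ankle), 1))       # knee angle in this frame
print(mean_deviation([90, 95, 100], [88, 97, 110]))  # ~4.67 degrees off target
```

Per-joint angle trajectories of this kind are what allow the assistant both to render visual guidance and to quantify deviation for the remote therapist.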

The remaining articles address the design of such systems by tackling their ethics and usability issues. In addition to enabling new possibilities in the delivery of healthcare, AI systems can raise ethical and societal concerns among direct stakeholders, such as patients and doctors in healthcare environments, and indirect stakeholders, such as politicians or the general media. These concerns include data security, biases, cost-benefit debates, technical dependencies, and technical supremacy. A socio-technical approach suggests a promising path toward systems capable of mapping abstract guidelines onto real cases that involve multiple stakeholders and their specific tensions around the relevant values at stake. The effort toward AI ethics is motivated by the principles we want a technology to embody and draws mainly on the progress made in bioethics in recent years.

Several institutions have engaged in this effort: professional bodies such as the ACM and the IEEE, both dealing with the promotion of standards1; governments, in their role of defining and enacting regulations and legislation; and researchers and experts in computer science, law, and philosophy, who are contributing to the discussion on a technical level. The guidelines thus produced describe general principles that should be respected but do not, by themselves, solve real problems: once regulations are formulated, it remains to be shown how they apply to concrete cases of AI technology in the healthcare field.

In this respect, two articles in this Research Topic, On Assessing Trustworthy AI in Healthcare. Machine Learning as a Supportive Tool to Recognize Cardiac Arrest in Emergency Calls and Co-Design of a Trustworthy AI System in Healthcare: Deep Learning Based Skin Lesion Classifier, examine specific use cases and translate general principles into practical methods for solving the problems identified therein, settling interpretive tensions over what those principles mean for the different stakeholders involved (Zicari, Brusseau et al.; Zicari, Ahmed et al.). This approach can capture the perspectives of all primary and secondary stakeholders: patients and doctors at the primary level, but also public health (represented by organizations such as the WHO) and AI engineers at the secondary level. Examining these different points of view and the associated tensions yields answers that are not univocal and that require a practical weighing of benefits and risks. For example, identifying and quantifying what counts as harmful or as being in the patients' best interest is a multi-factorial effort: while the early identification of a possible malignant melanoma is undoubtedly an extra safeguard for the patient's health, making it available through AI self-examination practices must be weighed carefully.

Alongside ethics, human-computer interaction is equally relevant for the effective design of AI-based healthcare systems. Many existing solutions focus on computational power alone without taking into account the need to streamline the interaction between humans and an artificial agent; if a system is overly complex, the information it produces risks being ignored or under-used by those it is intended for. While solidly rooted in the disciplinary field of AI, the contribution in this Research Topic, Arbor: A Transparent, Simple AI Tool for Constructing Fast-and-frugal Classification and Diagnostic Trees in the Medical Domain, aims to limit such risks by leveraging simplicity in the representation of decision-making processes. The article illustrates how, following the methodological indications of Bounded Rationality, the Arbor tool maps Bayesian inference, a probabilistic technique widely used in AI-based diagnostic systems, into decision trees easily understandable by medical staff unaccustomed to probabilistic reasoning.
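For readers unfamiliar with the format, a fast-and-frugal tree asks one question per level and allows a final decision at every node. The sketch below mirrors the shape of the well-known coronary-care triage tree from the fast-and-frugal heuristics literature; the cues and the triage function are illustrative and are not Arbor's output.

```python
# A fast-and-frugal tree asks one cue per level and allows an immediate
# decision ("exit") at every node, so clinicians can follow it at a glance.
# Cues echo the classic coronary-care triage tree from the fast-and-frugal
# heuristics literature; they are illustrative, not produced by Arbor.
def triage(st_change: bool, chest_pain_is_chief_complaint: bool,
           any_other_risk_factor: bool) -> str:
    if st_change:                          # cue 1 exits on "yes"
        return "coronary care unit"
    if not chest_pain_is_chief_complaint:  # cue 2 exits on "no"
        return "regular nursing bed"
    if any_other_risk_factor:              # cue 3 decides both ways
        return "coronary care unit"
    return "regular nursing bed"

print(triage(st_change=False, chest_pain_is_chief_complaint=True,
             any_other_risk_factor=True))  # -> coronary care unit
```

The appeal of this representation is precisely what the article argues: each branch is a yes/no question a clinician can answer at the bedside, with no explicit probability calculations in sight.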

As editors of this Research Topic, we are grateful to the authors for the contributions put forward in their articles, which we hope will be helpful to readers interested in the application of AI to healthcare. This field will continue to grow, and it deserves closer multidisciplinary examination, considering the various ethical, legal, and social issues that need to be addressed in addition to the technical ones.

Author contributions

All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Footnotes

1. ^ High-Level Expert Group on Artificial Intelligence (AI HLEG) (2019). Ethics Guidelines for Trustworthy AI. European Commission. Available at: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai.

Keywords: Artificial Intelligence, healthcare, ethics, design methodology, medical practice

Citation: Grasso M-A and Pareschi R (2022) Editorial: Human and artificial collaboration for medical best practices. Front. Hum. Dyn. 4:1056997. doi: 10.3389/fhumd.2022.1056997

Received: 29 September 2022; Accepted: 04 November 2022;
Published: 22 November 2022.

Edited and reviewed by: Peter David Tolmie, University of Siegen, Germany

Copyright © 2022 Grasso and Pareschi. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Remo Pareschi, remo.pareschi@unimol.it

†These authors have contributed equally to this work
