- ¹Research Support Northern Ireland, Downpatrick, United Kingdom
- ²Department of Anthropology, London School of Economics and Political Science, London, United Kingdom
- ³Faculty of Health, Science, Social Care and Education, Kingston University London, London, United Kingdom
- ⁴Artificial Intelligence Researcher (Independent), Palo Alto, CA, United States
- ⁵Iheed, Dublin, Ireland
- ⁶Kent Surrey Sussex Academic Health Science Network (AHSN) and the National AHSN Network Artificial Intelligence (AI) Initiative, Surrey, United Kingdom
- ⁷Head of Research and Engagement, Surrey Heartlands Health and Care Partnership, Surrey, United Kingdom
Background: Advances in artificial intelligence (AI) technologies, together with the availability of big data in society, create uncertainties about how these developments will affect healthcare systems worldwide. Compassion is essential for high-quality healthcare, and research shows how prosocial caring behaviors benefit human health and societies. However, the possible association between AI technologies and compassion is under-conceptualized and underexplored.
Objectives: The aim of this scoping review is to provide a comprehensive, in-depth, and balanced perspective on the emerging topic of AI technologies and compassion, to inform future research and practice. The review questions were: How is compassion discussed in relation to AI technologies in healthcare? How are AI technologies being used to enhance compassion in healthcare? What are the gaps in current knowledge and unexplored potential? What are the key areas where AI technologies could support compassion in healthcare?
Materials and methods: A systematic scoping review following the five steps of Joanna Briggs Institute methodology. Presentation of the scoping review conforms with PRISMA-ScR (Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews). Eligibility criteria were defined according to three concept constructs (AI technologies, compassion, healthcare) developed from the literature and informed by medical subject headings (MeSH) and keywords for the electronic searches. Sources of evidence were the Web of Science and PubMed databases, limited to articles published in English, 2011–2022. Articles were screened by title/abstract using inclusion/exclusion criteria. Data extracted (author, date of publication, type of article, aim/context of healthcare, key relevant findings, country) were charted using data tables. Thematic analysis used an inductive-deductive approach to generate code categories from the review questions and the data. A multidisciplinary team assessed themes for resonance and relevance to research and practice.
Results: Searches identified 3,124 articles. A total of 197 were included after screening. The number of articles has increased over 10 years (2011, n = 1; 2021, n = 47; Jan–Aug 2022, n = 35). Overarching themes related to the review questions were: (1) Developments and debates (7 themes) Concerns about AI ethics, healthcare jobs, and loss of empathy; Human-centered design of AI technologies for healthcare; Optimistic speculation AI technologies will address care gaps; Interrogation of what it means to be human and to care; Recognition of future potential for patient monitoring, virtual proximity, and access to healthcare; Calls for curricula development and healthcare professional education; Implementation of AI applications to enhance health and wellbeing of the healthcare workforce. (2) How AI technologies enhance compassion (10 themes) Empathetic awareness; Empathetic response and relational behavior; Communication skills; Health coaching; Therapeutic interventions; Moral development learning; Clinical knowledge and clinical assessment; Healthcare quality assessment; Therapeutic bond and therapeutic alliance; Providing health information and advice. (3) Gaps in knowledge (4 themes) Educational effectiveness of AI-assisted learning; Patient diversity and AI technologies; Implementation of AI technologies in education and practice settings; Safety and clinical effectiveness of AI technologies. (4) Key areas for development (3 themes) Enriching education, learning and clinical practice; Extending healing spaces; Enhancing healing relationships.
Conclusion: There is an association between AI technologies and compassion in healthcare and interest in this association has grown internationally over the last decade. In a range of healthcare contexts, AI technologies are being used to enhance empathetic awareness; empathetic response and relational behavior; communication skills; health coaching; therapeutic interventions; moral development learning; clinical knowledge and clinical assessment; healthcare quality assessment; therapeutic bond and therapeutic alliance; and to provide health information and advice. The findings inform a reconceptualization of compassion as a human-AI system of intelligent caring comprising six elements: (1) Awareness of suffering (e.g., pain, distress, risk, disadvantage); (2) Understanding the suffering (significance, context, rights, responsibilities etc.); (3) Connecting with the suffering (e.g., verbal, physical, signs and symbols); (4) Making a judgment about the suffering (the need to act); (5) Responding with an intention to alleviate the suffering; (6) Attention to the effect and outcomes of the response. These elements can operate at an individual (human or machine) and collective systems level (healthcare organizations or systems) as a cyclical system to alleviate different types of suffering. New and novel approaches to human-AI intelligent caring could enrich education, learning, and clinical practice; extend healing spaces; and enhance healing relationships.
Implications: In a complex adaptive system such as healthcare, human-AI intelligent caring will need to be implemented, not as an ideology, but through strategic choices, incentives, regulation, professional education and training, and joined-up thinking about human-AI intelligent caring. Research funders can encourage research and development into the topic of AI technologies and compassion as a system of human-AI intelligent caring. Educators, technologists, and health professionals can inform themselves about the system of human-AI intelligent caring.
Highlights
- Artificial intelligence (AI) refers to computer systems that are designed to think or act like humans (human approach) and systems that think or act rationally (rational approach). However, current thinking about AI falls short of understanding the underlying motivational systems of thinking and acting like a human (e.g., compassion) or the influence of such motivational systems on complex societal systems (e.g., healthcare).
- Exploration of the associations between AI technologies and compassion has been hindered by two widely held assumptions: (1) that compassion is a felt emotion in the body produced through relationships and social dynamics, and (2) that technologies are objective and incapable of compassion (again an assumption based on a view that compassion only involves felt emotion). Although it is debated whether AI can feel or express genuine empathy, compassion is different because it is a system.
- The literature shows that AI technologies can be (a) individually programmed (i.e., “build compassion in”) to mimic elements of human compassion (e.g., emotion detection, affective response, empathetic display, socio-cultural intelligence) to various degrees of authenticity and success, and (b) used collectively within a system of healthcare to enhance compassion (e.g., increasing empathetic awareness, assessing needs in high-risk patient groups, understanding the person) (i.e., “use it for compassion”).
- Compassion can be conceptualized as a human-AI system of intelligent caring comprising six elements: (1) Awareness of suffering (e.g., pain, distress, risk, disadvantage); (2) Understanding the suffering (significance, context, rights, responsibilities etc.); (3) Connecting with the suffering (e.g., verbal, physical, signs and symbols); (4) Making a judgment about the suffering (the need to act); (5) Responding with an intention to alleviate the suffering; (6) Attention to the effect and outcomes of the response. Future research into these elements could develop new and novel approaches to human-AI intelligent caring.
1. Introduction
Artificial intelligence (AI) refers to computer systems that are designed to think or act like humans (human approach) and systems that think or act rationally (rational approach) (Russell and Norvig, 2020). This article uses a systematic scoping review of the literature to explore the possible association between AI technologies and compassion in healthcare. This topic relates to current debates about the way AI might be perceived or imagined to be caring (De Togni et al., 2021) or compassionate (Day et al., 2021).
Exploring the possible association between AI and compassion is important because AI mediates every area of healthcare systems (e.g., complex systems involving purchasers, providers, payers, patients, and so on) by powering search engines, analyzing data, and making recommendations (Bajwa et al., 2021), as well as through clinical and health-related applications (Davenport and Kalakota, 2019). AI can be extremely powerful for processing (e.g., using pattern recognition or predictive capabilities) “big data,” which refers to the masses of data that are increasingly readily available in society through digital devices (Topol, 2019). Machine learning is the most common form of AI and largely relies on supervised learning, in which computers are trained with labels decided by humans. Deep learning and adversarial learning involve training on unlabeled data to reveal underlying patterns (e.g., algorithms are used to find clusters or variances in data) (see footnote 1). However, current thinking about AI falls short of understanding the underlying motivational systems of thinking and acting like a human (e.g., compassion) or the influence of such motivational systems on complex societal systems (e.g., healthcare).
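The distinction drawn here between supervised learning (training on human-decided labels) and pattern-finding on unlabeled data can be illustrated with a minimal sketch, assuming Python with scikit-learn (not tools used in this review):

```python
# Minimal sketch (assuming scikit-learn; an illustration, not part of this
# review) of the distinction drawn above: supervised learning trains on
# labels decided by humans, while unsupervised methods reveal underlying
# patterns (e.g., clusters) in unlabeled data.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

# Toy data: 200 samples with 3 underlying groups.
X, y = make_blobs(n_samples=200, centers=3, random_state=0)

# Supervised: the model is trained with human-decided labels (y).
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("Supervised predictions:", clf.predict(X[:5]))

# Unsupervised: no labels are given; the algorithm finds clusters itself.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("Clusters found without labels:", km.labels_[:5])
```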
Yet current research shows that AI technologies (i.e., AI-driven machines, devices, programs, or applications) influence not only how humans think and act but also how healthcare professionals work and learn (Bin Kamarudin and Zary, 2019) (“healthcare professionals” is used here to mean the wide range of trained professionals that deliver clinical treatments and care, e.g., medical, surgical, nursing, professions allied to medicine, mental health professionals, and so on, rather than the broader health professions, general managers, administrative staff, etc.). For example, AI technologies inform more accurate imaging and diagnosis (Nichols et al., 2019), improve the efficiency of clinical screening (Grzybowski et al., 2020), and enable personalized medicine (Schork, 2019) and precision medicine tailored to individual patient needs (Mesko, 2017; Chang, 2020). Within healthcare organizations, AI may support improved productivity, workload, performance, teamwork, and satisfaction (Hazarika, 2020; Morley et al., 2020). Patients will increasingly experience new e-health (electronic health) applications in clinical settings (Lupton, 2017) and in their own homes, and mHealth (mobile health) applications in their daily lives (Torous et al., 2018). So, to explore how AI technologies might support compassion in healthcare systems, it is important to look more deeply at what compassion is.
Compassion has been described as a sensitivity to suffering in self and others, with a commitment to try to alleviate and prevent it (Gilbert, 2014). It is perceived to be an evolutionary survival feature of a social species, which promotes helpful caring behaviors in an interconnected field of social relations, steered by ethical values and social norms (Goetz et al., 2010; Gilbert, 2019). Compassionate behavior is modeled and learned through human interactions, such as parenting and teaching (Goetz et al., 2010). Compassion research demonstrates how the psychology of compassion in the mind (experiencing or witnessing helpful interactions) (Walter et al., 2015) affects the body, improves human health (Kim et al., 2009), and benefits societies (Seppälä et al., 2017). Yet compassion is under-conceptualized and underexplored in relation to AI technologies (Bloom, 2016; Kerasidou, 2020), as is the question of how AI technologies might be used to generate or enhance compassion (Day et al., 2021). Exploration of the associations between AI technologies and compassion has been hindered by two widely held assumptions: (1) that compassion is a felt emotion in the body produced through relationships and social dynamics, and (2) that technologies are objective and incapable of compassion (White and Katsuno, 2019) (again an assumption based on a view that compassion only involves felt emotion). Although it is debated whether AI can feel or express genuine empathy (Montemayor et al., 2021), compassion is different because it is a motivational caring system (Gilbert, 2019).
In healthcare contexts there is considerable interest in compassion for ethical and clinical reasons (Fotaki, 2015; Papadopoulos and Ali, 2016). Compassion is described as a “medical virtue” (De Bakey, 2018), a “virtuous response” (Sinclair et al., 2016a,b), or “intelligent kindness” (Gallagher and Wainwright, 2005). Compassion is an expectation of recruitment to healthcare jobs (Straughair, 2019); a component of ethical professional practice (Flores and Brown, 2018); an indicator of healthcare quality (Sinclair et al., 2017; Durkin et al., 2018; Clavelle et al., 2019; Thomas and Hazif-Thomas, 2020; Baguley et al., 2022); and a dynamic interactional experience that includes motivation, capacity, and connection (Uygur et al., 2019). Compassionate caregiving has been described as involving meaningful actions to alleviate suffering, meet individual needs, and prevent further suffering (Durkin et al., 2021). Compassionate behaviors (Straughair, 2019) are taught through pedagogy (Hendry, 2019), learning objectives (Lown, 2016; Sinclair et al., 2021; Wang et al., 2022), assessment (Lown et al., 2016), and skill sets such as reflective listening (Braillon and Taiebi, 2020; Su et al., 2021). Healthcare research has examined compassion from the perspective of: the predictors of compassion in healthcare professionals (Fernando and Consedine, 2014; Bleiker et al., 2020; Pavlova et al., 2022); how care environment and organizational culture affect compassion (Casagrande, 2016; Ali and Terry, 2017; Dev et al., 2019; Tehranineshat et al., 2019; Wiljer et al., 2019; Ali et al., 2022); compassion-maintaining strategies and interventions (Blomberg et al., 2016; Terry et al., 2017; Flores and Brown, 2018; Baguley et al., 2020; Hopkins et al., 2021; Malenfant et al., 2022); compassionate leadership (Dewar and Cook, 2014; de Zulueta, 2015; Lown et al., 2019; West et al., 2020); and regulation of compassionate caregiving (Harris et al., 2019; Pedersen and Roelsgaard Obling, 2019). Culturally and critically informed perspectives of compassion highlight that different societies, professional groups, cultures, and generations hold different expectations and views about compassion (Koopmann-Holm and Tsai, 2017; Sundus et al., 2020), which change over time (Salvador Zaragozá et al., 2021). Compassion has been described as a lens for critically considering the cultural and social significance of AI technologies and the different ways that such technologies may serve or disserve the societies that created them (White and Katsuno, 2019), including how technologies affect their users (Day et al., 2021).
In recent years some AI technologists and researchers have become interested in how AI technologies might demonstrate caring or be caring (De Togni et al., 2021). Artificial empathy refers to the coding of empathy into machines (Dial, 2018), whereby emotion recognition and display technologies are designed to sense users’ emotional states and/or show a sense of empathy toward them, e.g., giving life-like virtual agents the capability to mimic a user’s facial expressions. However, technologies that appear to be empathetic may not necessarily display genuine or authentic empathy (Montemayor et al., 2021). A machine capable of artificial compassion requires more than emotion recognition and expression (Mason, 2015). Artificial compassion refers to the steps that technologists may take to intentionally design adaptive responsiveness into technologies (Critchley, 2015), for example, building cognitive architecture (a control loop that the computer runs through) that follows Sense-{Think + Feel}-React (Mason, 2015, 2021, 2023). In this type of computing the ability to “think” and “feel” is made possible by connecting to external reference points, such as information in the cloud or other agents, to develop a form of socio-cultural intelligence (Mason, 2021). Not all technologies need these types of “in-built” compassion in their programming, but these developments in AI systems will influence societal systems.
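As a rough, hypothetical sketch only (the class and method names below are ours for illustration, not Mason’s published architecture), a Sense-{Think + Feel}-React control loop might look like this:

```python
# Hypothetical sketch of a Sense-{Think + Feel}-React control loop of the
# kind described above; names and logic are illustrative assumptions, not
# Mason's published design.
import random

class CompassionateAgent:
    def sense(self):
        # Capture an input signal, e.g., a user's utterance or facial cue.
        return random.choice(["neutral", "distress"])

    def think(self, signal):
        # Appraise the signal; in the account above, this step could consult
        # external reference points (cloud knowledge, other agents) to add
        # socio-cultural context.
        return {"needs_support": signal == "distress"}

    def feel(self, signal):
        # Generate an affective state label that modulates the response.
        return "concern" if signal == "distress" else "calm"

    def react(self, appraisal, affect):
        # Respond adaptively, combining the "think" and "feel" outputs.
        if appraisal["needs_support"]:
            return f"[{affect}] I'm here with you. Can I help?"
        return f"[{affect}] Carrying on."

    def step(self):
        signal = self.sense()
        return self.react(self.think(signal), self.feel(signal))

print(CompassionateAgent().step())
```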
This article draws on different fields of systems thinking (Dori and Sillitto, 2017) to explore the associations between the types of systems involved. That is, AI technologies as computational systems (e.g., machine learning, deep-learning, algorithms, network systems etc.); compassion as a motivational caring system (Motivational Systems Theory) (Ford, 1992); and healthcare systems as complex adaptive systems (Complex Adaptive Systems Theory) (Lansing, 2003; Levin, 2003). Motivation is thought to be at the heart of many of society’s and healthcare’s most pervasive and enduring problems (Ford, 1992) (e.g., the “care gap”). These perspectives enabled this review to explore issues about the way technologies are imagined and used, and their capabilities to alleviate suffering through compassion.
1.1. Rationale
Advances in AI technologies and research on compassion have seen significant development and progress in recent years. However, understandings about possible associations between AI technologies and compassion are emergent and under-conceptualized. It is unclear what types of AI technologies can be designed and used to enhance compassion in healthcare.
Understanding any associations between AI technologies and compassion is important in a western healthcare context characterized by numerous politicized issues about supply-demand challenges in healthcare associated with a clinically complex aging population (Tiersen et al., 2021), historical under-resourcing in some health services, and the COVID-19 crisis (Pagliari, 2021). These challenges have been described as a growing “care gap” (Davendralingam et al., 2017). There is also an apparent deficit or lack of compassion in healthcare systems: notions of the “compassion gap” (Trzeciak et al., 2017) or “crisis in caring,” with suggestions there is “empathy erosion” or an “empathy deficit” (Stenhouse et al., 2016). The Francis Report (Mid Staffordshire NHS Foundation Trust Public Inquiry, 2013) revealed sub-standard patient care and increased mortality rates in UK hospitals, showing the devastating effects of practicing medicine without compassion (Gray and Cox, 2015). Consequently, multiple “compassion cultivation” programs and initiatives such as empathy training have been implemented in health services and staff training (Davendralingam et al., 2017). Other related issues include “compassion fatigue” (Sheppard, 2015; Figley and Regan Figley, 2017), staff resilience and staff burnout (Stevenson et al., 2022), and the “pure hard slog” of caregiving roles (Bogossian et al., 2014). Issues about the human cost of emotional labor (Larson and Yao, 2005) are reflected in “compassionomics”: the study of the effects of compassionate healthcare for patients, healthcare systems, payers, and providers (Trzeciak et al., 2017). This context also includes issues about the prevalence of workplace discrimination and violence in healthcare (Greinacher et al., 2022), intention to leave (Greinacher et al., 2022), COVID-19 related “compassion collapse” (Hagman et al., 2022), as well as staff experiences of “compassion satisfaction” (enjoyment, reward, and passion for work) (Okoli et al., 2020; Baqeas et al., 2021; Qu et al., 2022; Unjai et al., 2022). Other research has investigated “compassion inequalities,” which refers to differentials in patient treatment and care associated with stigmatized health conditions such as opioid use disorder (Singh et al., 2021). These issues set an important but complex context for exploring how AI technologies might be used to address some of the real-world “caring problems” in healthcare systems.
Current conceptualizations of compassion are limited by the fact that they do not consider the possibility of AI technologies as tools for compassion (except for artificial compassion; Mason, 2021, 2023). Compassion science mainly focuses on the bodily (psychological and neurobiological) and behavioral elements of compassion (Kim et al., 2020; Goldberg, 2020) and the effects of oxytocin in the body (Brown and Brown, 2015; Palgi et al., 2016; Seppälä et al., 2017). There is growing evidence about self-compassion and compassionate touch interventions (Bond, 2002; Field, 2014; Serpa et al., 2021); self-care interventions (Ehret et al., 2015; Friis et al., 2016; Brown et al., 2020); professionals’ self-care, self-compassion, and compassion for others (Mills et al., 2017); and resilience in caring roles (Bleazard, 2020; Baqeas et al., 2021). Compassion is often used interchangeably with the notion of empathy (Håkansson Eklund and Summer Meranius, 2021), previously defined as a person’s ability to sense another’s thoughts, feelings, and experiences, to share the other’s emotional experience, and to react to the observed experiences of another person (Wieseke et al., 2012). However, compassion is different from empathy. Compassion refers not only to a sensitivity to suffering, but also to the commitment to try to alleviate and prevent it, i.e., a caring motivational system (Gilbert, 2019).
Understanding any potential of AI technologies to enhance compassion could help to respond to many different concerns about modern technologies in healthcare. These issues include the safe and ethical use of information and communication technologies as clinical devices (Lupton, 2017); “data entry burden” (Dragano and Lunau, 2020) and “digital tick-boxing” associated with electronic health records (Collier, 2017); information overload and “doctor as machine” (Padmanabhan, 2020); screen fatigue associated with telemedicine and device use (Alameddine et al., 2019); “digital distraction,” frequent prompts and interruptions to care that affect service safety and quality; “technostress” (La Torre et al., 2020), when technologies do not meet expectations, creating negative feelings or behaviors; the “disinhibition effect” (Terry and Cain, 2016) associated with online settings, which can include “cyberbullying” (Hutson et al., 2018); “digital exclusion” and “digital inequalities” (Crotty and Somai, 2022); maintaining human connection with mediated communication (Nieto et al., 2021); as well as concerns about the safe, ethical, and fair uses of AI technologies (Buolamwini, 2017; Figueroa et al., 2021; Martinez-Martin et al., 2021; Oszutowska-Mazurek et al., 2021; Suresh and Guttag, 2021; Tschandl, 2021; Schmidt et al., 2022).
1.2. Objectives
The objective of this scoping review was to provide a comprehensive, in-depth, and balanced perspective on the emerging topic of AI technologies and compassion, to inform future research and practice.
1.3. Approach
The approach was to undertake a scoping review of the topic using a recognized framework and process. We used the approach originally proposed by Arksey and O’Malley (2005), further enhanced by the work of Levac et al. (2010) and consolidated in the Joanna Briggs Institute (JBI) approach to the conduct of scoping reviews (Peters et al., 2020). Presentation of the scoping review conforms with the PRISMA-ScR (Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews) guidelines and its checklist of 20 essential items (Tricco et al., 2018) from the EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network.
Scoping reviews are useful for examining emerging evidence when it is still unclear what other, more specific questions can be posed for evidence syntheses and valuably addressed (Mays et al., 2001). Unlike a systematic review, scoping reviews do not tend to produce and report results that have been synthesized from multiple evidence sources following a formal process of methodological appraisal to judge the quality of the evidence (Peters et al., 2020). Rather, scoping reviews follow a systematic approach to map evidence on a topic and identify main concepts, theories, sources, and knowledge gaps (Tricco et al., 2018).
Five main stages of the review process (Arksey and O’Malley, 2005) were:
(1) identifying the research question.
(2) identifying relevant studies.
(3) study selection.
(4) charting the data.
(5) collating, summarizing, and reporting the results.
Each stage was informed by the team’s multidisciplinary expertise and understanding from fields of nursing, medicine, anthropology, health service research, AI strategy, and AI technology design. Our working methods were to use online meetings for discussions (via Microsoft Teams) supported by sharing files, articles, and comments (using Miro whiteboard and file share software).
Four review questions were developed to reflect the aims:
1. How is compassion discussed in relation to AI technologies in healthcare? For example, different schools of thought, controversies, or perspectives.
2. How are AI technologies being used to enhance compassion in healthcare? For example, professional practice, education and learning, clinical care, or health care delivery or outcomes.
3. What are the gaps in current knowledge and unexplored potential? For example, uncertainties, problematic concepts, or a lack of empirical research.
4. What are the key areas where AI technologies could support compassion in healthcare? For example, suggestions or claims for how AI technologies may support compassion in healthcare in the future.
1.4. Definitions and scope
To explore possible associations between AI technologies and compassion in healthcare a broad scope of the review was defined according to three concept constructs, explained below.
1.4.1. AI technologies construct
A comprehensive list of key terms for the searches was generated by drawing on existing definitions of AI (Russell and Norvig, 2020), subject indexing for artificial intelligence (National Library of Medicine), knowledge of the team (CM and MR), and search terms used in a previous review of AI technologies in mental health (Zidaru et al., 2021) and digital health interventions (Boucher et al., 2021; Table 1). The terms did not include issues or factors relating to digital health (Lupton, 2017), patient consent, data sharing, electronic health records (see de Zulueta, 2021), remote healthcare delivery, internet-based modes of health information delivery, digital health platforms, web-based health interventions, online health clinics, virtual visits/care, telemedicine, or telehealth.
Table 1 shows the key search terms that were developed for each construct and used in the electronic searches.
1.4.2. Compassion construct
As the aim of this review was to focus on the concept of “compassion” in relation to AI technologies, the compassion construct for the searches used the key terms most associated with compassion in the literature (“compassion” and “empathy”). Medical subject classification terms were not available for the term “compassion” (MeSH indexes compassion under Empathy), so key terms were identified from the literature on compassion (previously described in Section “1.1 Rationale”). We also decided to include terms for “self-compassion” and “compassion fatigue” to explore any association between AI technologies and these perspectives of compassion, which are important in healthcare. It was also important to develop a working definition of compassion to support the screening and thematic analysis stages of the review, by drawing on existing literature on compassion. Although diverse perspectives and understandings of compassion exist, there is a degree of commonality around the notion of compassion as a prosocial/caring motivational system (Seppälä et al., 2017; Leffel et al., 2018; Gilbert, 2019). Expert consensus is that compassion has five component elements (Strauss et al., 2016): (1) recognizing suffering, (2) understanding the universality of suffering in human experience, (3) emotionally connecting with the person in distress, (4) tolerating uncomfortable feelings so that we are able to help, and (5) being motivated to act or acting to help/alleviate suffering (Gu et al., 2017). The present review drew on these understandings to create a working definition of compassion as involving:
(1) Awareness of suffering (e.g., pain, distress, risk, disadvantage).
(2) Understanding the suffering (significance, context, rights, responsibilities etc.).
(3) Connecting with the suffering (e.g., verbal, physical, signs and symbols).
(4) Making a judgment about the suffering (the need to act).
(5) Engaging in a behavior with the intention to alleviate the suffering.
Within this working definition, “suffering” is used to include notions of pain, distress, risk, and disadvantage in healthcare contexts (e.g., physical or mental pain), as well as more broadly to include suffering associated with risks to health, hardship, social disadvantage (social determinants of health) (Braveman and Gottlieb, 2014), barriers to healthcare (Powell et al., 2016), and health inequalities (Scambler, 2012). This definition acknowledges that health and suffering extend beyond the provision of clinical treatment and clinical care, e.g., through actions to protect human rights, minimize risk to human lives, or promote health equality.
1.4.3. Healthcare construct
Healthcare was defined as a complex adaptive system: a complex dynamic network of interactions that might not always be predictable or perceivable to those within it (Cilliers, 1998). It is adaptive in that individual and collective behavior can alter and self-organize in response to internal or external micro-events or combined events (Lansing, 2003). Thus, the approach to the searches was to use broad key word terms (“health care,” “healthcare,” “health-care”) as a strategy to include articles relating to any group of health professionals, different settings/fields (e.g., primary, acute, intermediate care, care homes, educational settings), and all groups of patients and carers. Different forms of the term “healthcare” are used in the literature and internationally, so variations of the term (i.e., single word, phrase, hyphenated) were used to ensure the searches could retrieve all relevant articles.
The AI technologies construct is defined in more specific terms than the compassion and healthcare constructs. This is because the concept of compassion is itself complex, in that multiple understandings, perspectives, and definitions of the term exist. Thus, in this review we needed specificity, focusing on the concept of “compassion” to perform a meaningful exploration of how this concept is understood and used in relation to AI technologies. It was appropriate to use a general healthcare construct, a very broad definition, to set a wide context for the searches. Thus, the construct covers healthcare systems and health service organizations, as well as treatment and care provided by healthcare professionals.
2. Materials and methods
In accordance with PRISMA-ScR guidelines on the presentation of scoping reviews the methods explain the eligibility criteria, information sources, search strategy and selection of sources of evidence (inclusion/exclusion criteria), key search terms, data charting process, data items, critical appraisal, synthesis of results, reliability, and rigor. A review protocol was not developed or published for this scoping review, which follows JBI methodology (Peters et al., 2020).
2.1. Eligibility criteria (inclusions/exclusions)
The review is inclusive of all literature published in the English language (articles written in other languages were included if published translations were available). Owing to the newness of the topic, we limited the review to articles published in the last 10 years (2011–2022).
Inclusion/exclusions were:
• Articles published in English between 2011 and the date of the searches (August 2022) were included.
• Included articles were research articles (using any type of study design or research method), evaluations or design studies, discussion/commentary, case studies, and conference/symposia papers. Comments on articles were excluded.
• Publication status included articles published early online or online only. No unpublished articles were included.
• Included articles described or closely related to the design, implementation, use, views, or perception of AI technologies (as defined above). Articles relating to “non-AI” technologies (e.g., electronic health records, information communication technologies, social media, online simulation training) were excluded.
• Included articles related to compassion (according to the key search terms above). Other related concepts and terms (dignity, sympathy, kindness, altruism, solidarity) were not included.
• Included articles related to healthcare contexts (any healthcare settings, health professional groups, patient or client groups, students in training), any type of healthcare interventions or practices including self-compassion. Articles outside of healthcare contexts were excluded (i.e., animal health, farming, engineering, architecture, meteorology).
2.2. Search process
Preliminary searches were undertaken (using Google search) in September–December 2021 to inform the review topic and questions. The final searches were conducted in August 2022. Information sources were (1) Web of Science (Science Citation Index, Social Sciences Citation Index, Arts and Humanities Citation Index, Conference Proceedings Citation Index, Book Citation Index, Emerging Sources Citation Index, covering over 12,000 high impact journals) and (2) PubMed (covering biomedical literature from MEDLINE, life science journals, and books). These sources were chosen because they index extensive health and healthcare research journals as well as computing, data science, information technology, and design sciences. No other sources were used, as the low specificity of the searches would have rendered an unfeasible number of returns for screening (Peters et al., 2020). The searches were performed by two experienced researchers (EM and TZ).
Table 2 presents summary information about the electronic searches and results for the Web of Science and PubMed databases. The table sets out how the terms within each construct were combined using OR and how the constructs were combined using AND. The much larger number of articles returned by Web of Science for the AI technologies construct reflects the scope of this database beyond medicine and healthcare. The compassion construct and the healthcare construct retrieved similar numbers of articles for both databases.
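As a minimal, hypothetical illustration of this boolean structure (the term lists below are abbreviated examples only; the full lists are in Table 1 and the exact database syntax in Table 2):

```python
# Hypothetical sketch of the boolean structure described above: terms within
# each construct are combined with OR, and the three constructs are combined
# with AND. The term lists are abbreviated illustrations, not the review's
# full search strings (see Table 1).
ai_terms = ["artificial intelligence", "machine learning"]
compassion_terms = ["compassion", "empathy", "self-compassion", "compassion fatigue"]
healthcare_terms = ["health care", "healthcare", "health-care"]

def or_group(terms):
    # Quote each term and join with OR, wrapped in parentheses.
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = " AND ".join(
    or_group(group)
    for group in (ai_terms, compassion_terms, healthcare_terms)
)
print(query)
```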
2.3. Screening
A total of 3,124 articles were identified (Web of Science 1,312 articles, PubMed 1,812 articles). The screening process was to systematically assess eligibility of each article by reading the title and abstract of all returned articles and applying inclusion/exclusion criteria. If articles were considered eligible for inclusion the full article was accessed online. A record of the reasons for exclusion of articles was maintained to support rigor and reliability. Figure 1 illustrates the screening process and information about the article type of the 197 included articles.
2.4. Data charting
Data from included articles were extracted to bespoke data tables (using Microsoft Word) designed to hold data about the article and content items relating to the four review questions. Tables were piloted with 10 articles; small adjustments to headings and formatting were made. Categories of data extracted were: (1) Reference: author/date of publication (and citation); (2) Type of article (Categories I–VI, see below); (3) Aim/Context (e.g., healthcare issue/setting); (4) Key relevant findings; (5) Country (based on first author). Data charting was done by one experienced qualitative researcher (EM). No other processes were used to obtain or confirm data from investigators/authors. Key findings/points from articles were identified from abstracts and/or full texts where relevant to the review questions. As this was a scoping review, not a systematic review of research evidence, selection of key information did not give weighting to research articles or aim to combine value outcomes from research studies. Information about research methods and participants was captured when relevant to contextualize key findings. For design studies, information about specific AI technologies was captured where available.
2.5. Analysis
The analysis of included articles used a basic assessment of quality based on article type. During charting of the data, article type was coded (Article Type: I-Systematic review, II-Research study and study protocol, III-Review of literature/policy/practice, IV-Discussion chapter/report/opinion piece, V-Conference paper/workshop/symposia, and VI-Design study and service improvement) to gain an overview of the data, not to make judgments about research quality or to combine evidence of outcomes. Thematic analysis used an inductive-deductive approach (Mays et al., 2001) to generate categories from the review questions and the data itself. The thematic analysis began with “familiarization” to build up an understanding of the nature and content of included articles, followed by “identification” of emerging themes during the process of data extraction, and “synthesizing” of key findings or issues. A multidisciplinary team assessed 52 emerging themes for resonance and relevance to research and practice, which were refined into 24 themes.
2.6. Reliability and rigor
A multidisciplinary team ensured that the topic and focus of the review had resonance with the challenges and problems in their areas of practice. The review used an established review process (Arksey and O’Malley, 2005). Reliability of the search process was supported by using defined search terms and two robust sources of data for a comprehensive search of published literature (Peters et al., 2020). Rigor of screening was supported by using defined inclusion/exclusion criteria, consistency of screening decisions, and maintaining a record of the reasons for exclusion (detail in Figure 1). A record of duplicates within and between data sources was maintained. Rigor in the identification of themes (Mays et al., 2001) was supported by team discussions and reflections on the resonance and meaning of emerging themes and their relevance to the review questions. For transparency, information about all included articles is provided (Supplementary Appendix 1).
3. Results
3.1. Overview of the included literature
3.1.1. Included articles
The searches identified 3,124 potentially relevant articles. All were assessed for eligibility based on titles/abstracts. A total of 197 relevant articles (6.3% of the 3,124 screened) were included (literature tables in Supplementary Appendix 1).
3.1.2. Year of publication
The number of articles increased steadily over 10 years: 2011 (n = 1), 2012 (n = 1), 2013 (n = 3), 2014 (n = 2), 2015 (n = 4), 2016 (n = 6), 2017 (n = 15), 2018 (n = 15), 2019 (n = 34), 2020 (n = 32), and 2021 (n = 47). A further 35 articles were published from January to August 2022.
3.1.3. Article types
Approximately a third of the articles were categorized as VI-Design studies (n = 61, 30.9%) (concept development, proof of concept, design evaluation, and service improvement). Nearly a third were II-Research studies (n = 56, 28.4%) (including experimental/intervention studies, qualitative research, survey research, mixed methods, exploratory, pilot, and feasibility studies). Other categories were IV-Discussion (chapter, commentary, perspective, and opinion piece) (n = 45, 22.8%), III-Reviews (integrative review, narrative review, literature review, and scoping reviews) (n = 26), V-Conference (paper, symposia, workshop) (n = 8), and I-Systematic review (n = 1).
3.1.4. Article country (first author)
A third of the articles were from the United States (65 articles, 32.9%), followed by the United Kingdom (24 articles, 12.1%), Canada (17 articles, 8.4%), the Netherlands (10 articles, 4.9%), Australia (10 articles, 4.9%), New Zealand (10 articles, 4.9%), Germany (6 articles), Japan (6 articles), Italy (5 articles), and Taiwan (4 articles). Articles from other countries (40 articles, 19.9%) comprised: 3 articles each from France, India, the Republic of Ireland, Spain, Sweden, and Switzerland; 2 articles each from Bangladesh, Korea, Norway, Pakistan, the Philippines, and Singapore; and 1 article each from Belgium, China, Denmark, Finland, Greece, Lebanon, Malaysia, Qatar, the Republic of Korea, and Romania.
3.1.5. Research articles
The review identified one systematic review (Bevilacqua et al., 2020) on personal health coaching. Of the 56 articles classified as II-Research study, more than two-thirds used intervention or experimental methods (n = 42, 75.0%), including one randomized controlled trial (RCT) in an educational setting (Johanson et al., 2019). Eleven were qualitative studies (interviews, survey research, focus groups, consensus building approaches, ethnography). Other methods were mixed methods (n = 1), feasibility study (n = 1), and pilot study (n = 1).
3.1.6. Frequency of key words
Articles most frequently mentioned the term “empathy” (113 articles). About a fifth of the articles used the term “compassion” in their title or abstract (41 articles). Few articles used the term “self-compassion” (10 articles) or “compassion fatigue” (2 articles).
3.1.7. Themes in the data
There are four overarching themes relating to the review questions, comprising 24 themes, as illustrated in Table 3.
3.2. Developments and debates
3.2.1. Concerns about AI ethics, healthcare jobs, and loss of empathy (25 articles)
This was the strongest theme of the literature and conveyed manifold concerns about AI ethics and regulation (Zelmer et al., 2018; Abdullah et al., 2021); ethical design and use of AI technologies in healthcare contexts (Sikstrom et al., 2022); concerns about data privacy, data biases, and data collection (Harris, 2021; Ostherr, 2022); as well as concerns about trust, care quality, and liability (Davenport and Kalakota, 2019; Sanal et al., 2019). There is a strong anticipation perspective relating to concerns about role replacement (Johnston, 2018; Blease et al., 2019; Bridge and Bridge, 2019; Powell, 2019; Blease et al., 2020; Doraiswamy et al., 2020; Alrassi et al., 2021) and which parts of healthcare practice can and should be entrusted to AI technologies (Loftus et al., 2020; Nadin, 2020). Discussions of role replacement emphasize the enduring role of critical human attributes for safe and effective healthcare (Joda et al., 2020; Irfan, 2021). Speculation about the replacement of nurses with robot nurses has led to theoretical development on the interrelationship of technological competency as caring, and acknowledgment that AI technologies are already fundamental to the delivery of high-quality healthcare (Locsin, 2017; Buchanan et al., 2020). Research on patients’ views about future uses of AI technologies echoes professionals’ concerns regarding trust, communication, regulation, liability risks, cyber-security, accuracy, and loss of human empathy toward patients (Slomian et al., 2017; Esmaeilzadeh et al., 2021; Raja et al., 2021; Zhang et al., 2021; Visram et al., 2022).
3.2.2. Human-centered design of AI technologies for healthcare (16 articles)
The second strongest theme of this literature reflects broader debates about design ethics and using human-centered design (HCD) approaches to generate empathetic technological responses to health needs (Portz et al., 2020). In HCD processes, designers are understood to gain empathetic understanding by working closely with end users, such as stroke patients, to co-design AI technologies to support health and recovery (Willems et al., 2021). User-centered participatory design methods (e.g., interviews, workshops, trials of prototypes) narrow the gap between designers and users by supporting inclusion and engagement in the design process (Hou et al., 2020; Tiersen et al., 2021). For example, user-centered research with 15 people after stroke led to the idea and creation of a character, “Stappy,” as a meaningful interface to support empathy in the use of a sensor-feedback system that enables stroke patients to walk (Jie et al., 2020). Research using co-design methods with young people with type 1 diabetes exposed a radically different view of technology than that of either their parents or practitioners, illustrating the need to involve target end-users in design (Pulman et al., 2013). This literature suggests HCD supports compassion in healthcare by creating methods and opportunities for inclusion in the design of technologies that address real and significant needs in people’s lives (McCarthy et al., 2020; Majid et al., 2021), as well as promoting trust that empathy will be preserved and acceptance of new AI technologies in a healthcare space (Zhang et al., 2021). HCD to develop an electronic crutch for paralyzed people has been described as a humanitarian project designed with empathy for patients in mind (Sarkar et al., 2020). HCD informs humanitarian applications of AI technologies (Fernandez-Luque and Imran, 2018); the design of “positive technology” to generate motivation and engagement (Riva et al., 2016); and “transformative technologies” to facilitate positive, enduring transformation of the self-world for the benefit of health and wellbeing (Riva et al., 2016). HCD embeds compassion within AI technology design through recognizing and engaging with human suffering, now or in the future (i.e., maintaining health) (Fritzsche et al., 2021); activities to co-design technological solutions that have utility and value for users (Mirkovic et al., 2018; Raman and McClelland, 2019); and ethical attention to when technology might not be a suitable solution (Pulman et al., 2013).
3.2.3. Optimistic speculation AI technologies will address care gaps (12 articles)
There is hope in this literature that AI technologies can preserve the “spirit” of the welfare state and the principles of risk-sharing and equal access to care for all (Weil-Dubuc, 2019). Literature on social robots argues for the potential social utility of robots as treatment providers, custodial caregivers, social assistants, and home companions (Pedersen et al., 2018). Health professionals are hopeful that e-mental health technologies may offer a solution to the growing problem of unmet mental health needs, provided that human-centered principles are maintained (Strudwick et al., 2020). VR technology and research on implicit bias are perceived to be tools to address bias, prejudice, cultural insensitivity, eroding levels of empathy, and social disparities of health (Jones-Schenk, 2016). In these discussions there is a collective aspiration for AI to reflect human wisdom in the provision of more compassionate (Lee et al., 2021; Ali et al., 2022) and “compassionomic” solutions to healthcare (i.e., safe and cost effective) (Trzeciak et al., 2017). Other expressions of optimism relate to the hope of improvements in service efficiency and quality (Blease et al., 2019; Kemp et al., 2020); entrepreneurial opportunities (Shepherd and Majchrzak, 2022); and the design of AI technologies that can encourage collective good and increase prosocial behavior (Day et al., 2021).
3.2.4. Interrogation of what it means to be human and to care (11 articles)
Discussion of the complexity of interwoven “gossamer threads” of disparate, conflicting information about technologies in society raises questions about human development and empathetic response (Bjorklund, 2016). Research on transhumanism and posthumanism has explored the idea of self, soul, and human consciousness and what makes humans human (Fleury-Perkins and Paris, 2019; Ajeesh and Rukmini, 2022). Suggestions that AI and humans can create harmonious bios built on bioethical human properties, attitudes, and virtues (Sass, 2014), have been expressed creatively in medical arts with particular emphasis on preserving, or indeed enhancing, “3Cs” of communication, compassion, and competence (Yaghy et al., 2019). Research into VR simulation-based training suggests such technologies are valuable for cultivating humanization competencies (Jiménez-Rodríguez et al., 2021) and assessing professional moral actions (Francis et al., 2018). Authors have argued that techno-commercial motives are discordant with professional-relational foundation for care (Andersson et al., 2017); that AI technologies could fundamentally alter the way in which empathy, compassion and trust are currently regarded and practiced in healthcare (Kerasidou, 2020); and that failing to understand difficult to quantify human inputs into AI-based therapeutic decision-making processes could lead to important errors in clinical practice (Brandt et al., 2018; Kerr and Klonoff, 2019).
3.2.5. Recognition of future potential for patient monitoring, virtual proximity, and access to healthcare (10 articles)
Studies of healthcare professionals show they value the capabilities of AI technologies for remote monitoring of patients’ physical and mental health status, and the advantages of virtual proximity for maintaining compassionate connection (Montayre, 2018; Walker et al., 2020). Although technologies are being developed for remote monitoring of older people in their own homes (Yokoo et al., 2020), little is known about clinical effectiveness or patient outcomes (Bouabida et al., 2021). Virtual proximity is also recognized as a benefit of interventions for mental health that use ecological momentary interventions (EMIs), a specific type of mobile health that enables patients to access interventions in each moment and context of daily life, called a “therapist in your pocket” approach (Schick et al., 2021), such as encouraging physical activity in blue-green spaces (HEART by BioAssist) (Gallos et al., 2022). Automated topic modeling is recognized as being useful for personalized digital self-compassion interventions and for overcoming barriers to traditional care (van der Lubbe et al., 2022). eHealth coaching for older people’s self-management may have benefits for addressing unmet need in mental health services (Bevilacqua et al., 2020). There is optimistic debate concerning the potential to increase access to health information and advice using widely available conversational agents (such as Apple Siri, Google Assistant, Amazon Alexa, and Microsoft Cortana) if the safety and effectiveness of these systems can be improved (Kocaballi et al., 2020a). Embodied conversational agents (ECAs) (i.e., lifelike virtual humans) could have potential for engaging and motivating users for health-related learning and behavioral change (Scholten et al., 2017).
3.2.6. Calls for curricula development and healthcare professional education (5 articles)
There is a sense of urgency in this literature to teach health professionals essential digital skills and overhaul curricula (Konstantinidis et al., 2022), as well as to introduce AI technologies into educational environments in safe and effective ways that address risks and responsibilities (Combs and Combs, 2019). Examples include the opportunities and implications of using standardized virtual patients (VPs) (Gavarkovs, 2019), patient clinical scenarios (Yang et al., 2022), and digital simulations (Patel et al., 2020).
3.2.7. Implementation of AI applications to enhance health and wellbeing of the healthcare workforce (2 articles)
Few studies have investigated the use of AI technologies for healthcare provider wellbeing, but there does appear to be a link between provider wellbeing and compassion for staff and patients. Examples are music virtual reality for healthcare workers (Hayakawa et al., 2022) and The Provider Resilience app for mental health care providers (Wood et al., 2017).
3.3. How AI technologies enhance compassion
This section of the results presents themes relating to applications and uses of AI technologies to enhance compassion in healthcare, as reported in the literature. Ten themes are presented in order of their strength in the literature (the number of articles mentioning the issues, not the number of technologies).
Table 4 maps the themes in this section to the working definition of compassion (the five elements defined in Section “1.4 Definitions and scope”). Organizing the themes in this way, Table 4 highlights the different ways that AI technologies are associated with compassion in healthcare. No articles or studies were found in this literature that map onto the compassion element “(4) Making a judgment about the suffering (the need to act).” It is unclear why, but this lack of discussion could reflect assumptions about clinical judgment (i.e., perceived to be an objective assessment) and compassion (i.e., perceived to be a felt emotion) in healthcare. As a result of the findings in this section, and the mapping work, the wording of element five has been altered from the working definition (“Engaging in a behavior with the intention to alleviate the suffering”) to “Responding with an intention to alleviate the suffering.” This change marks a move away from only perceiving compassion in terms of a human behavioral response to suffering, toward a broader understanding of compassion as a system, as well as the possibility of an AI or human response that is not behavioral, i.e., not only visible acts of caring but also digital empathetic responses, or the provision of health information, advice, or coaching by AI technologies. An additional sixth element of the compassion construct emerged from this analysis of the literature (“Attention to the effect and outcomes of the response,” illustrated by the final row in Table 4). The implications of this additional element for the reconceptualization of compassion are discussed later (Section “Reconceptualizing compassion as a human-AI system of intelligent caring”).
3.3.1. Empathetic awareness (15 articles)
In this literature compassion and AI technologies are most strongly associated with generating empathetic awareness in humans and robots. There is good evidence that immersive VR experiences that simulate patient experiences of illness can help healthcare professionals to understand what it is like to have a specific disease or health need (Brydon et al., 2021; Demarinis, 2022), which may translate into empathetic response or relational behaviors. For example, nursing students who virtually experienced the conditions of perioperative patients through VR blended learning showed increased levels of empathy, positive attitudes toward patient safety treatment, confidence in nursing care, and improved clinical skill performance (Kim and Chun, 2022). Multiple evaluation studies into the effects of immersive simulation for dementia suggest that a VR experience can simulate a range of aspects of dementia so that students can develop empathetic understanding (Ball et al., 2015; Slater et al., 2019; Hirt and Beer, 2020; Sung et al., 2022). However, research in the US indicates that although both VR and physical delivery formats of a dementia tour can be effective, university students on healthcare courses (n = 41) reported poorer attitudes about living with advanced dementia and feeling less prepared for caregiving in both approaches (Torrence et al., 2022). Interesting research in Romania (Groza et al., 2017) and Germany (Schmidt et al., 2022) into the use of age simulation suits shows that “instant ageing” can generate more negative expectations regarding older age and reinforce stereotypes. Tele-empathy is a promising emerging field in which clinicians and carers can get a sense of what the patient is experiencing physically, such as tremors in Parkinson’s disease (Palanica et al., 2019). Research on serious games for medical education (The Delirium Experience) shows certain game features, such as being able to “switch perspective,” can enhance medical student empathy when the game is played from the patient or nurse perspective (Buijs-Spanjers et al., 2019; Ma et al., 2021). Experiments in the Netherlands on a serious game for care workers for people with disabilities (The world of EMPA) showed participation did not enhance empathy for disabled people but did decrease personal distress in care workers (Sterkenburg and Vacaru, 2018). In robotics, experiments in Japan (Pepper robot) (Tanioka, 2019; Tanioka et al., 2019, 2021) identify the need to develop “listening” and “gaze,” together with the fidelity of responses, to mimic empathetic awareness.
3.3.2. Empathetic response and relational behavior (12 articles)
In theory, AI technologies cannot feel or express genuine empathy, hence “empathy*” has been suggested as a term to differentiate real empathy from artificial empathy (Montemayor et al., 2021). Nonetheless, empathy display and relational behavior are significant research themes in dialog systems development and robotics (Kennedy et al., 2012; Liu and Sundar, 2018; Pepito et al., 2020; Kerruish, 2021). Studies with patients have shown that most people prefer medical assistant chatbots that mimic empathy (Amini et al., 2013; Liu and Sundar, 2018; Daher et al., 2020); this is particularly true for users who are initially skeptical about machines possessing social cognitive capabilities. However, research in Korea (Yun et al., 2021) shows a discrepancy between expressed behavioral intentions toward medical AI and implicit attitudes (detected in brain scans): people respond differently to the same conversation depending on whether it is delivered by a human doctor or a medical AI. Other research has modeled an empathetic voice for healthcare robots, showing that people prefer robots with an empathetic voice (James et al., 2021). A study of service robots for people with disabilities showed that users perceive robots as being able to stimulate and regulate emotions by mimicking cognitive and behavioral empathy, but unable to express affective and moral empathy, which they felt was essential for the feeling of “being cared for” (Kipnis et al., 2022). Analysis of human empathy toward a therapeutic zoomorphic robot (Paro) and a health care support robot (Care-O-Bot) draws attention to how the cultivation of user empathy toward robots influences patient sociality and relational interactions between human care providers (Kerruish, 2021).
3.3.3. Communication skills (12 articles)
Artificial intelligence (AI) technologies are associated with compassion through helping to improve health professionals’ verbal and non-verbal communication skills (Wu et al., 2017; Guetterman et al., 2019), for example, breaking bad news to a virtual human program (Guetterman et al., 2017) and communicating with suicidal virtual patients (Yao et al., 2020). Students who engaged in a 90-min simulation with a standardized patient (SP) and a language translation app (LTA iTranslate Converse) rated the teaching unit as excellent but wanted prior practical training with an SP and a simulated human translator on how to maintain empathy in patient-physician communication mediated by the LTA (Herrmann-Werner et al., 2021). Online virtual worlds (VW) (such as Second Life, Altspace, Rec Room, Google Earth VR) are rapidly becoming part of everyday life for children and adults (in 2020 Roblox had 150 million active users), and VWs have been used to improve patient-centered communication skills and student teamworking (Mitchell et al., 2011; Wu et al., 2019). A scoping review of virtual environments (VE) for clinical communication skills (Sanders et al., 2021) suggests multiple uses for enhancing clinicians’ communication and empathy skills, as well as utility for communication research purposes. Evidence on effective doctor-patient communication has been applied as principles for robot-patient communication (Broadbent et al., 2018) and empathy display/facial expression (Milcent et al., 2021; Kovalchuk et al., 2022), to increase human engagement and attention in research contexts (Johanson et al., 2019).
3.3.4. Health coaching (11 articles)
There is a strong association between AI technologies (i.e., virtual coaches and health promoting chatbots) and compassion in health coaching to encourage and motivate positive health-related behavior change, such as physical exercise (Kennedy et al., 2012; Bevilacqua et al., 2020), smoking cessation (He et al., 2022), weight-loss (Stein and Brooks, 2017), self-management of depression (Inkster et al., 2018), and chronic disease self-management (Hernandez, 2019). An interesting experiment with a self-compassion chatbot (Vincent) (Lee et al., 2019) revealed that participation in self-compassion exercises enhanced self-compassion, particularly when participants were asked to care for the chatbot itself (versus the chatbot caring for them). In Italy, a chatbot designed for older adults (Charlie) (Valtolina and Hu, 2021) can alert users to health commitments and medicines, connect them remotely with doctors and family, and entertain and assist older adults using motivational strategies based on gamification, active notifications, and the promotion of self-compassion and preventive mental healthcare. Virtual health coaches can improve self-compassion by incorporating established therapeutic methods to remodel thoughts, change behaviors, and enhance relationships with self and others (Stenberg et al., 2015; Rodgers et al., 2018; Boggiss et al., 2022).
3.3.5. Therapeutic interventions (8 articles)
The literature suggests that an association between AI technologies and compassion occurs through therapeutic interventions. Interesting examples include: doll and robot therapies for Alzheimer’s disease, autism spectrum disorder, stress, or depression, which can evoke different verbal, motor, and emotional reactions in patients (Márquez Sánchez et al., 2020); assistive robots for daily-care activities, health-promoting behaviors, and companionship (Law et al., 2019); VR perspective-switching to treat young people with mental health problems (Baghaei et al., 2019); avatar-based VR therapy to develop empathetic understanding in a therapeutic community prison in the UK (van Rijn et al., 2017); and the use of an intelligent assistant for psychiatric counseling (Oh et al., 2017). In one study, social cognition training for adults with autism spectrum disorder (ASD) was perceived to be useful but lacking ecological validity (authenticity to real world triggers and situations) (van Pelt et al., 2022). Immersive VR therapy has exploited the known effects of identification with a virtual body to overcome self-criticism in healthy women (Falconer et al., 2014). Another study (The SafeSpace study) co-designed and tested a VR intervention for cancer patients that incorporates relaxation and compassionate mind training to enhance feelings of wellbeing (O’Gara et al., 2022).
3.3.6. Moral development learning (8 articles)
Artificial intelligence (AI) technologies (VR applications) can support compassion through moral development learning in accordance with ethical standards (Wartman, 2019; Wartman and Combs, 2019). For example, by enhancing participants’ cultural competencies and anti-discriminatory communication practices (Roswell et al., 2020); promoting understanding of social determinants of health (the social, physical, and economic conditions that impact upon health) (Gillespie et al., 2021; Brammer et al., 2022); and facilitating the safe investigation of simulated moral actions in aversive moral dilemmas (Francis et al., 2018). Interactive game-based VR immersions and VR simulations have been shown to heighten health professionals’ awareness and cultural sensitivity to health equity issues (Amini et al., 2021; Hershberger et al., 2022).
3.3.7. Clinical knowledge and clinical assessment (7 articles)
The literature suggests that AI technologies can support compassion by helping health professionals to understand and respond to human suffering. Specific examples include immersive VR training on psychological symptoms of dementia (Jones et al., 2021), VR training using vignettes for Parkinson’s disease (Hess et al., 2022), and VR training for testicular disease (Saab et al., 2022). However, the student engagement and perceived learning associated with immersive learning may not translate into better exam scores or clinical skills (Jacobs and Maidwell-Smith, 2022) without sufficient preparation or teaching support (Saab et al., 2022). Another emerging field is digital anatomy, which uses digital replicas of historic specimens to foster understanding and empathy through discussion of ethics, bias, and social aspects of health and disease (Osis, 2021). Students’ understanding of pain can be assessed by using facially expressive robotic patient simulators (Moosaei et al., 2017). AI technologies are also being developed to support clinical assessment. Examples include trials in Japan to develop automated health and mood assessment systems (motion sensors and human emotion detection connected via the internet of things) to assess older adults in home settings (Yokoo et al., 2020), and technology development in Canada (automated video capture and spatial-temporal analysis) to accurately predict clinical scores of parkinsonism (Sabo et al., 2022).
3.3.8. Healthcare quality assessment (6 articles)
In the literature, AI technologies are associated with compassion through automated healthcare quality assessment. Specific examples are: the use of natural language processing on patients’ online social media comments to capture service feedback from diverse groups of service users (Doing-Harris et al., 2017; Rahim et al., 2021); automated analysis of patient and family feedback captured by interactive patient care technology in hospitals (Clavelle et al., 2019); and a large-scale network study of online health communities to inform future policy interventions for patients’ self-care (Panzarasa et al., 2020). At the clinical level, automated evaluation of psychotherapy skills using speech and language technologies can augment experts’ capabilities in training, supervision, and quality assurance of services (Xiao et al., 2015; Flemotomos et al., 2022).
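To give a concrete flavor of this kind of automated feedback analysis, the minimal sketch below classifies free-text patient comments as positive or negative using a standard bag-of-words pipeline. It is purely illustrative and is not drawn from any of the cited studies: the example comments are invented, and real systems are trained on large labeled corpora and typically extract finer-grained themes than a binary sentiment label.

```python
# Illustrative sketch only: a minimal patient-feedback classifier.
# The cited studies used their own corpora and methods; the example
# comments below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-labeled training set (1 = positive experience, 0 = negative).
comments = [
    "The nurse listened carefully and explained everything",
    "Staff were kind and checked on me often",
    "I waited hours and no one told me what was happening",
    "The doctor seemed rushed and dismissed my concerns",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(comments, labels)

# Score new feedback, e.g., comments gathered from an online service.
new_feedback = ["Nobody explained my medication", "Reception staff were very caring"]
for text, pred in zip(new_feedback, model.predict(new_feedback)):
    print(text, "->", "positive" if pred else "negative")
```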
3.3.9. Therapeutic bond and therapeutic alliance (5 articles)
Artificial intelligence (AI) technologies are associated with compassion through extending or enhancing human and digital therapeutic bond and therapeutic alliance (Lindner, 2021). For example, a study of a cognitive behavioral therapy conversational agent (Woebot) demonstrated therapeutic bond scores comparable to those of traditional therapy within 5 days of initial app use (Darcy et al., 2021). Automated VR exposure therapies can improve the adherence and efficacy of self-guided treatments (Miloff et al., 2020) and address challenges of asynchronous feedback in traditional care (Brandt et al., 2018). Learning from persuasive/positive technology and human-app attachment can potentially help to foster a sense of empathy, build tasks and goals, and develop bonds and digital therapeutic alliance (Tong et al., 2022). Medical AI carebots can overcome barriers to care and adherence to treatment for people who experience stigma (the concept of beneficent dehumanization of care) (Palmer and Schwan, 2022).
3.3.10. Providing health information and advice (3 articles)
Artificial intelligence (AI) technologies can support compassion by providing health information and advice, but the evidence of the effectiveness of specific technologies is underexplored. Commonly available conversational agents (e.g., voice commands on smartphones) are currently limited in their ability to pick up on conversational cues for health needs and to effectively advise on health safety or lifestyle prompts (Kocaballi et al., 2020b). A web app can replicate cancer library functions, but with limitations associated with explaining information and supportive care (Papadakos et al., 2017). Radiology patients perceived AI-generated diagnosis information to be useful for confirming the doctor’s opinions and preparing for the consultation, but saw AI technology as having drawbacks relating to cyber-security, accuracy, and lack of empathy toward patients (Zhang et al., 2021).
3.3.11. Gaps in knowledge
This section of the results presents themes relating to gaps in knowledge and underexplored potential of AI technologies as described in the literature.
3.3.12. Educational effectiveness of AI-assisted learning (11 articles)
This theme in the literature reflects an undercurrent of uncertainty about the effectiveness of specific types of AI technologies in health professional education contexts (Jones et al., 2021; Sukhera and Poleksic, 2021), as well as the possible negligible benefit (Navarrete et al., 2021) or loss of benefits associated with replacing existing educational methods with technologies [such as the benefits of involving real patients in teaching, as described by Abeyaratne et al. (2020)]. The issue is not that technologies cannot generate empathy in some groups of learners, but that empathy might not translate into longer-term prosocial caring behaviors in healthcare systems (Gillespie et al., 2021; Beverly et al., 2022). For example, VR dementia training may not benefit all learners, and VR may differentially assist learners of different ages and English-speaking backgrounds, suggesting that more research is needed to understand for which variables and for whom VR is a useful teaching tool (Jütten et al., 2018; Stargatt et al., 2021). VR provides a small snapshot of the vicissitudes of living with an illness or disability, which might leave a false impression of what patients “like that” feel (Dean et al., 2020). Other types of technologies, such as less standardized (more complex and diverse) virtual patients (Shorey et al., 2019) or digital anatomy (Osis, 2021), might inform professional training and enhance student learning or empathy more effectively, but this is unknown. Learning technologies that have “point-of-view” functions may enable students to see issues from different perspectives (Levett-Jones et al., 2017) and to appreciate diverse service users’ experiences (Riches et al., 2022), which could benefit caring relationships.
3.3.13. Patient diversity and AI technologies (10 articles)
There are significant gaps in understanding about how patient diversity relates to AI technologies and compassion. These gaps relate to: “high” and “low” users of technologies (Inkster et al., 2018); differences in the acceptability of technologies, e.g., service robots for healthcare (Giambattista et al., 2016), psychological evaluation (Rossi et al., 2020), or self-management technologies (Mirkovic et al., 2018); and language and communication style preferences (Herrmann-Werner et al., 2021; Boggiss et al., 2022; Eagle et al., 2022). Race-concordance has emerged as an important factor in the design and use of virtual patients and virtual clinicians, but the implications for teaching and practice are unclear and underexplored (Halan et al., 2015; Krieger et al., 2021). For example, in one design study Black men (n = 25) designed a Black male virtual clinician (VC) that was named Agent Leveraging Empathy for eXams (ALEX) and referred to as “brother-doctor”; participants preferred to interact with ALEX over their regular doctor (Wilson-Howard et al., 2021). While automated services could extend access to psychological support, research into digital therapeutic alliance is needed to ensure AI technologies work for diverse patient groups (Scholten et al., 2017; Grekin et al., 2019; Tong et al., 2022). The first therapeutic alliance instrument developed for use with embodied virtual therapists is the Virtual Therapist Alliance Scale (VTAS): preliminary assessments suggest that alliance toward a virtual therapist is a significant predictor of treatment outcome (Miloff et al., 2020). Patient diversity also needs to be considered in relation to equipping virtual agents with more human-centric prosocial rule breaking, a common beneficial feature of human ethical decision-making that is difficult to mimic in AI technologies (Ramanayake et al., 2022), as well as in relation to supporting patients’ “social convoy” (i.e., family members, friends, neighbors, formal caregiving supports) (Portz et al., 2020) to facilitate appropriate involvement and information sharing.
3.3.14. Implementation of AI technologies in education and practice settings (8 articles)
It is currently unclear how the implementation of AI technologies might affect compassion in different contexts of healthcare (Verma et al., 2021), such as medical imaging (Bleiker et al., 2020) or intensive care (Price, 2013). Little is known about how AI technologies and compassion might relate to service efficiency or patient care (Kocaballi et al., 2020a), or to public perceptions of AI capabilities (Chew and Achananuparp, 2022). Future research is needed to explore the role and implementation of VR for enhancing empathy in various real-world contexts, and the mediating role of individual differences in the use of AI-driven interventions (Louie et al., 2018; Nisha et al., 2019). Implementation of AI technologies in healthcare systems requires the development and implementation of new curricula and new approaches to teach students how to interact with AI technologies, learn within interactive learning environments, and manage AI systems (Srivastava and Waghmare, 2020).
3.3.15. Safety and clinical effectiveness of AI technologies (4 articles)
The effectiveness of VR-based “switching perspective” technologies (encouraging a self-compassionate lens) for early intervention for mental health issues is promising, but research is needed to explore safety and privacy issues in real-world contexts (Baghaei et al., 2021). Further research into general conversational agents is needed to establish guidelines for designing safe and effective response structures for different prompt types (Kocaballi et al., 2020b). The potential capabilities and risks of active assistance technologies are underexplored, and there is a need to consider informatics methods and algorithms more fully for safety and ethical reasons (Kennedy et al., 2012). It is unclear how to maintain the initial benefits and permanence of behavior change produced by short-term virtual health coaching interventions (Bevilacqua et al., 2020); further research is needed to attain lasting clinical benefits.
3.4. Key areas for development
3.4.1. Enriching education, learning, and clinical practice (10 articles)
Findings in the literature suggest there is great potential for AI technologies to enhance underexplored elements of compassion by enriching education, learning, and clinical practice (Sukhera and Poleksic, 2021; Saab et al., 2022). There appears to be an “engagement factor” (Navarrete et al., 2021) associated with immersive VR environments which could be further explored for student engagement and empathetic awareness, as well as for other elements of compassion, such as making a judgment about the suffering (the need to act). Understanding suffering could be enhanced by using immersive technologies in combination with new types of haptic technologies (technologies that create an experience of touch by applying forces, vibrations, or motions to the user) (Ling et al., 2020) or existing tele-empathy applications (Ho et al., 2017; Palanica et al., 2018). Learning from self-compassion apps about identification with a virtual body (Falconer et al., 2014) could be integrated into immersive VR interventions to enhance clinical knowledge and clinical assessment skills, in order to better understand suffering associated with the body (Plotzky et al., 2021). It could be useful to take learning from co-designed virtual health coaching apps into educational applications (Atif et al., 2022). There is potential to use VR technologies with clinical simulations and virtual patients to enhance approaches to moral development learning and ethical clinical decision making (Francis et al., 2018). Future research could explore the notion of beneficent dehumanization of care (e.g., to overcome stigma, stereotyping, negative emotions, or regret) and the implications for professional training and education (Palmer and Schwan, 2022). There is a clear need for research and education on AI technologies in relation to global humanitarian health analysis and responses (Fernandez-Luque and Imran, 2018), which could include the development of virtual health tours for different groups of health professionals to teach about health issues in different countries and regions.
3.4.2. Extending healing spaces (9 articles)
Virtual and immersive spaces may have additional benefits for patients, health professionals, and students with respect to health and wellbeing outcomes that are not yet known (Gavarkovs, 2019), such as stress reduction (Michael et al., 2019). There is potential to integrate AI technologies to deliver combined physical health and wellbeing interventions for more effective mind-body interventions for patients and healthcare professionals (Rosa, 2014; Michael et al., 2019; Zheng et al., 2022). Such AI-assisted healing spaces could be devised as individual (e.g., immersive VR) or shared virtual restorative spaces (e.g., making use of virtual worlds), drawing on known effective interventions for wellbeing. Co-therapy approaches, where community peers use avatars to share health information (Atif et al., 2022), have the potential to take clinics into communities, especially in resource-poor settings. Research on the internet of things (IoT) (Tiersen et al., 2021) opens new possibilities and challenges for seeing people’s homes as clinical spaces (Kelly et al., 2020; Bouabida et al., 2021).
3.4.3. Enhancing healing relationships (7 articles)
According to the literature, AI technologies could support compassion by enhancing healing relationships. For example, exploring and developing bonds between humans and technologies could boost the engagement and efficacy of digital therapeutics (Darcy et al., 2021). It could be useful to explore further how therapeutic relationships are affected by virtual characters that exhibit certain perceived qualities, such as gender (García et al., 2003) or ethnicity (Marcoux et al., 2021), to inform virtual health coach systems (Bevilacqua et al., 2020). Further research into traits and behaviors such as humor, self-disclosure, facial expressions, eye gaze, body posture, and gestures (Johanson et al., 2021) could inform effective human-robot interaction and human-human interactions in healthcare (Liu and Sundar, 2018). Cross-cultural research could inform the ongoing development (in New Zealand) of an autonomous empathy system for a digital human, to understand the challenges and opportunities for empathetic interactions (Loveys et al., 2022).
4. Discussion
4.1. Contribution of this review
The core contribution of this review is to demonstrate the association between AI technologies and compassion in healthcare and to elaborate on the nature and complexities of this association. Specifically, the review (1) shows the ways that AI technologies are currently being debated, developed, and used to enhance compassion in healthcare systems, so that these areas might be explored in more depth in the future; and (2) reconceptualizes compassion as a human-AI system of intelligent caring comprising six elements. These new understandings are theoretically informed, derived from an established scoping review methodology (Arksey and O’Malley, 2005; Peters et al., 2020) and a systematic process of data extraction and thematic analysis (Mays et al., 2001). A multidisciplinary team interrogated the themes and interpreted the findings for research and practice. Future development work using deliberative methods could test the validity of the findings with interdisciplinary cohorts of health professionals, educators, students, technologists, patients, and researchers, for example, to explore the themes that have been identified and to debate priorities for future research and practice. The present review has developed and provided a set of search terms, and captured baseline data, which means the exercise could be repeated in a year or two to investigate developments in this emerging topic area.
4.2. Limitations
As this scoping review only includes articles published in English, it is biased toward westernized perspectives of healthcare and compassion. It does not consider alternative cultural understandings or ways of perceiving compassion, for example, the African philosophy of ubuntu or the Buddhist maitrî (also known as mettâ). The literature, and therefore this review, focuses firmly on compassion for people, rather than alternative understandings of compassion for sentient beings or the environment, which are increasingly relevant to health and healthcare services. The review did not examine whether specific AI technologies, or their use in particular interventions or contexts, are effective, usable, and adoptable. It did not apply statistical tests or report percentage values for the adoption or use of different kinds of technological practices or tools, satisfaction or dissatisfaction with them, or any other type of outcome. The review does not draw on learning from other fields (e.g., AI in military ethics, automated vehicle ethics, computer generated imagery and the film industry, business hybrid systems of online/offline communication, medical crowdsourcing, etc.). The identified key areas for potential are biased toward present use cases in healthcare, particularly applications in elderly care and dementia and the search for AI-driven solutions to an aging population. Issues relating to young people, minority patient groups, and people who experience health inequalities (Scambler, 2012) or barriers to healthcare (Powell et al., 2016) are likely to be underrepresented in the results.
4.3. Reconceptualizing compassion as a human-AI system of intelligent caring
It is challenging to think of compassion as a system rather than a feeling or experience, yet a systems perspective is where the meaning and value of the concept lies: offering possibilities to align and integrate motives and motivation for intelligent caring behavior in humans, AI technologies, and healthcare systems (Lansing, 2003; Levin, 2003). Compassion, in this reconceptualization, is not about managing professional virtues or mimicking emotions (Pedersen and Roelsgaard Obling, 2019); it is about combining human and AI capabilities in an integrated system of intelligent caring.
Reconceptualizing compassion as a human-AI system of intelligent caring connects thinking about compassion at the individual human level (i.e., human psychology and behavior) with compassion as a function of AI technologies (e.g., artificial empathy, artificial compassion, HCD and technology design practices), and with compassion as an essential aspect of healthcare system effectiveness and human flourishing. This conceptualization allows compassion, as it is perceived and manifested in everyday healthcare practice, to connect with highly technical discourses about the use of AI technologies in healthcare systems and human-machine boundaries (De Togni et al., 2021). It offers clear elements through which to explore how, together, humans and AI technologies might become more intelligent and caring.
As noted in the findings on how AI technologies are being used to enhance compassion in healthcare (Section “3.3 How AI technologies enhance compassion”), a new sixth element of compassion emerged from the analysis of the literature (as shown in the final row of Table 4). That is, “(6) Attention to the effect and outcomes of the response.” This newly identified element corresponds with previous compassion research, which suggests that experiencing or witnessing helpful human interactions is an important mechanism for developing understandings about compassion (Walter et al., 2015). This additional sixth element appears to be necessary to complete a feedback cycle, so that the person, the AI technology, or the healthcare system is aware of whether suffering has been alleviated, thereby creating future motivation (Ford, 1992). This sixth element also corresponds with the importance of learning over time, e.g., through training or performance feedback, learning from examples of excellence, and learning from failings in healthcare.
Figure 2 draws together the elements developed from the original working definition of compassion (Section “1.4 Definitions and scope”) and informed by the review findings, to illustrate how compassion may be reconceptualized as a system of at least six interrelated component elements, which may or may not interconnect at any individual, organizational, or system level to form a cyclical feedback system. Although the components are numbered 1–6, they may exist in different health systems, areas of practice, or health professionals’ behaviors simultaneously. Alternatively, some or all elements may be absent or underdeveloped. For example, a person may have very good empathetic awareness, but this may not translate into a decision to act or a response with the intention to alleviate suffering. At the macro level, a healthcare organization may generate responses with the intention to alleviate suffering yet fail to connect with the suffering, meaning that patients are not consulted, are unaware, or do not feel involved in decisions about the type of response.
This reconceptualization has six component elements of compassion that are numbered 1–6 for clarity, but they are not necessarily sequential. Elements can be learnt and enhanced by individual humans and by some AI technologies that have appropriate programming features (Mason, 2015, 2023). At a higher macro level, these elements of a compassion system can be developed by whole healthcare organizations, or across healthcare systems, with strategic and supportive interventions. For example, healthcare professionals in training can learn to develop empathetic awareness but also to understand suffering in context, such as the provision of high-quality healthcare to address health inequalities and promote health equity. This reconceptualization of compassion as a system acknowledges the ethical challenges of artificial empathy and the unease about virtual humans’ mimicry, deception, and moral incongruence (Montemayor et al., 2021), and asserts a way forward through the authentic empathy debate. It does this by showing that it is possible for humans and AI together to promote collective good (Day et al., 2021). The nature of the system through which this is made possible is encapsulated by the six component elements of compassion. AI technologies can contribute to each, or all, of these different elements of compassion. Thus, this understanding offers a much more significant and useful contribution of AI technologies to compassionate patient care and healthcare systems than do concerns about replacing human empathy with digital empathy.
We suggest that when all six elements of compassion are present, functioning well, and interconnected, compassion is an intelligent caring system. Compassion involves intelligence in the sense that it is a learning system that is responsive and adjusts to new information and feedback. This is because the sixth element provides feedback (to the individual, the organization, or the system) about whether compassion has occurred and suffering has been alleviated. For example, feedback can tell humans when to hold back from using AI technologies because a human empathetic response is what a person wants and needs to lessen their suffering (e.g., breaking bad news, end of life care, apologizing for failings in care) (Elkin, 2021).
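To make the idea of a cyclical feedback system concrete, the sketch below models the six elements as stages in a loop in which the sixth element feeds outcome information back into motivation. This is a conceptual illustration only, using our own hypothetical names (compassion_cycle and its arguments); it is not an implementation drawn from the reviewed literature.

```python
# Conceptual sketch of the six-element compassion cycle described above;
# purely illustrative, not an implementation from the reviewed literature.
def compassion_cycle(detect, understand, connect, judge, respond, evaluate,
                     motivation=1.0, cycles=3):
    """Each argument is a callable standing in for a human, an AI
    technology, or a human-AI combination performing one element."""
    for _ in range(cycles):
        signal = detect()                    # (1) empathetic awareness of suffering
        context = understand(signal)         # (2) understanding suffering in context
        channel = connect(context)           # (3) connecting with the suffering
        if judge(channel):                   # (4) making a judgment: the need to act
            response = respond(channel)      # (5) responding with intention to alleviate
            alleviated = evaluate(response)  # (6) attending to effects and outcomes
            # Feedback: knowing whether suffering was alleviated shapes
            # future motivation to care, closing the cycle.
            motivation += 0.1 if alleviated else -0.1
    return motivation

# Toy run with stand-in callables; a real system would involve clinicians,
# sensors, dialog systems, and organizational feedback mechanisms.
m = compassion_cycle(
    detect=lambda: "patient reports pain",
    understand=lambda s: {"signal": s, "context": "post-operative care"},
    connect=lambda c: c,
    judge=lambda c: True,
    respond=lambda c: "analgesia plus reassurance",
    evaluate=lambda r: True,  # element 6: was suffering alleviated?
)
print(m)  # motivation rises when outcomes confirm suffering was alleviated
```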
The review shows that concerns in the literature regarding AI technologies center on the issue of whether AI technologies are fundamentally about the replication of humans (Section 3.2). The review highlights that there are a range of AI technologies in development that aim to replicate human bodies, voices, and mannerisms (e.g., affective computing, robotics, embodied virtual agents), to imitate human relationships (digital bonds, digital therapeutic relationships, etc.), or to reproduce human capabilities (e.g., job roles, skill sets, knowledge, abilities). Aiming to replicate humans and human relationships could be problematic in the long term, and not only for technical reasons or questions of authenticity and artificial empathy. From a sociological point of view, replication risks re-forming harmful or unfair social structures (e.g., power dynamics, status, capital, agency) (Parsons, 1982) within new forms of deceptive relationships based on artificial emotions (Montemayor et al., 2021). However, the results on how AI technologies are being used (Section “3.3 How AI technologies enhance compassion”) show more positive and transformative ambitions, in the form of innovative applications and studies of AI technologies that seek to “augment,” “enrich,” and “enhance” human lives, not to replace them.
Current applications in education and practice are providing engaging learning experiences and supporting human-human healing relationships, as well as providing some effective interventions for health and wellbeing, such as therapeutic counseling. These applications are doing this in unique and original ways that are made possible through AI, e.g., immersion, VR, perspective switching, avatars, and simulation suits. Rather than evaluating the extent to which a given technology has successfully replicated human capacities, or designing technologies according to understandings of the human body or mind, future research might instead seek to transform rather than replicate pre-existing human or societal systems with their biases, faults, and limitations. The potential of AI technologies is not so much the simulation of human intelligence and care-giving but rather an expansion of the possibilities through which to realize these human capacities.
This review has identified some clear areas in which to explore new and novel approaches to human-AI intelligent caring. There are opportunities for innovation (and possible commercial opportunities) to build on and develop: (1) better human-AI systems for detecting suffering (e.g., pain, distress, risk, and discrimination) to fine-tune AI/human empathetic awareness; (2) use of human-AI intelligence to understand suffering in context; (3) better human-AI verbal and non-verbal communication systems to connect with suffering; (4) human-AI intelligence to inform decisions about the need to act; (5) more authentic and sustainable forms of human-AI empathetic response and interventions; and (6) better human-AI intelligence about the effects and outcomes of responses and whether they have alleviated suffering or need to be modified.
In terms of motivation for a human-AI intelligent caring system, it is evident from this literature that AI technologies are helping humans to develop empathetic understanding of human experiences of living with debilitating conditions (Groza et al., 2017; Palanica et al., 2018; Schmidt et al., 2022). Next steps could be to build evidence about how AI technologies might support new ways of connecting (e.g., verbal, physical, signs and symbols) with various forms of suffering (e.g., pain, distress, risk, disadvantage); enable virtual/real proximity such as safe relating (Gilbert, 2021); explore therapeutic alignment (patient preferences for human and virtual providers); or address stigma (e.g., beneficent dehumanization of care). Motivation can be found in examples of AI technologies that are alleviating human suffering; specific use cases identified by this review could be considered humanitarian or miraculous in their effects, such as helping paralyzed people to walk (Sarkar et al., 2020). However, there are more ordinary applications that are nonetheless useful and can build compassion, such as providing the right information to patients at the right time to alleviate distress (Papadakos et al., 2017).
What these findings also highlight is that people are inspired to help each other to help themselves, through the new capabilities of AI technologies. This is evidenced by the development of numerous self-care and self-compassion technologies. There are also entrepreneurial motives (Shepherd and Majchrzak, 2022) which need to be considered in relation to a system of human-AI intelligent caring. Future research could explore the themes of healing spaces and healing relationships to boost self-compassion and self-care in patients and health professionals. There is a need to direct more attention not only to the theory of compassion, but to how to use AI technologies to help close the compassion cycle: in other words, how AI technologies can be an important tool for informing and assuring healthcare quality at multiple levels, from individual practitioners, AI technologies, and healthcare organizations to whole healthcare systems, for seeing when responses have made a positive difference to people’s lives or provided motivation to continue to care.
This reconceptualization of compassion aligns with calls to develop compassion as a healthcare system goal and professional development priority (Gray and Cox, 2015; Swendiman et al., 2019). It helps to connect the subjective, experiential, and practical dimensions of compassion (e.g., getting people home safely, organizing transport, “going the extra mile”) with an understanding of how AI technologies might support societal forms of caring (e.g., protecting human rights, advancing health equality) through their individual design and combined effects (Day et al., 2021). A systems perspective of compassion proposes that not everyone, or every AI application, needs to be delivering empathetic responses in a healthcare system all the time (Bleiker et al., 2020). This is not to say that healthcare professionals should not aim to be “highly humanistic” in their practice (Swendiman et al., 2019), but rather that the system of compassion in healthcare extends beyond human interactions at the individual level. Therefore, compassion can relate not only to direct clinical treatments and patient care but also to indirect actions, such as the development of AI-driven organizational systems for patient feedback, guidance for the use of AI technologies, professional codes of practice, and so on, in order to employ human health professionals and AI technologies to best effect within an overall system of intelligent caring.
4.4. Implications
In a complex adaptive system such as healthcare, human-AI intelligent caring will need to be implemented, not as an ideology, but through strategic choices, incentives, regulation, professional education and training, and joined-up thinking across these areas. Research funders internationally, across health, education, and technology, can encourage research and development into the topic of AI technologies and compassion. Interdisciplinary empirical research is needed to explore the educational effectiveness of AI-assisted learning, patient diversity and AI technologies, and the safety and clinical effectiveness of AI technologies. Theoretically informed research should take a long-term view of how AI technologies can enhance compassion by enriching education, learning, and clinical practice; extending healing spaces; and enhancing healing relationships. Educators in computing, design sciences, health professional education, and other fields and disciplines of science and humanities can inform themselves about the association between AI technologies and compassion and promote an understanding of compassion as a human-AI system of intelligent caring involving six elements (see Figure 2). Educators can make use of modern learning technologies to enhance learning engagement and student empathetic awareness, and to teach how to respond to different types of suffering (e.g., pain, distress, risk, and disadvantage), communication skills, and teamworking. Technologists and computer scientists should be aware that compassion is important and beneficial to human health as well as to the sustainability of healthcare systems. They can consider how, in some applications, it could be useful to build in artificial empathy (Dial, 2018) or artificial compassion (Mason, 2015, 2023), while in other contexts AI technologies can contribute to specific elements of compassion within healthcare/social systems (e.g., supporting sensitivity to suffering). Health professionals can link into interprofessional virtual communities of practice (McLoughlin et al., 2018) to learn and share knowledge of how AI technologies might support compassion in healthcare and to develop the practice of human-AI intelligent caring.
5. Conclusion
This systematic scoping review of the literature shows there is an association between AI technologies and compassion in healthcare. Interest in this association has grown internationally over the last decade, with more articles debating the topic and reporting on developments each year. In a range of healthcare contexts, AI technologies are being used to develop or enhance empathetic awareness; empathetic response and relational behavior; communication skills; health coaching; therapeutic interventions; moral development learning; clinical knowledge and clinical assessment; healthcare quality assessment; therapeutic bond and therapeutic alliance; as well as to provide health information and advice. The findings inform a reconceptualization of compassion as a human-AI system of intelligent caring comprising six elements. Future research and development into the association between AI technologies and compassion could enrich education, learning, and clinical practice; extend healing spaces; and enhance healing relationships in new and novel ways, made possible by artificial intelligence.
Author contributions
EM and FR initiated the review and collaborated with CM, TZ, KP, RS, and MR to plan and develop the review focus and approach. EM and TZ undertook the searches. EM wrote the draft. All authors contributed to discussions online (Sept 2021–Jan 2022) and thereafter contributed to conceptualization, thematic analysis, interpretation of the results, and approved the submitted version and commented on the final draft.
Acknowledgments
We thank AIMed and The Henley Forum for welcoming EM as a guest to discussions relating to AI technologies and digital transformation. Patricia Grocott, Professor of Nursing Technology and Innovation, King’s College London, provided helpful comments and advice about a previous draft of this article. We also thank the reviewers of this article for their time and expertise.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Supplementary material
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyg.2022.971044/full#supplementary-material
References
Abdullah, Y. I., Schuman, J. S., Shabsigh, R., Caplan, A., and Al-Aswad, L. A. (2021). Ethics of artificial intelligence in medicine and ophthalmology. Asia Pac. J. Ophthalmol. 10, 289–298. doi: 10.1097/APO.0000000000000397
Abeyaratne, C., Bell, J. S., Dean, L., White, P., and Maher-Sturgess, S. L. (2020). Engaging older people as university-based instructors: a model to improve the empathy and attitudes of pharmacists in training. Curr. Pharm. Teach. Learn. 12, 58–64. doi: 10.1016/j.cptl.2019.10.011
Ajeesh, A. K., and Rukmini, S. (2022). Posthuman perception of artificial intelligence in science fiction: an exploration of Kazuo Ishiguro’s Klara and the Sun. AI Soc. 22:1533. doi: 10.1007/s00146-022-01533-9
Alameddine, M., Soueidan, H., Makki, M., Tamim, H., and Hitti, E. (2019). The use of smart devices by care providers in emergency departments: cross-sectional survey design. JMIR Mhealth Uhealth. 7:e13614. doi: 10.2196/13614
Ali, S., and Terry, L. (2017). Exploring senior nurses’ understanding of compassionate leadership in the community. Br. J. Commun. Nurs. 22, 77–87. doi: 10.12968/bjcn.2017.22.2.77
Ali, S., Kleib, M., Paul, P., Petrovskaya, O., and Kennedy, M. (2022). Compassionate nursing care and the use of digital health technologies: a scoping review. Int. J. Nurs. Stud. 127:104161. doi: 10.1016/j.ijnurstu.2021.104161
Alrassi, J., Katsufrakis, P. J., and Chandran, L. (2021). Technology can augment, but not replace, critical human skills needed for patient care. Acad. Med. 96, 37–43. doi: 10.1097/ACM.0000000000003733
Amini, H., Gregory, M. E., Abrams, M. A., Luna, J., Roland, M., Sova, L. N., et al. (2021). Feasibility and usability study of a pilot immersive virtual reality-based empathy training for dental providers. J. Dent. Educ. 85, 856–865. doi: 10.1002/jdd.12566
Amini, R., Lisetti, C. L., Yasavur, U., and Rishe, N. (2013). “On-demand virtual health counselor for delivering behavior-change health interventions,” in Proceeding of the 2013 IEEE international conference on healthcare informatics, 46–55. doi: 10.1109/ICHI.2013.13
Andersson, M., Axelsson, K., Fältholm, Y., and Lindberg, I. (2017). Technologies in older people’s care. Nurs. Ethics 24, 125–137. doi: 10.1177/0969733015594665
Arksey, H., and O’Malley, L. (2005). Scoping studies: towards a methodological framework. Int. J. Soc. Res. Methodol 8, 19–32. doi: 10.1080/1364557032000119616
Atif, N., Nazir, H., Sultan, Z. H., Rauf, R., Waqas, A., Malik, A., et al. (2022). Technology-assisted peer therapy: a new way of delivering evidence-based psychological interventions. BMC Health Serv Res. 22:842. doi: 10.1186/s12913-022-08233-6
Baghaei, N., Ahmadi, A., Khaliq, I., and Liang, H. (2021). “Individualised virtual reality for supporting depression: feedback from mental health professionals,” in Proceeding of the 2021 IEEE international symposium on mixed and augmented reality adjunct (ISMAR-adjunct), 63–67. doi: 10.1109/ISMAR-Adjunct54149.2021.00022
Baghaei, N., Hach, S., Khaliq, I., Stemmet, L., Krishnan, J., Naslund, J., et al. (2019). “Increasing self-compassion in young people through virtual reality,” in Proceeding of the 2019 IEEE international symposium on mixed and augmented reality adjunct (ISMAR-adjunct), (IEEE), doi: 10.1109/ISMAR-Adjunct.2019.00042
Baguley, S. I., Pavlova, A., and Consedine, N. S. (2022). More than a feeling? What does compassion in healthcare ‘look like’ to patients? Health Exp. 25, 1691–1702. doi: 10.1111/hex.13512
Baguley, S., Dev, V., Fernando, A., and Consedine, N. (2020). How do health professionals maintain compassion over time? Insights from a study of compassion in health. Front. Psychol. 29:564554. doi: 10.3389/fpsyg.2020.564554
Bajwa, J., Munir, U., Nori, A., and Williams, B. (2021). Artificial intelligence in healthcare: transforming the practice of medicine. Fut. Healthc J. 8, e188–e194. doi: 10.7861/fhj.2021-0095
Ball, S., Bluteau, P., Clouder, D. L., Adefila, A., and Graham, S. (2015). “MyShoes: an immersive simulation of dementia,” in Proceedings of the international conference on e-learning, Vol. 2015, (Academic Conferences Limited.), 16–23.
Baqeas, M. H., Davis, J., and Copnell, B. (2021). Compassion fatigue and compassion satisfaction among palliative care health providers: a scoping review. BMC Palliat Care. 20:88. doi: 10.1186/s12904-021-00784-5
Beverly, E., Rigot, B., Love, C., and Love, M. (2022). Perspectives of 360-degree cinematic virtual reality: interview study among health care professionals. JMIR Med. Educ. 8:e32657. doi: 10.2196/32657
Bevilacqua, R., Casaccia, S., Cortellessa, G., Astell, A., Lattanzio, F., Corsonello, A., et al. (2020). Coaching through technology: a systematic review into efficacy and effectiveness for the ageing population. Int. J. Environ. Res. Public Health 2020:930. doi: 10.3390/ijerph17165930
Bin Kamarudin, M. F., and Zary, N. (2019). Augmented reality, virtual reality and mixed reality in medical education: a comparative web of science scoping review. Preprints 2019:2019040323. doi: 10.20944/preprints201904.0323.v1
Bjorklund, P. (2016). Gossamer threads: commentary on the impact of digital technology on the developing brain and the capacity for empathy. ANS Adv. Nurs. Sci. 39, 71–84. doi: 10.1097/ANS.0000000000000105
Blease, C. R., Kaptchuk, T. J., Bernstein, M. H., Mandl, K. D., Halamka, J. D., and DesRoches, C. M. (2019). Artificial intelligence and the future of primary care: exploratory qualitative study of UK general practitioners’ views. J. Med. Int. Res. 21:e12802. doi: 10.2196/12802
Blease, C., Locher, C., Leon-Carlyle, M., and Doraiswamy, M. (2020). Artificial intelligence and the future of psychiatry: qualitative findings from a global physician survey. Digit Health. 27:2055207620968355. doi: 10.1177/2055207620968355
Bleazard, M. (2020). Compassion fatigue in nurses caring for medically complex children. J. Hosp. Palliat. Nurs. 22, 473–478. doi: 10.1097/NJH.0000000000000688
Bleiker, J., Knapp, K., Morgan-Trimmer, S., and Hopkins, S. (2020). What medical imaging professionals talk about when they talk about compassion. J. Med. Imag. Radiat. Sci. 51, S44–S52. doi: 10.1016/j.jmir.2020.08.009
Blomberg, K., Griffiths, P., Wengström, Y., May, C., and Bridges, J. (2016). Interventions for compassionate nursing care: a systematic review. Int. J. Nurs. Stud. 62, 137–155. doi: 10.1016/j.ijnurstu.2016.07.009
Boggiss, A., Consedine, N., Hopkins, S., Silvester, C., Jefferies, C., Hofman, P., et al. (2022). A self-compassion chatbot to improve the wellbeing of adolescents with type 1 diabetes during the COVID-19 pandemic: what do adolescents and their healthcare professionals want? JMIR Preprints 2022:40641. doi: 10.2196/preprints.40641
Bogossian, F., Winters-Chang, P., and Tuckett, A. (2014). “The pure hard slog that nursing is”: a qualitative analysis of nursing work. J. Nurs. Scholarship Offi. Publi. Sigma Theta Tau Int. Honor Soc. Nurs. 46, 377–388. doi: 10.1111/jnu.12090
Bond, C. (2002). Positive touch and massage in the neonatal unit: a british approach. Sem. Neonatol. 7, 477–486. doi: 10.1053/siny.2002.0149
Bouabida, K., Malas, K., Talbot, A., Desrosiers, M. È, Lavoie, F., Lebouché, B., et al. (2021). Remote patient monitoring program for COVID-19 patients following hospital discharge: a cross-sectional study. Front. Digit. Health 3:721044. doi: 10.3389/fdgth.2021.721044
Boucher, E. M., Harake, N., Ward, H. E., Stoeckl, S. E., Vargas, J., Minkel, J. D., et al. (2021). Artificially intelligent chatbots in digital mental health interventions: a review. Exp. Rev. Med. Dev. 18, 37–49. doi: 10.1080/17434440.2021.2013200
Braillon, A., and Taiebi, F. (2020). Practicing “reflective listening” is a mandatory prerequisite for empathy. Patient Educ. Coun. 103, 1866–1867. doi: 10.1016/j.pec.2020.03.024
Brammer, S., Regan, S., Collins, C., and Gillespie, G. (2022). Developing innovative virtual reality simulations to increase health care providers’ understanding of social determinants of health. J. Contin. Educ. Health Prof. 42, 60–65. doi: 10.1097/CEH.0000000000000400
Brandt, C. J., Søgaard, G. I., Clemensen, J., Søndergaard, J., and Nielsen, J. B. (2018). Determinants of successful ehealth coaching for consumer lifestyle changes: qualitative interview study among health care professionals. J. Med. Int. Res. 20:e237. doi: 10.2196/jmir.9791
Braveman, P., and Gottlieb, L. (2014). The social determinants of health: it’s time to consider the causes of the causes. Public Health Rep. 129, 19–31. doi: 10.1177/00333549141291S206
Bridge, P., and Bridge, R. (2019). Artificial Intelligence in radiotherapy: a philosophical perspective. J. Med. Imag. Radiat. Sci. 50, S27–S31. doi: 10.1016/j.jmir.2019.09.003
Broadbent, E., Johanson, D., and Shah, J. (2018). “A new model to enhance robot-patient communication: applying insights from the medical world,” in Social robotics - 10th international conference, ICSR 2018, Qingdao, China, eds S. G. Shuzhi, J. J. Cabibihan, M. A. Salichs, E. Broadbent, H. He, and A. R. Wagner. doi: 10.1007/978-3-030-05204-1_30
Brown, L., Houston, E., Amonoo, H., and Bryant, C. (2020). Is self-compassion associated with sleep quality? A meta-analysis. Mindfulness 12, 82–91. doi: 10.1007/s12671-020-01498-0
Brown, S. L., and Brown, R. M. (2015). Connecting prosocial behavior to improved physical health: contributions from the neurobiology of parenting. Neurosci. Biobehav. Rev. 55, 1–17. doi: 10.1016/j.neubiorev.2015.04.004
Brydon, M., Kimber, J., Sponagle, M., MacLaine, J., Avery, J., Pyke, L., et al. (2021). Virtual reality as a tool for eliciting empathetic behaviour in carers: an integrative review. J. Med. Imag. Radiat. Sci. 52, 466–477. doi: 10.1016/j.jmir.2021.04.005
Buchanan, C., Howitt, M. L., Wilson, R., Booth, R. G., Risling, T., and Bamford, M. (2020). Predicted influences of artificial intelligence on the domains of nursing: scoping review. JMIR Nurs. 3:e23939. doi: 10.2196/23939
Buijs-Spanjers, K. R., Hegge, H. H. M., Cnossen, F., Hoogendoorn, E., Jaarsma, D. A. D. C., and de Rooij, S. E. (2019). Dark play of serious games: effectiveness and features (G4HE2018). Games Health J. 8, 301–306. doi: 10.1089/g4h.2018.0126
Buolamwini, J. (2017). Gender shades: intersectional phenotypic and demographic evaluation of face datasets and gender classifiers. Ph.D. thesis.
Casagrande, A. (2016). La bientraitance, naissance d’une notion dans le clair-obscur des espaces de soins [compassionate care, emergence of a notion in the light and shade of the care environment]. Soins Revue Reference Infirmiere 805, 22–25. doi: 10.1016/j.soin.2016.03.004
Chew, H. S. J., and Achananuparp, P. (2022). Perceptions and needs of artificial intelligence in health care to increase adoption: scoping review. J. Med. Int. Res. 24:e32939. doi: 10.2196/32939
Clavelle, J. T., Sweeney, C. D., Swartwout, E., Lefton, C., and Guney, S. (2019). Leveraging technology to sustain extraordinary care: a qualitative analysis of meaningful nurse recognition. J. Nurs. Administr. 49, 303–309. doi: 10.1097/NNA.0000000000000757
Collier, R. (2017). Electronic health records contributing to physician burnout. CMAJ Can. Med. Assoc. J. Assoc. Med. Can. 189, E1405–E1406. doi: 10.1503/cmaj.109-5522
Combs, C. D., and Combs, P. F. (2019). Emerging roles of virtual patients in the age of AI. AMA J. Ethics 21, E153–E159. doi: 10.1001/amajethics.2019.153
Critchley, S. (2015). Beyond artificial compassion. Available online at: http://radar.oreilly.com/2015/02/beyond-ai-artificial-compassion.html (accessed December 14, 2022).
Crotty, B. H., and Somai, M. (2022). Bugs in the virtual clinic: confronting telemedicine’s challenges through empathy and support. J. Particip. Med. 14:e25688. doi: 10.2196/25688
Daher, K., Casas, J., Abou Khaled, O., and Mugellini, E. (2020). “Empathic chatbot response for medical assistance,” in Proceedings of the 20th ACM international conference on intelligent virtual agents (ACM IVA 2020), doi: 10.1145/3383652.3423864
Darcy, A., Daniels, J., Salinger, D., Wicks, P., and Robinson, A. (2021). Evidence of human-level bonds established with a digital conversational agent: cross-sectional, retrospective observational study. JMIR Format. Res. 5:e27868. doi: 10.2196/27868
Davendralingam, N., Kanagaratnam, M., and Davagnanam, I. (2017). ‘A crisis in caring’: a place for compassionate care in today’s medicine. J. R. Soc. Med. 110, 225–226. doi: 10.1177/0141076817703891
Davenport, T., and Kalakota, R. (2019). The potential for artificial intelligence in healthcare. Fut. Health. J. 6, 94–98. doi: 10.7861/futurehosp.6-2-94
Day, J., Finkelstein, J. C., Field, B. A., Matthews, B., Kirby, J. N., and Doty, J. R. (2021). Compassion-focused technologies: reflections and future directions. Front. Psychol. 12:603618. doi: 10.3389/fpsyg.2021.603618
De Bakey, M. (2018). Medical research and the golden rule. JAMA 319:726. doi: 10.1001/jama.2017.12248
De Togni, G., Erikainen, S., Chan, S., and Cunningham-Burley, S. (2021). What makes AI ‘intelligent’ and ‘caring’? Exploring affect and relationality across three sites of intelligence and care. Soc. Sci. Med. 277:113874. doi: 10.1016/j.socscimed.2021.113874
de Zulueta, P. (2021). Confidentiality, privacy, and general practice: GPDPR and the brave new world of ‘big data’. Br. J. General Pract. J. R. Coll. General Practit. 71, 420–421. doi: 10.3399/bjgp21X717017
de Zulueta, P. C. (2015). Developing compassionate leadership in health care: an integrative review. J. Health. Lead. 8, 1–10. doi: 10.2147/JHL.S93724
Dean, S., Halpern, J., McAllister, M., and Lazenby, M. (2020). Nursing education, virtual reality and empathy? Nurs. Open. 7, 2056–2059. doi: 10.1002/nop2.551
Demarinis, S. (2022). Healthcare providers use virtual reality to elicit empathy. Exp. J. Sci. Healing 18:1550.
Dev, V., Fernando, A. T. III, Kirby, J. N., and Consedine, N. S. (2019). Variation in the barriers to compassion across healthcare training and disciplines: a cross-sectional study of doctors, nurses, and medical students. Int. J. Nurs. Stud. 90, 1–10. doi: 10.1016/j.ijnurstu.2018.09.015
Dewar, B., and Cook, F. (2014). Developing compassion through a relationship centred appreciative leadership programme. Nurse Educ. Today 34, 1258–1264. doi: 10.1016/j.nedt.2013.12.012
Dial, M. (2018). Heartificial empathy: putting heart into business and artificial intelligence. London: Digital Proof Press.
Doing-Harris, K., Mowery, D. L., Daniels, C., Chapman, W. W., and Conway, M. (2017). Understanding patient satisfaction with received healthcare services: a natural language processing approach. AMIA Ann. Symp. Proc. 2016, 524–533.
Doraiswamy, P. M., Blease, C., and Bodner, K. (2020). Artificial intelligence and the future of psychiatry: insights from a global physician survey. Artif Intell Med. 102:101753. doi: 10.1016/j.artmed.2019.101753
Dori, D., and Sillitto, H. (2017). What is a system? An ontological framework. Syst. Engin. 20, 207–219. doi: 10.1002/sys.21383
Dragano, N., and Lunau, T. (2020). Technostress at work and mental health: concepts and research results. Curr. Opin. Psychiatry. 33, 407–413. doi: 10.1097/YCO.0000000000000613
Durkin, J., Jackson, D., and Usher, K. (2021). Compassionate practice in a hospital setting: experiences of patients and health professionals: a narrative inquiry. J. Adv. Nurs. 78, 1112–1127. doi: 10.1111/jan.15089
Durkin, M., Gurbutt, R., and Carson, J. (2018). Qualities, teaching, and measurement of compassion in nursing: a systematic review. Nurse Educ. Today 63, 50–58. doi: 10.1016/j.nedt.2018.01.025
Eagle, T., Blau, C., Bales, S., Desai, N., Li, V., and Whittaker, S. (2022). “I don’t know what you mean by ‘I am anxious’”: a new method for evaluating conversational agent responses to standardized mental health inputs for anxiety and depression. ACM Trans. Interact. Intell. Syst. 12, 1–23. doi: 10.1145/3488057
Ehret, A., Joormann, J., and Berking, M. (2015). Examining risk and resilience factors for depression: the role of self-criticism and self-compassion. Cogn. Emot. 29, 1496–1504. doi: 10.1080/02699931.2014.992394
Elkin, B. (2021). When empathy matters most. Am. J. Med. 134, 1304–1305. doi: 10.1016/j.amjmed.2021.05.004
Esmaeilzadeh, P., Mirzaei, T., and Dharanikota, S. (2021). Patients’ perceptions toward human-artificial intelligence interaction in health care: experimental study. J. Med. Int. Res. 23:e25856. doi: 10.2196/25856
Falconer, C. J., Slater, M., Rovira, A., King, J. A., Gilbert, P., Antley, A., et al. (2014). Embodying compassion: a virtual reality paradigm for overcoming excessive self-criticism. PLoS One 9:e111933. doi: 10.1371/journal.pone.0111933
Fernandez-Luque, L., and Imran, M. (2018). Humanitarian health computing using artificial intelligence and social media: a narrative literature review. Int. J. Med. Inform. 114, 136–142. doi: 10.1016/j.ijmedinf.2018.01.015
Fernando, A. T. III, and Consedine, N. S. (2014). Beyond compassion fatigue: the transactional model of physician compassion. J. Pain Sympt. Manage. 48, 289–298. doi: 10.1016/j.jpainsymman.2013.09.014
Field, T. (2014). Massage therapy research review. Complement. Ther. Clin. Pract. 20, 224–229. doi: 10.1016/j.ctcp.2014.07.002
Figley, C., and Regan Figley, K. (2017). “Compassion fatigue resilience,” in The oxford handbook of compassion science, eds E. M. Seppälä, E. Simon-Thomas, S. L. Brown, M. C. Worline, C. D. Cameron, and J. R. Doty (New York, NY: Oxford University Press). doi: 10.1093/oxfordhb/9780190464684.013.28
Figueroa, C. A., Luo, T., Aguilera, A., and Lyles, C. R. (2021). The need for feminist intersectionality in digital health. Lancet Digital Health 3, e526–e533. doi: 10.1016/S2589-7500(21)00118-7
Flemotomos, N., Martinez, V. R., Chen, Z., Singla, K., Ardulov, V., Peri, R., et al. (2022). Automated evaluation of psychotherapy skills using speech and language technologies. Behav. Res. Methods 54, 690–711. doi: 10.3758/s13428-021-01623-4
Fleury-Perkins, C., and Paris, M. (2019). L’intelligence artificielle, réflexion philosophique [artificial intelligence, philosophical reflection]. Soins Revue Reference Infirm. 64, 24–27. doi: 10.1016/j.soin.2019.05.002
Flores, R., and Brown, P. (2018). The changing place of care and compassion within the English NHS: an Eliasean perspective. Soc. Theory Health 16, 156–171. doi: 10.1057/s41285-017-0049-y
Ford, M. E. (1992). “Summary of motivational systems theory,” in Motivating humans: goals, emotions, and personal agency beliefs (Thousand Oaks, CA: SAGE Publications), 244–257.
Fotaki, M. (2015). Why and how is compassion necessary to provide good quality healthcare? Int. J. Health Policy Manage. 4:199. doi: 10.15171/ijhpm.2015.66
Francis, K. B., Gummerum, M., Ganis, G., Howard, I. S., and Terbeck, S. (2018). Virtual morality in the helping professions: simulated action and resilience. Br. J. Psychol. 109, 442–465. doi: 10.1111/bjop.12276
Friis, A., Johnson, M., Cutfield, R., and Consedine, N. (2016). Kindness matters: a randomized controlled trial of a mindful self-compassion intervention improves depression, distress, and HbA1c among patients with diabetes. Diab. Care 39, 1963–1971. doi: 10.2337/dc16-0416
Fritzsche, H., Barbazzeni, B., Mahmeen, M., Haider, S., and Friebe, M. (2021). A structured pathway toward disruption: a novel HealthTec innovation design curriculum with entrepreneurship in mind. Front. Public Health 9:715768. doi: 10.3389/fpubh.2021.715768
Gallagher, A., and Wainwright, P. (2005). The ethical divide. Nurs. Stand. 20, 22–25. doi: 10.7748/ns.20.7.22.s25
Gallos, P., Menychtas, A., Panagopoulos, C., Kaselimi, M., Temenos, A., Rallis, I., et al. (2022). Using mHealth technologies to promote public health and well-being in urban areas with blue-green solutions. Stud. Health Technol. Inform. 295, 566–569. doi: 10.3233/SHTI220791
García, J. A., Paterniti, D. A., Romano, P. S., and Kravitz, R. L. (2003). Patient preferences for physician characteristics in university-based primary care clinics. Ethn. Dis. 13, 259–267.
Gavarkovs, A. G. (2019). Behavioral counseling training for primary care providers: immersive virtual simulation as a training tool. Front. Public Health 7:116. doi: 10.3389/fpubh.2019.00116
Giambattista, A., Teixeira, L., Ayanoğlu, H., Saraiva, M., and Duarte, E. (2016). “Expression of emotions by a service robot: a pilot study,” in Design, user experience, and usability: technological contexts. DUXU 2016, Vol. 9748, ed. A. Marcus (Cham: Springer). doi: 10.1007/978-3-319-40406-6_31
Gilbert, P. (2014). The origins and nature of compassion focused therapy. Br. J. Clin. Psychol. 53, 6–41. doi: 10.1111/bjc.12043
Gilbert, P. (2019). Explorations into the nature and function of compassion. Curr. Opin. Psychol. 28, 108–114. doi: 10.1016/j.copsyc.2018.12.002
Gilbert, P. (2021). Compassion and safe relating series. Available online at: https://www.youtube.com/playlist?list=PL15t-W6V_jQi75LGjh3Q-4bhgytLO_Hml (accessed December 14, 2022).
Gillespie, G. L., Farra, S., Regan, S. L., and Brammer, S. V. (2021). Impact of immersive virtual reality simulations for changing knowledge, attitudes, and behaviors. Nurse Educ. Today 105:105025. doi: 10.1016/j.nedt.2021.105025
Goetz, J., Keltner, D., and Simon-Thomas, E. (2010). Compassion: an evolutionary analysis and empirical review. Psychol. Bull. 136, 351–374. doi: 10.1037/a0018807
Goldberg, M. J. (2020). Compassionate care: making it a priority and the science behind it. J. Pediatr. Orthop. 40, S4–S7. doi: 10.1097/BPO.0000000000001502
Gray, A., and Cox, J. (2015). The roots of compassion and empathy: implementing the Francis report and the search for new models of health care. Eur. J. Person Centered Health. 3, 122–130. doi: 10.5750/ejpch.v3i1.962
Greinacher, A., Helaß, M., Nikendei, C., Müller, A., Mulfinger, N., Gündel, H., et al. (2022). The impact of personality on intention to leave the nursing profession: a structural equation model. J. Clin. Nurs. 31, 1570–1579. doi: 10.1111/jocn.16010
Grekin, E. R., Beatty, J. R., and Ondersma, S. J. (2019). Mobile health interventions: exploring the use of common relationship factors. JMIR mHealth 7:e11245. doi: 10.2196/11245
Groza, H. L., Sebesi, S. B., and Mandru, D. S. (2017). “Age simulation suits for training, research and development,” in Proceedings of the International Conference on Advancements of Medicine and Health Care through Technology, 12–15 October 2016, Cluj-Napoca, Romania (Cham: Springer). doi: 10.1007/978-3-319-52875-5_17
Grzybowski, A., Brona, P., Lim, G., Ruamviboonsuk, P., Tan, G. S., Abramoff, M., et al. (2020). Artificial intelligence for diabetic retinopathy screening: a review. Eye 34, 451–460. doi: 10.1038/s41433-019-0566-0
Gu, J., Cavanagh, K., Baer, R., and Strauss, C. (2017). An empirical examination of the factor structure of compassion. PLoS One 12:e0172471. doi: 10.1371/journal.pone.0172471
Guetterman, T. C., Kron, F. W., Campbell, T. C., Scerbo, M. W., Zelenski, A. B., Cleary, J. F., et al. (2017). Initial construct validity evidence of a virtual human application for competency assessment in breaking bad news to a cancer patient. Adv. Med. Educ. Pract. 8, 505–512. doi: 10.2147/AMEP.S138380
Guetterman, T. C., Sakakibara, R., Baireddy, S., Kron, F. W., Scerbo, M. W., Cleary, J. F., et al. (2019). Medical students’ experiences and outcomes using a virtual human simulation to improve communication skills: mixed methods study. J. Med. Int. Res. 21:e15459. doi: 10.2196/15459
Hagman, W., Tinghög, G., Dickert, S., Slovic, P., and Västfjäll, D. (2022). Motivated down-regulation of emotion and compassion collapse revisited. Front. Psychol. 13:801150. doi: 10.3389/fpsyg.2022.801150
Håkansson Eklund, J., and Summer Meranius, M. (2021). Toward a consensus on the nature of empathy: a review of reviews. Patient Educ. Couns. 104, 300–307. doi: 10.1016/j.pec.2020.08.022
Halan, S., Sia, I., Crary, M., and Lok, B. (2015). “Exploring the effects of healthcare students creating virtual patients for empathy training,” in Intelligent virtual agents. IVA 2015. Lecture notes in computer science, Vol. 9238, eds W. Brinkman, J. Broekens, and D. Heylen (Cham: Springer). doi: 10.1007/978-3-319-21996-7_24
Harris, J. (2021). Editorial commentary: personalized hip arthroscopy outcome prediction using machine learning-the future is here. Arthroscopy 37, 1498–1502. doi: 10.1016/j.arthro.2021.02.032
Harris, R., Sims, S., Leamy, M., Levenson, R., Davies, N., Brearley, S., et al. (2019). Intentional rounding in hospital wards to improve regular interaction and engagement between nurses and patients: a realist evaluation. Southampton (UK): NIHR Journals Library.
Hayakawa, J., Barrows, J., See, S., and Schomberg, J. (2022). Effects of classical music virtual reality on pediatric healthcare worker compassion fatigue. J. Nurs. Administr. 52, 280–285. doi: 10.1097/NNA.0000000000001148
Hazarika, I. (2020). Artificial intelligence: opportunities and implications for the health workforce. Int. Health 12, 241–245. doi: 10.1093/inthealth/ihaa007
He, L., Basar, E., Wiers, R. W., Antheunis, M. L., and Krahmer, E. (2022). Can chatbots help to motivate smoking cessation? A study on the effectiveness of motivational interviewing on engagement and therapeutic alliance. BMC Public Health 22:726. doi: 10.1186/s12889-022-13115-x
Hendry, J. (2019). Promoting compassionate care in radiography - what might be suitable pedagogy? A discussion paper. Radiography 25, 269–273. doi: 10.1016/j.radi.2019.01.005
Hernandez, J. (2019). Network diffusion and technology acceptance of a nurse chatbot for chronic disease self-management support: a theoretical perspective. J. Med. Invest. JMI 66, 24–30. doi: 10.2152/jmi.66.24
Herrmann-Werner, A., Loda, T., Zipfel, S., Holderried, M., Holderried, F., and Erschens, R. (2021). Evaluation of a language translation app in an undergraduate medical communication course: proof-of-concept and usability study. JMIR mHealth 9:e31559. doi: 10.2196/31559
Hershberger, P. J., Pei, Y., Crawford, T. N., Neeley, S. M., Wischgoll, T., Patel, D. B., et al. (2022). An interactive game with virtual reality immersion to improve cultural sensitivity in health care. Health Equity. 6, 189–197. doi: 10.1089/heq.2021.0128
Hess, S. P., Levin, M., Akram, F., Woo, K., Andersen, L., Trenkle, K., et al. (2022). The impact and feasibility of a brief, virtual, educational intervention for home healthcare professionals on Parkinson’s disease and related disorders: pilot study of I SEE PD home. BMC Med. Educ. 22:506. doi: 10.1186/s12909-022-03430-7
Hirt, J., and Beer, T. (2020). Use and impact of virtual reality simulation in dementia care education: a scoping review. Nurse Educ. Today. 84:104207. doi: 10.1016/j.nedt.2019.104207
Ho, A. J., Turnbull, J., and Fossat, Y. (2017). Compassion through tele-empathy: technology-mediated symptom transference. Fut. Health. J. 4, 219–220. doi: 10.7861/futurehosp.4-3-219
Hopkins, C. M., Miller, H. N., Brooks, T. L., Mo-Hunter, L., Steinberg, D. M., and Bennett, G. G. (2021). Designing ruby: protocol for a 2-arm, brief, digital randomized controlled trial for internalized weight bias. JMIR Res. Protoc. 10:e31307. doi: 10.2196/31307
Hou, I. C., Lan, M. F., Shen, S. H., Tsai, P. Y., Chang, K. J., Tai, H. C., et al. (2020). The development of a mobile health app for breast cancer self-management support in Taiwan: design thinking approach. JMIR mHealth uHealth 8:e15780. doi: 10.2196/15780
Hutson, E., Kelly, S., and Militello, L. K. (2018). Systematic review of cyberbullying interventions for youth and parents with implications for evidence-based practice. Worldviews Evid. Based Nurs. 15, 72–79. doi: 10.1111/wvn.12257
Inkster, B., Sarda, S., and Subramanian, V. (2018). An empathy-driven, conversational artificial intelligence agent (Wysa) for digital mental well-being: real-world data evaluation mixed-methods study. JMIR mHealth uHealth 6:e12106. doi: 10.2196/12106
Irfan, F. (2021). Artificial intelligence: help or hindrance for family physicians? Pak. J. Med. Sci. 37, 288–291. doi: 10.12669/pjms.37.1.3351
Jacobs, C., and Maidwell-Smith, A. (2022). Learning from 360-degree film in healthcare simulation: a mixed methods pilot. J. Visual Commun. Med. [Epub ahead of print]. doi: 10.1080/17453054.2022.2097059
James, J., Balamurali, B. T., Watson, C. I., et al. (2021). Empathetic speech synthesis and testing for healthcare robots. Int. J. Soc. Rob. 13, 2119–2137. doi: 10.1007/s12369-020-00691-4
Jie, L. J., Jamin, G., Smit, K., Beurskens, A., and Braun, S. (2020). Design of the user interface for “Stappy”, a sensor-feedback system to facilitate walking in people after stroke: a user-centred approach. Disabil. Rehabil. Assist. Technol. 15, 959–967. doi: 10.1080/17483107.2019.1629654
Jiménez-Rodríguez, D., Pérez-Heredia, M., Molero Jurado, M. D. M., Pérez-Fuentes, M. D. C., and Arrogante, O. (2021). Improving humanization skills through simulation-based computers using simulated nursing video consultations. Healthcare 10:37. doi: 10.3390/healthcare10010037
Joda, T., Bornstein, M. M., Jung, R. E., Ferrari, M., Waltimo, T., and Zitzmann, N. U. (2020). Recent trends and future direction of dental research in the digital era. Int. J. Environ. Res. Public Health. 17:1987. doi: 10.3390/ijerph17061987
Johanson, D. L., Ahn, H. S., and Broadbent, E. (2021). Improving interactions with healthcare robots: a review of communication behaviours in social and healthcare contexts. Int. J. Soc. Robot. 13, 1835–1850. doi: 10.1007/s12369-020-00719-9
Johanson, D. L., Ahn, H. S., MacDonald, B. A., Ahn, B. K., Lim, J., Hwang, E., et al. (2019). The effect of robot attentional behaviors on user perceptions and behaviors in a simulated health care interaction: randomized controlled trial. J. Med. Int. Res. 21:e13667. doi: 10.2196/13667
Johnston, S. C. (2018). Anticipating and training the physician of the future: the importance of caring in an age of artificial intelligence. Acad. Med. J. Assoc. Am. Med. Coll. 93, 1105–1106. doi: 10.1097/ACM.0000000000002175
Jones, C., Jones, D., and Moro, C. (2021). Use of virtual and augmented reality-based interventions in health education to improve dementia knowledge and attitudes: an integrative review. BMJ Open. 11:e053616. doi: 10.1136/bmjopen-2021-053616
Jones-Schenk, J. (2016). Getting to the root of disparities: social cognition and the affective domain. J. Contin. Educ. Nurs. 47, 443–445. doi: 10.3928/00220124-20160920-04
Jütten, L. H., Mark, R. E., and Sitskoorn, M. M. (2018). Can the mixed virtual reality simulator Into D’mentia enhance empathy and understanding and decrease burden in informal dementia caregivers? Dement. Geriatr. Cogn. Dis. Extra 8, 453–466. doi: 10.1159/000494660
Kelly, M., Nixon, L., Rosenal, T., Crowshoe, L., Harvey, A., Tink, W., et al. (2020). Being vulnerable: a qualitative inquiry of physician touch in medical education. Acad. Med. J. Assoc. Am. Med. Coll. 95, 1893–1899. doi: 10.1097/ACM.0000000000003488
Kemp, J., Zhang, T., Inglis, F., Wiljer, D., Sockalingam, S., Crawford, A., et al. (2020). Delivery of compassionate mental health care in a digital technology-driven age: scoping review. J. Med. Int. Res. 22:e16263. doi: 10.2196/16263
Kennedy, C. M., Powell, J., Payne, T. H., Ainsworth, J., Boyd, A., and Buchan, I. (2012). Active assistance technology for health-related behavior change: an interdisciplinary review. J. Med. Int. Res. 14:e80. doi: 10.2196/jmir.1893
Kerasidou, A. (2020). Artificial intelligence and the ongoing need for empathy, compassion and trust in healthcare. Bull World Health Organ. 98, 245–250. doi: 10.2471/BLT.19.237198
Kerr, D., and Klonoff, D. C. (2019). Digital diabetes data and artificial intelligence: a time for humility not hubris. J. Diab. Sci. Technol. 13, 123–127. doi: 10.1177/1932296818796508
Kerruish, E. (2021). Assembling human empathy towards care robots: the human labor of robot sociality. Emot. Space Soc. 41:100840. doi: 10.1016/j.emospa.2021.100840
Kim, H., and Chun, J. (2022). Effects of a patient experience–based virtual reality blended learning program on nursing students. CIN Comput. Inform. Nurs. 40, 438–446. doi: 10.1097/CIN.0000000000000817
Kim, J. J., Parker, S. L., Doty, J. R., Cunnington, R., Gilbert, P., and Kirby, J. N. (2020). Neurophysiological and behavioural markers of compassion. Sci. Rep. 10:6789. doi: 10.1038/s41598-020-63846-3
Kim, J. W., Kim, S. E., Kim, J. J., Jeong, B., Park, C. H., Son, A. R., et al. (2009). Compassionate attitude towards others’ suffering activates the mesolimbic neural system. Neuropsychologia 47, 2073–2081. doi: 10.1016/j.neuropsychologia.2009.03.017
Kipnis, E., McLeay, F., Grimes, A., de Saille, S., and Potter, S. (2022). Service robots in long-term care: a consumer-centric view. J. Serv. Res. [Epub ahead of print]. doi: 10.1177/10946705221110849
Kocaballi, B., Ijaz, K., Laranjo, L., Quiroz, J., Rezazadegan, D., Ly Tong, H., et al. (2020a). Envisioning an artificial intelligence documentation assistant for future primary care consultations: a co-design study with general practitioners. J. Am. Med. Inform. Assoc. 27, 1695–1704. doi: 10.1093/jamia/ocaa131
Kocaballi, B., Quiroz, J. C., Rezazadegan, D., Berkovsky, S., Magrabi, F., Coiera, E., et al. (2020b). Responses of conversational agents to health and lifestyle prompts: investigation of appropriateness and presentation structures. J. Med. Int. Res. 22:e15823. doi: 10.2196/15823
Konstantinidis, S., Leonardini, L., Stura, C., et al. (2022). “Digital soft skills of healthcare workforce – identification, prioritization and digital training,” in Mobility for smart cities and regional development - challenges for higher education. ICL 2021, eds M. E. Auer, H. Hortsch, O. Michler, and T. Köhler (Cham: Springer), doi: 10.1007/978-3-030-93907-6_117
Koopmann-Holm, B., and Tsai, J. L. (2017). “The cultural shaping of compassion,” in The oxford handbook of compassion science, eds E. M. Seppälä, E. Simon-Thomas, S. L. Brown, M. C. Worline, C. D. Cameron, and J. R. Doty (New York, NY: Oxford University Press). doi: 10.1093/oxfordhb/9780190464684.013.21
Kovalchuk, Y., Budini, E., Cook, R. M., and Walsh, A. (2022). Investigating the relationship between facial mimicry and empathy. Behav. Sci. 12:250. doi: 10.3390/bs12080250
Krieger, J. L., Neil, J. M., Duke, K. A., Zalake, M. S., Tavassoli, F., Vilaro, M. J., et al. (2021). A pilot study examining the efficacy of delivering colorectal cancer screening messages via virtual health assistants. Am. J. Prev. Med. 61, 251–255. doi: 10.1016/j.amepre.2021.01.014
La Torre, G., De Leonardis, V., and Chiappetta, M. (2020). Technostress: how does it affect the productivity and life of an individual? Results of an observational study. Public Health. 189, 60–65. doi: 10.1016/j.puhe
Lansing, J. S. (2003). Complex adaptive systems. Ann. Rev. Anthr. 32, 183–204. doi: 10.1146/annurev.anthro.32.061002.093440
Larson, E. B., and Yao, X. (2005). Clinical empathy as emotional labor in the patient-physician relationship. JAMA 293, 1100–1106. doi: 10.1001/jama.293.9.1100
Law, M., Sutherland, C., Ahn, H. S., MacDonald, B. A., Peri, K., Johanson, D. L., et al. (2019). Developing assistive robots for people with mild cognitive impairment and mild dementia: a qualitative study with older adults and experts in aged care. BMJ Open. 9:e031937. doi: 10.1136/bmjopen-2019-031937
Lee, E. E., Torous, J., De Choudhury, M., Depp, C. A., Graham, S. A., Kim, H. C., et al. (2021). Artificial intelligence for mental health care: clinical applications, barriers, facilitators, and artificial wisdom. Biol. Psychiatry Cogn. Neurosci. Neuroimag. 6, 856–864. doi: 10.1016/j.bpsc.2021.02.001
Lee, M., Ackermans, S., van As, N., Chang, H., Lucas, E., and IJsselsteijn, W. (2019). “Caring for Vincent: a chatbot for self-compassion,” in Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI 2019). doi: 10.1145/3290605.3300932
Leffel, G. M., Oakes Mueller, R. A., Ham, S. A., Karches, K. E., Curlin, F. A., and Yoon, J. D. (2018). Project on the good physician: further evidence for the validity of a moral intuitionist model of virtuous caring. Teach. Learn. Med. 30, 303–316.
Levac, D., Colquhoun, H., and O’Brien, K. K. (2010). Scoping studies: advancing the methodology. Implement. Sci. 5:69. doi: 10.1186/1748-5908-5-69
Levett-Jones, T., Lapkin, S., Govind, N., Pich, J., Hoffman, K., Jeong, S. Y., et al. (2017). Measuring the impact of a ‘point of view’ disability simulation on nursing students’ empathy using the comprehensive state empathy scale. Nurse Educ. Today 59, 75–81. doi: 10.1016/j.nedt.2017.09.007
Levin, S. (2003). Complex adaptive systems: exploring the known, the unknown and the unknowable. Bull. Am. Math. Soc. 40, 3–19. doi: 10.1090/S0273-0979-02-00965-5
Lindner, P. (2021). Better, virtually: the past, present, and future of virtual reality cognitive behavior therapy. Int. J. Cogn. Ther. 14, 23–46. doi: 10.1007/s41811-020-00090-7
Ling, J., Hong, J. C., Hayashi, Y., Yasuda, K., Kitaji, Y., Harashima, H., et al. (2020). A haptic-based perception-empathy biofeedback system with vibration transition: verifying the attention amount. Ann. Int. Conf. IEEE Eng. Med. Biol. Soc. 2020, 3779–3782. doi: 10.1109/EMBC44109.2020.9176213
Liu, B., and Sundar, S. (2018). Should machines express sympathy and empathy? Experiments with a health advice chatbot. Cyberpsychol. Behav. Soc. Netw. 21, 625–636. doi: 10.1089/cyber.2018.0110
Locsin, R. C. (2017). The co-existence of technology and caring in the theory of technological competency as caring in nursing. J. Med. Invest. JMI 64, 160–164. doi: 10.2152/jmi.64.160
Loftus, T. J., Filiberto, A. C., Balch, J., Ayzengart, A. L., Tighe, P. J., Rashidi, P., et al. (2020). Intelligent, autonomous machines in surgery. J. Surg. Res. 253, 92–99. doi: 10.1016/j.jss.2020.03.046
Louie, A. K., Coverdale, J. H., Balon, R., Beresin, E. V., Brenner, A. M., Guerrero, A., et al. (2018). Enhancing empathy: a role for virtual reality? Acad. Psychiatry 42, 747–752. doi: 10.1007/s40596-018-0995-2
Loveys, K., Sagar, M., Billinghurst, M., Saffaryazdi, N., and Broadbent, E. (2022). “Exploring empathy with digital humans,” in Proceedings of the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), 233–237. doi: 10.1109/VRW55335.2022.00055
Lown, B. A. (2016). A social neuroscience-informed model for teaching and practicing compassion in health care. Med. Educ. 50:332. doi: 10.1111/medu.12926
Lown, B. A., Muncer, S. J., and Chadwick, R. (2016). Can compassionate healthcare be measured? The Schwartz Center Compassionate Care Scale™. Patient Educ. Couns. 98, 1005–1010. doi: 10.1016/j.pec.2015.03.019
Lown, B. A., Shin, A., and Jones, R. N. (2019). Can organizational leaders sustain compassionate, patient-centered care and mitigate burnout? J. Health. Manage. Am. Coll. Health. Execut. 64, 398–412. doi: 10.1097/JHM-D-18-00023
Lupton, D. (2017). Digital health: Critical and cross-disciplinary perspectives. Critical approaches to health, 1st Edn. London: Routledge, 178. doi: 10.4324/9781315648835
Ma, Z., Huang, K. T., and Yao, L. (2021). Feasibility of a computer role-playing game to promote empathy in nursing students: the role of immersiveness and perspective. Cyberpsychol. Behav. Soc. Net. 24, 750–755. doi: 10.1089/cyber.2020.0371
Majid, S., Reeves, S., Figueredo, G., Brown, S., Lang, A., Moore, M., et al. (2021). The extent of user involvement in the design of self-tracking technology for bipolar disorder: literature review. JMIR Mental Health 8:e27991. doi: 10.2196/27991
Malenfant, S., Jaggi, P., Hayden, K. A., and Sinclair, S. (2022). Compassion in healthcare: an updated scoping review of the literature. BMC Palliat. Care 21:80. doi: 10.1186/s12904-022-00942-3
Marcoux, A., Tessier, M. H., Grondin, F., Reduron, L., and Jackson, P. L. (2021). Basic, clinical and social perspectives on the use of virtual characters in mental health. Sante Ment Que. 46, 35–70. doi: 10.7202/1081509ar
Márquez Sánchez, S., Mora-Simon, S., Herrera-Santos, J., Roncero, A. O., and Corchado, J. M. (2020). Intelligent dolls and robots for the treatment of elderly people with dementia. ADCAIJ Adv. Distrib. Comput. Artif. Intell. J. 9, 99–112. doi: 10.14201/ADCAIJ20209199112
Martinez-Martin, N., Luo, Z., Kaushal, A., Adeli, E., Haque, A., Kelly, S. S., et al. (2021). Ethical issues in using ambient intelligence in health-care settings. Lancet Digital Health 3, e115–e123. doi: 10.1016/S2589-7500(20)30275-2
Mason, C. (2021). Artificial compassion-from an AI scholar, religion. Available online at: https://www.preprints.org/manuscript/202104.0784/v1 (accessed December 14, 2022).
Mason, C. (2023). Designing artificial compassion technology–Infusing technology with human compassion. Seattle, WA: Amazon Publishing. (in press)
Mays, N., Roberts, E., and Popay, J. (2001). “Synthesising research evidence,” in Studying the organisation and delivery of health services: research methods, eds N. Fulop, P. Allen, A. Clarke, and N. Black (London: Routledge).
McCarthy, S., O’Raghallaigh, P., Woodworth, S., Lim, Y. Y., Kenny, L. C., and Adam, F. (2020). Embedding the pillars of quality in health information technology solutions using “integrated patient journey mapping” (IPJM): case study. JMIR Hum. Factors. 7:e17416. doi: 10.2196/17416
McLoughlin, C., Patel, K. D., Callaghan, T., and Reeves, S. (2018). The use of virtual communities of practice to improve interprofessional collaboration and education: findings from an integrated review. J. Interprof. Care 32, 136–142. doi: 10.1080/13561820.2017.1377692
Mesko, B. (2017). The role of artificial intelligence in precision medicine. Exp. Rev. Precis. Med. Drug. Dev. 2, 239–241. doi: 10.1080/23808993.2017.1380516
Michael, S. H., Villarreal, P. M., Ferguson, M. F., Wiler, J. L., Zane, R. D., and Flarity, K. (2019). Virtual reality-based resilience programs: feasibility and implementation for inpatient oncology nurses. Clin. J. Oncol. Nurs. 23, 664–667. doi: 10.1188/19.CJON.664-667
Mid Staffordshire NHS Foundation Trust Public Inquiry (2013). Report of the Mid Staffordshire NHS Foundation Trust public inquiry: executive summary. Available online at: https://www.gov.uk/government/publications/report-of-the-mid-staffordshire-nhs-foundation-trust-public-inquiry (accessed December 14, 2022).
Milcent, A., Kadri, A., and Richir, S. (2021). Using facial expressiveness of a virtual agent to induce empathy in users. Int. J. Hum. Comput. Int. 38, 240–252. doi: 10.1080/10447318.2021.1938387
Mills, J., Wand, T., and Fraser, J. A. (2017). Palliative care professionals’ care and compassion for self and others: a narrative review. Int. J. Palliat. Nurs. 23, 219–229. doi: 10.12968/ijpn.2017.23.5.219
Miloff, A., Carlbring, P., Hamilton, W., Andersson, G., Reuterskiöld, L., and Lindner, P. (2020). Measuring alliance toward embodied virtual therapists in the era of automated treatments with the virtual therapist alliance scale (VTAS): development and psychometric evaluation. J. Med. Int. Res. 22:e16660. doi: 10.2196/16660
Mirkovic, J., Jessen, S., Kristjansdottir, O. B., Krogseth, T., Koricho, A. T., and Ruland, C. M. (2018). Developing technology to mobilize personal strengths in people with chronic illness: positive codesign approach. JMIR Format. Res. 2:e10774. doi: 10.2196/10774
Mitchell, S., Heyden, R., Heyden, N., Schroy, P., Andrew, S., Sadikova, E., et al. (2011). A pilot study of motivational interviewing training in a virtual world. J. Med. Int. Res. 13:e77. doi: 10.2196/jmir.1825
Montayre, J. (2018). Nursing the future: braving possibilities, challenges and dilemmas. Nurs. Praxis New Zealand 34, 5–6. doi: 10.36951/NgPxNZ.2018.001
Montemayor, C., Halpern, J., and Fairweather, A. (2021). In principle obstacles for empathic AI: why we can’t replace human empathy in healthcare. AI Soc. [Epub ahead of print]. doi: 10.1007/s00146-021-01230-z
Moosaei, M., Das, S. K., Popa, D. O., and Riek, L. D. (2017). “Using facially expressive robots to calibrate clinical pain perception,” in Proceedings of the 2017 12th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 32–41. doi: 10.1145/2909824.3020216
Morley, J., Machado, C. C., Burr, C., Cowls, J., Joshi, I., Taddeo, M., et al. (2020). The ethics of AI in health care: a mapping review. Soc. Sci. Med. 260:113172. doi: 10.1016/j.socscimed.2020.113172
Nadin, M. (2020). Aiming AI at a moving target: health (or disease). AI Soc. 2020, 1–9. doi: 10.1007/s00146-020-00943-x
Navarrete, J., Martínez-Sanchis, M., Bellosta-Batalla, M., Baños, R., Cebolla, A., and Herrero, R. (2021). Compassionate embodied virtual experience increases the adherence to meditation practice. Appl. Sci. 11:1276. doi: 10.3390/app11031276
Nichols, J. A., Herbert Chan, H. W., and Baker, M. A. (2019). Machine learning: applications of artificial intelligence to imaging and diagnosis. Biophys. Rev. 11, 111–118. doi: 10.1007/s12551-018-0449-9
Nieto, I., Velasco, M., and Miranda, C. (2021). Tacit engagement using tablet-mediated learning for social good. AI Soc. 2021, 1–5.
Nisha, N., Iqbal, M., and Rifat, A. (2019). The changing paradigm of health and mobile phones: an innovation in the health care system. J. Glob. Inf. Manag. 27, 19–46.
O’Gara, G., Murray, L., Georgopoulou, S., Anstiss, T., Macquarrie, A., Wheatstone, P., et al. (2022). Safe space: what is the feasibility and acceptability of a codesigned virtual reality intervention, incorporating compassionate mind training, to support people undergoing cancer treatment in a clinical setting? BMJ Open. 12:e047626. doi: 10.1136/bmjopen-2020-047626
Oh, K. J., Lee, D., Ko, B., Hyeon, J., and Choi, H. J. (2017). Empathy bot: conversational service for psychiatric counseling with chat assistant. Stud. Health Technol. Inform. 245:1235.
Okoli, C., Seng, S., Otachi, J. K., Higgins, J. T., Lawrence, J., Lykins, A., et al. (2020). A cross-sectional examination of factors associated with compassion satisfaction and compassion fatigue across healthcare workers in an academic medical centre. Int. J. Mental Health Nurs. 29, 476–487. doi: 10.1111/inm.12682
Osis, F. (2021). “Inform the head, give dexterity to the hand, familiarise the heart”: seeing and using digitised eighteenth-century specimens in a modern medical curriculum. Adv. Exp. Med. Biol. 1317, 163–179. doi: 10.1007/978-3-030-61125-5_9
Ostherr, K. (2022). Artificial intelligence and medical humanities. J. Med. Hum. 43, 211–232. doi: 10.1007/s10912-020-09636-4
Oszutowska-Mazurek, D., Fastowicz, J., and Mazurek, P. (2021). The associations between knowledge and behaviours related to touch screens and microbiological threats among IT students. Int. J. Environ. Res. Public Health 18:9269. doi: 10.3390/ijerph18179269
Pagliari, C. (2021). Digital health and primary care: past, pandemic and prospects. J. Glob Health. 11:01005. doi: 10.7189/jogh.11.01005
Palanica, A., Flaschner, P., Thommandram, A., Li, M., and Fossat, Y. (2019). Physicians’ perceptions of chatbots in health care: cross-sectional web-based survey. J. Med. Int. Res. 21:e12887. doi: 10.2196/12887
Palanica, A., Thommandram, A., and Fossat, Y. (2018). Eliciting clinical empathy via transmission of patient-specific symptoms of Parkinson’s disease. Cogent Psychol. 5:1526459. doi: 10.1080/23311908.2018.1526459
Palgi, S., Klein, E., and Shamay-Tsoory, S. G. (2016). Oxytocin improves compassion toward women among patients with PTSD. Psychoneuroendocrinology 64, 143–149. doi: 10.1016/j.psyneuen.2015.11.008
Palmer, A., and Schwan, D. (2022). Beneficent dehumanization: employing artificial intelligence and carebots to mitigate shame-induced barriers to medical care. Bioethics. 36, 187–193. doi: 10.1111/bioe.12986
Panzarasa, P., Griffiths, C. J., Sastry, N., and De Simoni, A. (2020). Social medical capital: how patients and caregivers can benefit from online social interactions. J. Med. Int. Res. 22:e16337. doi: 10.2196/16337
Papadakos, J., Trang, A., Cyr, A. B., Abdelmutti, N., Giuliani, M. E., Snow, M., et al. (2017). Deconstructing cancer patient information seeking in a consumer health library toward developing a virtual information consult for cancer patients and their caregivers: a qualitative, instrumental case study. JMIR Cancer 3:e6. doi: 10.2196/cancer.6933
Papadopoulos, I., and Ali, S. (2016). Measuring compassion in nurses and other healthcare professionals: an integrative review. Nurse Educ. Pract. 16, 133–139. doi: 10.1016/j.nepr.2015.08.001
Parsons, T. (1982). “Action, symbol, and cybernetic control,” in Structural sociology, ed. I. Rossi (New York: Columbia University Press).
Patel, D., Hawkins, J., Chehab, L. Z., Martin-Tuite, P., Feler, J., Tan, A., et al. (2020). Developing virtual reality trauma training experiences using 360-degree video: tutorial. J. Med. Int. Res. 22:e22420. doi: 10.2196/22420
Pavlova, A., Wang, C. X. Y., Boggiss, A. L., Callaghan, A., and Consedine, N. S. (2022). Predictors of physician compassion, empathy, and related constructs: a systematic review. J. Gen. Int. Med. 37, 900–911. doi: 10.1007/s11606-021-07055-2
Pedersen, I., Reid, S., and Aspevig, K. (2018). Developing social robots for aging populations: a literature review of recent academic sources. Sociol. Compass 12:e12585. doi: 10.1111/soc4.12585
Pedersen, K. Z., and Roelsgaard Obling, A. (2019). Organising through compassion: the introduction of meta-virtue management in the NHS. Sociol. Health Illness 41, 1338–1357. doi: 10.1111/1467-9566.12945
Pepito, J. A., Ito, H., Betriana, F., Tanioka, T., and Locsin, R. C. (2020). Intelligent humanoid robots expressing artificial humanlike empathy in nursing situations. Nurs Philos. 21:e12318. doi: 10.1111/nup.12318
Peters, M. D. J., Godfrey, C., McInerney, P., Munn, Z., Tricco, A. C., and Khalil, H. (2020). “Chapter 11: scoping reviews (2020 version),” in JBI manual for evidence synthesis, eds E. Aromataris and Z. Munn.
Plotzky, C., Lindwedel, U., Sorber, M., Loessl, B., König, P., Kunze, C., et al. (2021). Virtual reality simulations in nurse education: a systematic mapping review. Nurse Educ. Today 101:104868. doi: 10.1016/j.nedt.2021.104868
Portz, J. D., Ford, K. L., Doyon, K., Bekelman, D. B., Boxer, R. S., Kutner, J. S., et al. (2020). Using grounded theory to inform the human-centered design of digital health in geriatric palliative care. J. Pain Symptom Manage. 60, 1181–1192.e1. doi: 10.1016/j.jpainsymman.2020.06.027
Powell, J. (2019). Trust me, I’m a chatbot: how artificial intelligence in health care fails the Turing test. J. Med. Int. Res. 21:e16222. doi: 10.2196/16222
Powell, R. E., Doty, A., Casten, R. J., Rovner, B. W., and Rising, K. L. (2016). A qualitative analysis of interprofessional healthcare team members’ perceptions of patient barriers to healthcare engagement. BMC Health Serv. Res. 16:1–10. doi: 10.1186/s12913-016-1751-5
Price, A. (2013). Caring and technology in an intensive care unit: an ethnographic study. Nurs. Crit. Care 18, 278–288.
Pulman, A., Taylor, J., Galvin, K., and Masding, M. (2013). Ideas and enhancements related to mobile applications to support type 1 diabetes. JMIR Mhealth Uhealth. 1:e12. doi: 10.2196/mhealth.2567
Qu, L., Gao, J., Liu, L., Lun, B., and Chen, D. (2022). Compassion fatigue and compassion satisfaction among Chinese midwives working in the delivery room: a cross-sectional survey. Midwifery 113:103427. doi: 10.1016/j.midw.2022.103427
Rahim, A. I., Ibrahim, M. I., Musa, K. I., Chua, S., and Yaacob, N. (2021). Assessing patient-perceived hospital service quality and sentiment in Malaysian public hospitals using machine learning and Facebook reviews. Int. J. Environ. Res. Public Health 18.
Raja, M., Bjerkan, J., Kymre, I. G., Galvin, K. T., and Uhrenfeldt, L. (2021). Telehealth and digital developments in society that persons 75 years and older in European countries have been part of: a scoping review. BMC Health Serv. Res. 21:1157. doi: 10.1186/s12913-021-07154-0
Raman, R., and McClelland, L. E. (2019). Bringing compassion into information systems research: a research agenda and call to action. J. Inform. Technol. 34, 2–21.
Ramanayake, R., Wicke, P., and Nallur, V. (2022). Immune moral models? Pro-social rule breaking as a moral enhancement approach for ethical AI. AI Soc. 2022, 1–13. doi: 10.1007/s00146-022-01478-z
Riches, S., Iannelli, H., Reynolds, L., Fisher, H. L., Cross, S., and Attoe, C. (2022). Virtual reality-based training for mental health staff: a novel approach to increase empathy, compassion, and subjective understanding of service user experience. Adv. Simulat. 7:19. doi: 10.1186/s41077-022-00217-0
Riva, G., Villani, D., Cipresso, P., Repetto, C., Triberti, S., Di Lernia, D., et al. (2016). Positive and transformative technologies for active ageing. Stud. Health Technol. Inform. 220, 308–315.
Rodgers, R. F., Donovan, E., Cousineau, T., Yates, K., McGowan, K., Cook, E., et al. (2018). BodiMojo: efficacy of a mobile-based intervention in improving body image and self-compassion among adolescents. J. Youth Adolesc. 47, 1363–1372. doi: 10.1007/s10964-017-0804-3
Rosa, W. E. (2014). Nurse as athlete: an antidote for compassion fatigue. Arch. Psychiatric Nurs. 28, 224–225.
Rossi, S., Conti, D., Garramone, F., Santangelo, G., Staffa, M., Varrasi, S., et al. (2020). The role of personality factors and empathy in the acceptance and performance of a social robot for psychometric evaluations. Robotics 9:39.
Roswell, R. O., Cogburn, C. D., Tocco, J., Martinez, J., Bangeranye, C., Bailenson, J. N., et al. (2020). Cultivating empathy through virtual reality: advancing conversations about racism, inequity, and climate in medicine. Acad. Med. 95, 1882–1886. doi: 10.1097/ACM.0000000000003615
Saab, M. M., Landers, M., Murphy, D., O’Mahony, B., Cooke, E., O’Driscoll, M., et al. (2022). Nursing students’ views of using virtual reality in healthcare: a qualitative study. J. Clin. Nurs. 31, 1228–1242. doi: 10.1111/jocn.15978
Sabo, A., Mehdizadeh, S., Iaboni, A., and Taati, B. (2022). Estimating Parkinsonism severity in natural gait videos of older adults with dementia. IEEE J. Biomed. Health Inform. 26, 2288–2298. doi: 10.1109/JBHI.2022.3144917
Salvador Zaragozá, A., Soto-Rubio, A., Lacomba-Trejo, L., Valero-Moreno, S., and Pérez-Marín, M. (2021). Compassion in Spanish-speaking health care: a systematic review. Curr. Psychol. [Epub ahead of print]. doi: 10.1007/s12144-021-01994-z
Sanal, M. G., Paul, K., Kumar, S., and Ganguly, N. K. (2019). Artificial intelligence and deep learning: the future of medicine and medical practice. J. Assoc. Phys. India 67, 71–73.
Sanders, J. J., Caponigro, E., Ericson, J. D., Dubey, M., Duane, J. N., Orr, S. P., et al. (2021). Virtual environments to study emotional responses to clinical communication: a scoping review. Patient. Educ. Couns. 104, 2922–2935. doi: 10.1016/j.pec.2021.04.022
Sarkar, P. P., Tohin, M. A., Khaled, M. A., and Rahman, M. S. (2020). “Implementation of an instrumented crutch with scalable E-care architecture using IoT,” in Proceedings of the 2020 IEEE Region 10 Symposium, 242–245.
Schick, A., Paetzold, I., Rauschenberg, C., Hirjak, D., Banaschewski, T., Meyer-Lindenberg, A., et al. (2021). Effects of a novel, transdiagnostic, hybrid ecological momentary intervention for improving resilience in youth (emicompass): protocol for an exploratory randomized controlled trial. JMIR Res. Proto. 10:e27462. doi: 10.2196/27462
Schmidt, L. I., Schlomann, A., Gerhardy, T., and Wahl, H. (2022). “Aging means to me…that I feel lonely more often”? An experimental study on the effects of age simulation regarding views on aging. Front. Psychol. 13:806233. doi: 10.3389/fpsyg.2022.806233
Scholten, M. R., Kelders, S. M., and Van Gemert-Pijnen, J. E. (2017). Self-guided web-based interventions: scoping review on user needs and the potential of embodied conversational agents to address them. J. Med. Int. Res. 19:e383. doi: 10.2196/jmir.7351
Schork, N. J. (2019). “Artificial intelligence and personalized medicine,” in Precision medicine in cancer therapy (Cham: Springer), 265–283.
Seppälä, E. M., Simon-Thomas, E., Brown, S. L., et al. (2017). The oxford handbook of compassion science. New York, NY: Oxford University Press.
Serpa, J. G., Bourey, C. P., and Adjaoute, G. N. (2021). Mindful self-compassion (MSC) with veterans: a program evaluation. Mindfulness 12, 153–161. doi: 10.1007/s12671-020-01508-1
Shepherd, D., and Majchrzak, A. (2022). Machines augmenting entrepreneurs: opportunities (and threats) at the nexus of artificial intelligence and entrepreneurship. J. Bus. Vent. 37:106227. doi: 10.1016/j.jbusvent.2022.106227
Sheppard, K. (2015). Compassion fatigue among registered nurses: connecting theory and research. Appl. Nurs. Res. ANR 28, 57–59. doi: 10.1016/j.apnr.2014.10.007
Shorey, S., Ang, E., Yap, J., Ng, E. D., Lau, S. T., and Chui, C. K. (2019). A virtual counseling application using artificial intelligence for communication skills training in nursing education: development study. J. Med. Int. Res. 21:e14658. doi: 10.2196/14658
Sikstrom, L., Maslej, M. M., Hui, K., Findlay, Z., Buchman, D. Z., and Hill, S. L. (2022). Conceptualising fairness: three pillars for medical algorithms and health equity. BMJ Health Care Inform 29:e100459. doi: 10.1136/bmjhci-2021-100459
Sinclair, S., Kondejewski, J., Jaggi, P., Dennett, L., des Ordons, A. L., and Hack, T. F. (2021). What is the state of compassion education? A systematic review of compassion training in health care. Acad. Med. J. Assoc. Am. Med. Coll. 96, 1057–1070. doi: 10.1097/ACM.0000000000004114
Sinclair, S., McClement, S., Raffin-Bouchal, S., Hack, T. F., Hagen, N. A., McConnell, S., et al. (2016a). Compassion in health care: an empirical model. J. Pain Symp. Manage. 51, 193–203. doi: 10.1016/j.jpainsymman.2015.10.009
Sinclair, S., Norris, J. M., McConnell, S. J., Chochinov, H. M., Hack, T. F., Hagen, N. A., et al. (2016b). Compassion: a scoping review of the healthcare literature. BMC Palliat Care 15:6. doi: 10.1186/s12904-016-0080-0
Sinclair, S., Russell, L. B., Hack, T. F., Kondejewski, J., and Sawatzky, R. (2017). Measuring compassion in healthcare: a comprehensive and critical review. Patient. 10, 389–405. doi: 10.1007/s40271-016-0209-5
Singh, S. A., Moreland, R. A., Fang, W., Shaikh, P., Perez, J. M., Morris, A. M., et al. (2021). Compassion inequities and opioid use disorder: a matched case-control analysis examining inpatient management of cancer-related pain for patients with opioid use disorder. J. Pain Symp. Manage. 62, e156–e163. doi: 10.1016/j.jpainsymman.2021.05.002
Slater, P., Hasson, F., Gillen, P., Gallen, A., and Parlour, R. (2019). Virtual simulation training: imaged experience of dementia. Int. J. Older People Nurs. 14:e12243. doi: 10.1111/opn.12243
Slomian, J., Emonts, P., Vigneron, L., Acconcia, A., Reginster, J. Y., Oumourgh, M., et al. (2017). Meeting the needs of mothers during the postpartum period: using co-creation workshops to find technological solutions. JMIR Res. Protoc. 6:e76. doi: 10.2196/resprot.6831
Srivastava, T. K., and Waghmare, L. S. (2020). Implications of artificial intelligence (AI) on dynamics of medical education and care: a perspective. J. Clin. Diagn. Res. 2020:2249.
Stargatt, J., Bhar, S., Petrovich, T., Bhowmik, J., Sykes, D., and Burns, K. (2021). The effects of virtual reality-based education on empathy and understanding of the physical environment for dementia care workers in Australia: a controlled study. J. Alzheimers Dis. 84, 1247–1257. doi: 10.3233/JAD-210723
Stein, N., and Brooks, K. (2017). A fully automated conversational artificial intelligence for weight loss: longitudinal observational study among overweight and obese adults. JMIR Diab. 2:e28. doi: 10.2196/diabetes.8590
Stenberg, J. H., Joutsenniemi, K., and Holi, M. (2015). Nettiterapiat - mitä tiedetään toimivuudesta [online therapies - what is known about their functionality]. Duodecim Laaketieteellinen Aikakauskirja 131, 1297–1301.
Stenhouse, R., Ion, R., Roxburgh, M., Devitt, P. F., and Smith, S. D. (2016). Exploring the compassion deficit debate. Nurse Educ. Today 39, 12–15. doi: 10.1016/j.nedt.2016.01.019
Sterkenburg, P. S., and Vacaru, V. S. (2018). The effectiveness of a serious game to enhance empathy for care workers for people with disabilities: a parallel randomized controlled trial. Disabil Health J. 11, 576–582. doi: 10.1016/j.dhjo.2018.03.003
Stevenson, M. C., Schaefer, C. T., and Ravipati, V. M. (2022). COVID-19 patient care predicts nurses’ parental burnout and child abuse: mediating effects of compassion fatigue. Child Abuse Neglect 130:105458. doi: 10.1016/j.chiabu.2021.105458
Straughair, C. (2019). Cultivating compassion in nursing: a grounded theory study to explore the perceptions of individuals who have experienced nursing care as patients. Nurse Educ. Pract. 35, 98–103. doi: 10.1016/j.nepr.2019.02.002
Strauss, C., Lever Taylor, B., Gu, J., Kuyken, W., Baer, R., Jones, F., et al. (2016). What is compassion and how can we measure it? A review of definitions and measures. Clin. Psychol. Rev. 47, 15–27. doi: 10.1016/j.cpr.2016.05.004
Strudwick, G., Impey, D., Torous, J., Krausz, R. M., and Wiljer, D. (2020). Advancing E-mental health in Canada: report from a multistakeholder meeting. JMIR Ment. Health 7:e19360. doi: 10.2196/19360
Su, J. J., Paguio, J. T., Masika, G. M., Wang, M., and Redding, S. R. (2021). Learning compassionate care: experiences of nursing students. Nurse Educ. Pract. 53:103092. doi: 10.1016/j.nepr.2021.103092
Sukhera, J., and Poleksic, J. (2021). Adapting compassion education through technology-enhanced learning: an exploratory study. Acad. Med. 96, 1013–1020. doi: 10.1097/ACM.0000000000003915
Sundus, A., Younas, A., Fakhar, J., and Sughra, U. (2020). Pakistani nursing students’ perspectives of compassion: a convergent mixed methods study. J. Profes. Nurs. Offi. J. Am. Assoc. Coll. Nurs. 36, 698–706. doi: 10.1016/j.profnurs.2020.09.014
Sung, H. C., Su, H. F., Lee, W. L., Yamakawa, M., and Wang, H. M. (2022). Effects of a dementia virtual reality-based training with peer support for home care workers: a cluster randomized controlled trial. Int. J. Geriat. Psychiatry 37:5799. doi: 10.1002/gps.5799
Suresh, H., and Guttag, J. (2021). A framework for understanding sources of harm throughout the machine learning life cycle. arXiv [Preprint]. arXiv:1901.10002.
Swendiman, R. A., Marcaccio, C. L., Han, J., Hoffman, D. I., Weiner, T. M., Nance, M. L., et al. (2019). Attitudes and habits of highly humanistic surgeons: a single-institution, mixed-methods study. Acad. Med. J. Assoc. Am. Med. Coll. 94, 1027–1032. doi: 10.1097/ACM.0000000000002690
Tanioka, T. (2019). Nursing and rehabilitative care of the elderly using humanoid robots. J. Med. Invest. JMI 66, 19–23. doi: 10.2152/jmi.66.19
Tanioka, T., Yasuhara, Y., Dino, M., Kai, Y., Locsin, R. C., and Schoenhofer, S. O. (2019). Disruptive engagements with technologies, robotics, and caring: advancing the transactive relationship theory of nursing. Nurs. Administr. Quart. 43, 313–321. doi: 10.1097/NAQ.0000000000000365
Tanioka, T., Yokotani, T., Tanioka, R., Betriana, F., Matsumoto, K., Locsin, R., et al. (2021). Development issues of healthcare robots: compassionate communication for older adults with dementia. Int. J. Environ. Res. Public Health 18:4538. doi: 10.3390/ijerph18094538
Tehranineshat, B., Rakhshan, M., Torabizadeh, C., and Fararouei, M. (2019). Compassionate care in healthcare systems: a systematic review. J. Natl. Med. Assoc. 111, 546–554. doi: 10.1016/j.jnma.2019.04.002
Terry, C., and Cain, J. (2016). The emerging issue of digital empathy. Am. J. Pharm. Educ. 80:58. doi: 10.5688/ajpe80458
Terry, L., Newham, R., Hahessy, S., Atherley, S., Babenko-Mould, Y., Evans, M., et al. (2017). A research-based mantra for compassionate caring. Nurse Educ. Today 58, 1–11. doi: 10.1016/j.nedt.2017.07.012
Thomas, P., and Hazif-Thomas, C. (2020). Souffrance compassionnelle et fatigue d’empathie [compassionate suffering and empathy fatigue]. Soins Gerontol. 25, 29–32. doi: 10.1016/j.sger.2020.01.008
Tiersen, F., Batey, P., Harrison, M., Naar, L., Serban, A. I., Daniels, S., et al. (2021). Smart home sensing and monitoring in households with dementia: user-centered design approach. JMIR Aging 4:e27047. doi: 10.2196/27047.
Tong, F., Lederman, R., D’Alfonso, S., Berry, K., and Bucci, S. (2022). Digital therapeutic alliance with fully automated mental health smartphone apps: a narrative review. Front. Psychiatry 13:819623. doi: 10.3389/fpsyt.2022.819623
Topol, E. (2019). Deep medicine: how artificial intelligence can make healthcare human again. New York: Basic Books.
Torous, J. B., Chan, S. R., Gipson, S. Y. M. T., Kim, J. W., Nguyen, T. Q., Luo, J., et al. (2018). A hierarchical framework for evaluation and informed decision making regarding smartphone apps for clinical care. Psychiatric Serv. 69, 498–500.
Torrence, C., Bhanu, A., Bertrand, J., Dye, C., Truong, K., and Madathil, K. C. (2022). Preparing future health care workers for interactions with people with dementia: a mixed methods study. Gerontol. Geriatr. Educ. 2022, 1–20. doi: 10.1080/02701960.2022.2042805
Tricco, A. C., Lillie, E., Zarin, W., O’Brien, K. K., Colquhoun, H., Levac, D., et al. (2018). PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann. Intern. Med. 169, 467–473. doi: 10.7326/M18-0850
Trzeciak, S., Roberts, B. W., and Mazzarelli, A. J. (2017). Compassionomics: hypothesis and experimental approach. Med. Hypoth. 107, 92–97. doi: 10.1016/j.mehy.2017.08.015
Tschandl, P. (2021). Risk of bias and error from data sets used for dermatologic artificial intelligence. JAMA Dermatol. 157, 1271–1273. doi: 10.1001/jamadermatol.2021.3128
Unjai, S., Forster, E. M., Mitchell, A. E., and Creedy, D. K. (2022). Compassion satisfaction, resilience and passion for work among nurses and physicians working in intensive care units: a mixed method systematic review. Intensive Crit. Care Nurs. 71:103248. doi: 10.1016/j.iccn.2022.103248
Uygur, J., Brown, J. B., and Herbert, C. (2019). Understanding compassion in family medicine: a qualitative study. Br. J. Gen. Pract. 69, e208–e216. doi: 10.3399/bjgp19X701285
Valtolina, S., and Hu, L. (2021). “Charlie: a chatbot to improve the elderly quality of life and to make them more active to fight their sense of loneliness,” in Proceedings of CHItaly 2021: 14th Biannual Conference of the Italian SIGCHI Chapter. doi: 10.1145/3464385.3464726
van der Lubbe, L. M., Groot, N., and Gerritsen, C. (2022). “Using topic modelling to personalise a digital self-compassion training,” in Pervasive Computing Technologies for Healthcare - 15th EAI International Conference, Pervasive Health 2021, Proceedings, eds H. Lewy and R. Barkan.
van Pelt, B., Nijman, S., van Haren, N., Veling, W., Pijnenborg, G., van Balkom, I., et al. (2022). Dynamic interactive social cognition training in virtual reality (DiSCoVR) for adults with autism spectrum disorder: a feasibility study. Res. Autism Spectr. Dis. 96:102003. doi: 10.1016/j.rasd.2022.102003
van Rijn, B., Cooper, M., Jackson, A. P., and Wild, C. (2017). Avatar-based therapy within prison settings: pilot evaluation. Br. J. Guid. Couns. 45, 268–283.
Verma, A. A., Murray, J., Greiner, R., Cohen, J. P., Shojania, K. G., Ghassemi, M., et al. (2021). Implementing machine learning in medicine. CMAJ Can. Med. Assoc. J. Assoc. Med. Can. 193, E1351–E1357. doi: 10.1503/cmaj.202434
Visram, S., Leyden, D., Annesley, O., Bappa, D., and Sebire, N. J. (2022). Engaging children and young people on the potential role of artificial intelligence in medicine. Pediatr. Res. [Epub ahead of print]. doi: 10.1038/s41390-022-02053-4
Walker, R. C., Tong, A., Howard, K., and Palmer, S. C. (2020). Clinicians’ experiences with remote patient monitoring in peritoneal dialysis: a semi-structured interview study. Peritoneal Dialy. Int. J. Int. Soc. Peritoneal Dialy. 40, 202–208. doi: 10.1177/0896860819887638
Walter, P., Saslow, L., and Saturn, S. (2015). Autonomic and prefrontal events during moral elevation. Biol. Psychol. 108:4. doi: 10.1016/j.biopsycho.2015.03.004
Wang, C. X. Y., Pavlova, A., Antonio, T. F., and Consedine, N. S. (2022). Beyond empathy decline: do the barriers to compassion change across medical training? Adv. Health Sci. Educ. 27, 521–536. doi: 10.1007/s10459-022-10100-2
Wartman, S. A. (2019). The empirical challenge of 21st-century medical education. Acad. Med. J. Assoc. Am. Med. Coll. 94, 1412–1415. doi: 10.1097/ACM.0000000000002866
Wartman, S. A., and Combs, C. D. (2019). Reimagining medical education in the age of AI. AMA J. Ethics 21, E146–E152. doi: 10.1001/amajethics.2019.146
Weil-Dubuc, P. L. (2019). Big data: amélioration technique, dégradation ou transformation du modèle de solidarité ? [big data: technical improvement, degradation or transformation of the solidarity model?]. Revue Epidemiol. Sante Publi. 67, S19–S23. doi: 10.1016/j.respe.2018.12.060
West, M., Bailey, S., and Williams, E. (2020). The courage of compassion: supporting nurses and midwives to deliver high-quality care. Available online at: https://www.kingsfund.org.uk/sites/default/files/2020-09/The%20courage%20of%20compassion%20summary_web_0.pdf (accessed December 14, 2022).
White, D., and Katsuno, H. (2019). “Cultural anthropology for social emotion modeling: principles of application toward diversified social signal processing,” in Proceedings of the 2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), 368–373.
Wieseke, J., Geigenmüller, A., and Kraus, F. (2012). On the role of empathy in customer-employee interactions. J. Serv. Res. 15, 316–331.
Wiljer, D., Charow, R., Costin, H., Sequeira, L., Anderson, M., Strudwick, G., et al. (2019). Defining compassion in the digital health age: protocol for a scoping review. BMJ Open. 9:e026338. doi: 10.1136/bmjopen-2018-026338
Willems, E., Vermeulen, J., van Haastregt, J., and Zijlstra, G. (2021). Technologies to improve the participation of stroke patients in their home environment. Disabil. Rehabil. 2021, 1–11. doi: 10.1080/09638288.2021.1983041
Wilson-Howard, D., Vilaro, M. J., Neil, J. M., Cooks, E. J., Griffin, L. N., Ashley, T. T., et al. (2021). Development of a credible virtual clinician promoting colorectal cancer screening via telehealth apps for and by black men: qualitative study. JMIR Format. Res. 5:e28709. doi: 10.2196/28709
Wood, A. E., Prins, A., Bush, N. E., Hsia, J. F., Bourn, L. E., Earley, M. D., et al. (2017). Reduction of burnout in mental health care providers using the provider resilience mobile application. Commun. Ment Health J. 53, 452–459. doi: 10.1007/s10597-016-0076-5
Wu, K., Liu, C., Taylor, S., Atkins, P. W., and Calvo, R. A. (2017). “Automatic mimicry detection in medical consultations,” in Proceedings of the 2017 IEEE Life Sciences Conference (LSC), 55–58.
Wu, Y. J. A., Lan, Y. J., Huang, S. B. P., and Lin, Y. T. R. (2019). Enhancing medical students’ communicative skills in a 3D virtual world. Educ. Technol. Soc. 22, 18–32.
Xiao, B., Imel, Z. E., Georgiou, P. G., Atkins, D. C., and Narayanan, S. S. (2015). “Rate my therapist”: automated detection of empathy in drug and alcohol counseling via speech and language processing. PLoS One 10:e0143055. doi: 10.1371/journal.pone.0143055
Yaghy, A., Shields, J. A., and Shields, C. L. (2019). Representing communication, compassion, and competence in the era of AI. AMA J. Ethics 21, E1009–E1013. doi: 10.1001/amajethics.2019.1009
Yang, H. C., Rahmanti, A. R., Huang, C. W., and Li, Y. J. (2022). How can research on artificial empathy be enhanced by applying deepfakes? J. Med. Int. Res. 24:e29506. doi: 10.2196/29506
Yao, H., de Siqueira, A. G., Foster, A., Galynker, I., and Lok, B. C. (2020). “Toward automated evaluation of empathetic responses in virtual human interaction systems for mental health scenarios,” in Proceedings of the 20th ACM international conference on intelligent virtual agents.
Yokoo, K., Atsumi, M., Tanaka, K., Wang, H., and Meng, L. (2020). “Deep learning based emotion recognition IoT system,” in Proceedings of the 2020 International Conference on Advanced Mechatronic Systems (ICAMechS), 203–207. doi: 10.1109/ICAMechS49982.2020.9310135
Yun, J. H., Lee, E., and Kim, D. H. (2021). Behavioral and neural evidence on consumer responses to human doctors and medical artificial intelligence. Psychol. Mark. 38, 610–625.
Zelmer, J., van Hoof, K., Notarianni, M., van Mierlo, T., Schellenberg, M., and Tannenbaum, C. (2018). An assessment framework for e-mental health apps in Canada: results of a modified Delphi process. JMIR mHealth uHealth 6:e10016. doi: 10.2196/10016
Zhang, Z., Citardi, D., Wang, D., Genc, Y., Shan, J., and Fan, X. (2021). Patients’ perceptions of using artificial intelligence (AI)-based technology to comprehend radiology imaging data. Health Inform. J. 27:14604582211011215.
Zheng, F., Zheng, Y., Liu, S., Yang, J., Xiao, W., Xiao, W., et al. (2022). The effect of m-health-based core stability exercise combined with self-compassion training for patients with nonspecific chronic low back pain: a randomized controlled pilot study. Pain Ther. 11, 511–528. doi: 10.1007/s40122-022-00358-0
Keywords: artificial intelligence (AI), compassion, compassionate healthcare, empathy, healthcare technology
Citation: Morrow E, Zidaru T, Ross F, Mason C, Patel KD, Ream M and Stockley R (2023) Artificial intelligence technologies and compassion in healthcare: A systematic scoping review. Front. Psychol. 13:971044. doi: 10.3389/fpsyg.2022.971044
Received: 16 June 2022; Accepted: 05 December 2022;
Published: 17 January 2023.
Edited by:
Leon Sterling, Swinburne University of Technology, Australia
Reviewed by:
Jamin J. Day, The University of Newcastle, Australia
Lauri Lahti, Aalto University, Finland
Copyright © 2023 Morrow, Zidaru, Ross, Mason, Patel, Ream and Stockley. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Teodor Zidaru, t.m.zidaru-barbulescu@lse.ac.uk