- 1 Philips Research, Eindhoven, Netherlands
- 2 Department of Industrial Engineering and Innovation Sciences, Eindhoven University of Technology, Eindhoven, Netherlands
- 3 Jheronimus Academy of Data Science, ’s-Hertogenbosch, Netherlands
- 4 College of Human Sciences, Bangor University, Bangor, United Kingdom
Clinical Decision Support (CDS) aims to help physicians optimize their decisions. However, as each patient is unique in their characteristics and preferences, it is difficult to define the optimal outcome. Human physicians should retain autonomy over their decisions, to ensure that tradeoffs are made in a way that fits the unique patient. We tend to think of autonomy as the absence of influence on decision-making. However, since CDS aims to improve decision-making, its very purpose is to influence it. We advocate for an alternative notion of autonomy: enabling the physician to make decisions in accordance with their professional goals and values and the goals and values of the patient. This perspective retains the role of autonomy as a gatekeeper for safeguarding other human values, while letting go of the idea that CDS should not influence the physician in any way. Rather than trying to refrain from incorporating human values into CDS, we should instead aim for a value-aware CDS that actively supports the physician in considering tradeoffs in human values. We suggest a conversational AI approach to enable the CDS to become value-aware, and the use of story structures to help the user integrate facts and data-driven learnings provided by the CDS with their own value judgements in a natural way.
Introduction
This conceptual analysis article presents a rationale for the investigation of a value-aware Clinical Decision Support (CDS) system in the critical care environment. We argue that in order to respect autonomy, rather than refraining from making any value judgements, CDS in the critical care environment should support the critical care team in making decisions in line with their professional values and the personal values of the patient. We do so by bringing together a philosophical discussion of the meaning of autonomy, perspectives on the future of user interaction with AI, different ethical views on medical decision-making, and the psychology of decision-making, and by envisioning how these could be applied to AI-based CDS within the critical care domain.
The Added Value of Clinical Decision Support
Human judgement and decision-making, even by experts, demonstrate bias (e.g., Kahneman et al., 1974; Kahneman, 2011). In the psychology of human judgement and decision-making, the word bias is used to indicate that a judgement is not in accordance with the facts, or that a decision is suboptimal from a utilitarian perspective that aims to optimize the expected outcome in quantifiable terms (e.g., to optimize survival or a cost-benefit analysis of the expected outcome of a treatment).
Evidence that such biases are present in medical decision-making regarding the diagnosis and treatment of patients is abundant (Saposnik et al., 2016). In the field of critical care, aspects such as time pressure, complex and scattered medical information and distribution of care over a team of care givers are likely to contribute to errors and biases in decision-making.
Computerized decision support can help to make better decisions as measured according to the utilitarian principle. When the outcome of a decision can be clearly quantified, as is the case with for example, monetary decisions, intelligent decision support based on data analytics can provide clear and actionable recommendations to help optimize the expected outcome.
The field of health care, where decisions often have great impact on human lives, also strives to make decisions that optimize outcomes. Evidence-based medicine (EBM) is a movement that promotes the use of data-driven learnings from clinical research and clinical practice to ground judgements and decisions more solidly in empirical evaluation of past results (Klein et al., 2016). Data analysis can help prevent errors in judgement of, for example, the likelihood of a certain diagnosis, and it can help optimize outcomes of treatment decisions that are easily quantifiable across certain populations. CDS can help physicians apply the results of such data analysis to their individual patients, whether by giving predicted outcomes or the likelihood of a diagnosis, by filtering and organizing data in a certain way, or by translating the data into suggested actions based on sets of knowledge-based rules. For a more extensive overview of types of CDS systems and their potential benefits, as well as the risks associated with their use in terms of their impact on human behavior, we refer to Sutton et al. (2020).
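As a minimal illustration of the last of these forms, the sketch below shows a single knowledge-based rule translating raw patient data into a suggested action. It is a sketch under assumed inputs: the field names, thresholds and suggested action are hypothetical and not drawn from any clinical guideline.

```python
from typing import Optional

# Minimal sketch of a single knowledge-based CDS rule. The field names,
# thresholds and suggested action are hypothetical, chosen for
# illustration only; they are not taken from any clinical guideline.

def suggest_action(vitals: dict) -> Optional[str]:
    """Translate raw patient data into a suggested action via one simple rule."""
    # Hypothetical rule: flag a possible sepsis work-up when the heart rate
    # is elevated while systolic blood pressure is low.
    if vitals.get("heart_rate", 0) > 100 and vitals.get("systolic_bp", 999) < 90:
        return "Consider sepsis work-up"
    return None  # no rule fired; the decision is left entirely to the physician

print(suggest_action({"heart_rate": 112, "systolic_bp": 85}))
# -> Consider sepsis work-up
```

Real CDS systems chain many such rules over far richer data; the point here is only the shape of the translation from data to suggested action.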
The Role of the Human Physician: Why Clinical Decision Support Should Respect Autonomy, but Might Not
At the same time, the movement of personalized medicine advocates for a closer evaluation of each patient’s unique individual case rather than optimizing outcomes on the basis of a population (Capurso, 2018). Falzer (2018) indeed notes that while rigorous analysis of data from the past can give us insight into the estimated effectiveness of a treatment, the practical expertise acquired by a physician still puts them in the best position to assess the appropriateness of the treatment for a specific (and often complex) individual patient with idiosyncrasies in their health or personal situation.
Compared to computer-based, intelligent decision support, a human physician is still better able to empathize with the patient and, preferably in conversation with the patient, to make tradeoffs among the multitude of outcomes associated with a treatment, including the treatment’s effectiveness, the impact of side effects, and how the expected results of the treatment will further affect the patient’s life. Treatment of prostate cancer, for example, requires tradeoffs to be made with consideration for the patient’s personal situation. The survival rate of prostate cancer patients is quite high and there are multiple treatment options with different side effects. Drugs, surgery and radiation have different expected impacts on post-treatment urinary, bowel and sexual function. The treatment and its side effects affect the patient’s further course of life, but also that of the patient’s family, especially the partner (De Vincentis et al., 2018). From a utilitarian perspective, it is very difficult to define the optimal treatment for a prostate cancer patient, as it is difficult to quantify the effects of the different side effects on their lives.
In the field of critical care there are also tradeoffs and ethical considerations to be made that are difficult to approach from a utilitarian perspective. These include if and when to stop treatment for patients with poor prognoses, how to deal with pain and pain medication, and how to weigh the risks against the benefits of invasive interventions in already fragile patients. These considerations should be made based on an analysis of facts and risks and computer-based, intelligent decision support can help provide a better understanding of these facts and risks. However, less easily measurable and quantifiable aspects such as physical and mental suffering should also play an important role in making these tradeoffs. Furthermore, treatment decisions impact not only the patient, but also their families and on a larger scale, tradeoffs need to be made on a societal level of how to distribute the resources required to care for critical patients.
As these considerations are less easily quantifiable, they are difficult for computer-based, intelligent decision support to take into account. The current view on the design of CDS is therefore that it should refrain from making ethical considerations and leave these up to the health care professional.
This means that CDS is typically designed in such a way that it does not take over the decision-making from the physician, but rather that it informs the physician and leaves the decision-making up to them. This is seen as leaving the physician in charge, in other words, such CDS respects human autonomy. The physician being in charge, or having autonomy, is then viewed as a way to ensure that other human values are taken into account appropriately in the decision-making. Autonomy is the gatekeeper for other human values.
We will argue here that simply “leaving the physician in charge” by requiring them to make the final decision does not necessarily mean that their autonomy is respected. And if autonomy is the gatekeeper for other human values, this may consequently lead to violating other human values as well. Our argument starts with a careful analysis of the definition of autonomy, especially in the context of medical decision-making.
A Working Definition of Autonomy in the Context of Clinical Decision Support
Many definitions of autonomy include freedom from external control or influence. However, this is a definition that is impossible to live up to, as we are continuously affected by our environment. We are not a “brain in a vat” (e.g., Putnam, 1981): our thinking does not occur in isolation from the outside world, it relies on inputs from our senses, and it is influenced by how we see our actions change the world around us.
This continuous interplay of cognition with our body, our senses, our actions, and the world around us is recognized in various theories of cognition, including embodied cognition, situated cognition and distributed cognition (e.g., Wilson, 2002). Each of these recognizes that thinking is distributed across the brain, the body and the external environment, yet as the names suggest, they may place more emphasis on how it is distributed among the brain and the body, or they may investigate more closely the role of interaction with our external environment. Distributed cognition, for example, recognizes that cognitive activity does not occur solely inside a single human mind. Instead it is “distributed across internal human minds, external cognitive artifacts, and groups of people, and […] across space and time” (Saleem et al., 2009, p. 54).
According to our current understanding of cognition, then, it is impossible for our thoughts and decisions to be unaffected by external influences. This means that if we take autonomy to mean freedom from external control or influence, it is impossible to achieve.
The word autonomy derives from the Greek autos (self) and nomos (law) and has come to mean the self-government of a country or other group of people. In other words, it means to be able to live according to laws that are self-created. This definition may lead us to a more practical investigation of how we may design CDS such that it respects autonomy, because it leaves room for our decisions and actions to be affected by the environment and by others.
When we create laws, we aim to make them help us implement actions that will achieve results that we deem good or valuable. They are the implementation of our morals, of the set of abstract values that we want to live by, in concrete situations. For example, the law that forbids us to run a red light implements values such as refraining from physically harming ourselves and others and fairness with respect to who gets to cross the street first.
Friedman et al. (2013) provide a working definition that is useful in considering how we may design CDS such that it retains autonomy and that captures the relationship between our actions and our values: Autonomy is “[…] people’s ability to decide, plan, and act in ways that they believe will help them to achieve their goals” (Friedman et al., 2013, p. 18).
This definition retains the idea of autonomy as a gatekeeper for other human values, while leaving room for CDS to influence the decision-making.
Why Current Clinical Decision Support is Not Always Respecting Autonomy
The working definition of Friedman et al. (2013) can provide a fresh perspective on previous attempts to design CDS to retain autonomy. One such attempt is for CDS to provide recommendations, but to refrain from prescribing action by allowing the physician to dismiss the recommendation. Such attempts have been shown to be ineffective in retaining the physician’s autonomy. Almeida Neto and Chen (2008), for example, discuss how recommendations aimed at improving treatment safety and efficacy in some cases actually led to less safe and less effective treatment. Even if the physician is allowed to dismiss them, recommendations may be perceived as a threat to their freedom of choice or an attempt to control their behavior. This perception can lead them to counteract the recommendations in an attempt to retain a sense of control, a reaction termed psychological reactance (Almeida Neto and Chen, 2008). Counterintuitively, while reactance is a response aimed at retaining a sense of control, it still results in the physician being influenced in their decision-making by the provided recommendations, except not in a way that improves the health and safety of the patient. Almeida Neto and Chen demonstrate that even if a physician is left in charge of their own decisions by being given the option to dismiss recommendations provided by a CDS, such a CDS may be violating the physician’s autonomy, as it is inadvertently leading them to act in violation of their values to not harm the patient and to have their patient’s health at heart.
Other attempts to retain autonomy in the design of CDS have done so by refraining from giving any recommendations and instead focusing solely on presenting information to the physician, leaving the physician in control of translating this information into a course of action. As the manner of presentation of information and the choice of which information to present can strongly influence people’s assessment of the situation, such attempts also harbor a potential to violate human autonomy. For example, McNeil et al. (1982) found that when physicians were asked their preference for surgery or radiation and the effectiveness of surgery was presented in terms of a 90% survival rate, most physicians preferred surgery. If, on the other hand, the effectiveness of surgery was presented in terms of a 10% mortality rate, only half favored surgery. Framing exactly the same information in a different manner, with different emotional connotations, led physicians to make different decisions. It is unlikely that these different decisions were motivated by differences in physicians’ professional values between groups, so it can be argued that for at least one of the groups the physicians’ autonomy was violated.
The work of Thaler and Sunstein (e.g., Thaler and Sunstein, 2009) provides evidence that the way in which alternative courses of action are presented strongly influences the decisions that people make. They discuss for example, the influence of defaults. While defaults leave the option to choose something else, they often result in a majority of people “choosing” the default. As an example, they mention that in countries where becoming an organ donor is an opt-out choice, the percentage of organ donors is much higher (e.g., Austria, 99%) than in countries where it is an opt-in (e.g., Germany, 12%). Health care decisions often also have default options. An example of a default that is especially relevant for the critical care domain is to seek active treatment, rather than to refrain from intervening. Patients who express a wish to not be resuscitated need to actively pursue a Do Not Resuscitate (DNR) directive. Having a DNR as a default would go against morality in most cultures and we are not arguing for it, but it is good to note that there are cases where the passive decision to not have a DNR can counter the patient’s wishes.
Overview
In this paper, we suggest a direction for future research into how to retain human autonomy by considering a different role for CDS: rather than refraining from ethical considerations, CDS should actively contribute to them. In line with the research of Verbeek (2017), we suggest that the morality of the decisions we make when we use technology in our decision-making process resides not only in the human decision maker, but arises from the interaction between human and technology. In line with the view of distributed cognition, Verbeek recognizes that the way we act cannot be separated from the environment we find ourselves in, and therefore cannot be seen as separate from the technology we use.
We therefore suggest investigating how to create a synergy between CDS and physician such that both actively strive to arrive at decisions that are in accordance with the physician’s professional values, the patient’s personal values, and society’s values on how to provide the best level of care to a population of patients as well as to the individual patient.
This conceptual analysis aims to discover directions of future research for CDS in the critical care domain. This domain can gain a lot from the use of CDS, as decisions in critical care are complex and time pressured and therefore liable to biases and errors in judgement. It is also a domain of interest for the investigation of human autonomy in decision-making, as there are ethical concerns that require the physician to retain autonomy, so that other human values are also respected.
We therefore start the analysis with a review of the most important facets involved in clinical decision-making in the critical care domain (Autonomy in Critical Care Decision-Making). In light of these facets, we discuss how to design CDS that can support decision-making in the critical care domain while retaining autonomy, through making it value-aware and using conversational AI and story structures (Implications for Autonomy Respecting Clinical Decision Support in Critical Care). Discussion then provides directions for future research, and Conclusion summarizes the scientific contributions of this article.
Autonomy in Critical Care Decision-Making
Care for the patient in the Intensive Care Unit (ICU) is provided by a team of nurses and physicians. Nurses monitor the patients and take care of them 24/7. Intensivists and residents see their patients in daily rounds, at admission and when they are called to the bedside if the patient starts to deteriorate. The intensivist is ultimately responsible for the treatment of the patient, but they discuss their decisions with the resident and the nurse. The role of the intensivist, as well as the nature of team communications and the involvement of the patient and their relatives are discussed in more detail in The Intensivist, Communication and Teamwork, and Patient Centered Care, but first we will discuss the values and ethical considerations involved in critical care in Values.
As decision-making in critical care is complex and time pressured, part of it is protocolized to help prevent errors. The role of protocols in critical care is discussed in Protocols. The complexity of the decision-making arises partly because the ICU is a very information rich environment. Use of Information Sources provides an overview of the different sources of information and how they are used.
Values
Critical care decisions are complex not only because the ICU is an information rich environment and decisions are often time pressured, but also because critical care decisions (more so than decisions in other medical domains) touch on tradeoffs in human, professional and personal values of physicians, nurses, patients and their relatives.
Bucknall and Thomas (1997) mention, for example, conflicts among medical staff regarding decisions of when to stop treatment of patients with poor prognoses, or how hard to push patients when weaning them from ventilation. There are also considerations to be made in dealing with pain and pain medication and in taking risks with invasive interventions. These include not only surgery, but also invasive types of monitoring, which increase, for example, the risk of infection, and frequent blood tests, which provide valuable information but may also contribute to lowering blood pressure and organ and tissue oxygenation in already fragile patients.
These medical ethical considerations can be approached from two perspectives: utilitarian and deontological (Mandal et al., 2016). Utilitarian ethics bases decisions on “the greatest amount of benefit obtained for the greatest number of individuals” (Mandal et al., 2016), while deontological ethics bases decisions on the morality of the act, irrespective of its consequences, as exemplified in the Hippocratic oath to do no harm. Mandal et al. argue for a balance between these two types of ethics.
CDS in the form of data-driven algorithms that produce predictions of the likelihood of a diagnosis or future outcome (potentially translating these into suggested next actions) aims at supporting the utilitarian perspective, but such CDS does not provide an answer to questions for which the tradeoffs among the human values involved cannot be easily quantified.
Furthermore, in providing support from a utilitarian perspective, it is necessary to consider the unintended ways in which the filtering, organization and presentation of data can influence decision-making, e.g., by inducing reactance, by suggesting defaults or by framing the predictions in ways that have certain emotional connotations.
The Intensivist
At the ICU, the intensivist is ultimately responsible for the patient’s health. While most medical specialisms focus on diagnosis and treatment of a particular disease or condition, a limited range of conditions, or a certain organ or organ system, the intensivist has to be able to deal with a wide range of clinical conditions. This means that they cannot have the level of expertise of a specialist on every single one of these conditions, and they often need to consult other specialists. It also means that there is a broad range of monitoring equipment, diagnostic tests and treatment options, and an intensivist needs to be able to interpret information coming from all of this equipment and these tests.
The main goal in the ICU is to stabilize the patient, so that they can transfer to a general ward. The intensivist’s main goal therefore is to identify what is causing the patient to be unstable and to resolve it. Differential diagnosis in critically ill patients is a complex task. Severely ill patients often suffer from problems related to blood flow and breathing, which have different underlying causes but may manifest in similar ways and have cascading effects throughout the body.
The intensivist needs to be able to deal with these complex situations taking into account ethical and social implications of their decisions and actions while under time pressure and emotional stress. This complex combination of aspects can easily lead to a state of cognitive overload for the intensivist, as well as others involved in the decision-making.
Efforts at providing CDS in intensive care are generally aimed at helping reduce cognitive overload by providing recommendations or by filtering and presenting information in a certain way. For example, Pickering et al. (2015) designed and tested an alternative way to present information from the patient’s electronic medical record (EMR), organized around key patient centered concepts: cardiovascular past medical history, vitals, supportive therapies, investigations and interventions. Use of this system resulted in a reduction in time spent on gathering data during rounds and reduced mental effort, suggesting that the system helped to reduce potential cognitive overload.
While such different organization and filtering of information may be beneficial in reducing cognitive overload, developers of CDS should be aware that their methods for organizing and filtering have moral implications. A CDS that filters data does so according to certain rules that determine which information is more or less important. While we may be able to develop a CDS that filters data according to a utilitarian perspective, there may be cases where the deontological morality of caring for the patient is not well served by a particular kind of filtering. For example, a CDS may be designed to support stabilization of a patient, while in certain cases, the physician needs to decide when to stop treatment and focus on reduction of suffering. Filtering and organizing information in accordance with the goal to stabilize the patient may delay the physician’s decision to stop treatment and may therefore violate their autonomy in the sense of acting in accordance with the deontological morality of reduction of suffering.
Communication and Teamwork
In health care, and especially in the ICU, taking care of the patient is a team effort, involving different types of experts such as nurses, intensivists, residents and junior physicians, therapists, anesthesiologists, and surgeons. They share a responsibility for the patient’s health and wellbeing and they therefore need to share information and communicate with each other regarding decisions and actions. A lot of their communications as well as divisions of responsibilities with respect to decisions and actions are to some extent structured.
For example, the SBAR communication format can be used for communication in handovers between shifts, during rounds when nurses update the physician about the patient’s status, or in handovers from a surgical team to the ICU (Dunsford, 2009). SBAR stands for Situation, Background, Assessment, Recommendation. Descriptions of each of these should be short and to the point. The description of the situation includes identification of the speaker, the patient, and the problem. Background provides the relevant history, including the reason for admission, medical status, and relevant medical history. Assessment includes information that is relevant to the problem, including vital signs and lab results, and can include a provisional diagnosis. Recommendation provides the speaker’s suggestion for immediate action.
Such a template for communication helps the sending and receiving party in quickly encoding and decoding the message, irrespective of individual communication styles, and it serves as a checklist to make sure nothing is missed that should be communicated (Dunsford, 2009). It ensures effective and efficient transfer of information and it helps reduce errors due to missed information.
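To make the template concrete, one way the SBAR fields could be represented in software is sketched below; the field comments follow Dunsford’s (2009) description, while the class name and rendering format are our own illustrative choices.

```python
from dataclasses import dataclass

# Sketch of the SBAR template as a data structure. The field comments
# follow Dunsford's (2009) description; the class name and the rendering
# format are our own illustrative choices.

@dataclass
class SBARMessage:
    situation: str       # identification of speaker, patient, and problem
    background: str      # reason for admission, status, relevant history
    assessment: str      # vital signs, lab results, provisional diagnosis
    recommendation: str  # the speaker's suggestion for immediate action

    def render(self) -> str:
        """Render the message in a fixed order, serving as a checklist."""
        return (f"S: {self.situation}\n"
                f"B: {self.background}\n"
                f"A: {self.assessment}\n"
                f"R: {self.recommendation}")
```

A fixed structure of this kind is what makes the template machine-readable, which becomes relevant when we discuss conversational AI below.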
As transfer of information in handovers needs to be short, while the environment in the ICU is very rich in information, the sending party needs to make choices about what information to include in the communication. Physicians therefore indicate that a large part of the information transfer is implicit: what is not being said might be just as important as what is being said.
Besides information transfer, another challenge in the teamwork in critical care is that the acuity of the situations does not always allow for a clear separation of roles and responsibilities (Bucknall and Thomas, 1997). While the patient’s health is ultimately the responsibility of the physician, nurses are the ones who are at the patient’s bedside 24/7 and can therefore respond more quickly to acute critical situations. Nurses in critical care therefore generally exhibit greater autonomy than nurses in other care settings, which can lead to conflicts in care teams regarding what is the best care for the patient (Bucknall and Thomas, 1997). Relationships between experienced nurses and junior physicians, especially, can lead to conflicts with respect to responsibility for decisions and actions (Bucknall and Thomas, 1997).
The theoretical framework of distributed cognition can provide a means for identifying patterns in team communication and how they lead to certain decisions and actions, as Hazlehurst et al. have done for team communication in the heart room (Hazlehurst et al., 2007). Such a study can provide the starting point for an analysis of how physicians and nurses handle values in their communication and therefore how some form of CDS could support the team as well as its individual members to decide and act according to these values.
Patient Centered Care
Decisions in the ICU often need to be made quickly and while the patient is non-responsive. This makes involvement of the patient and their relatives difficult. As the patient may not be able to speak up, it is the responsibility of the care team to advocate for the patient’s values as much as possible. One way of doing so is provided by documentation of personal directives such as a DNR, but care teams may go beyond these directives, e.g., by observing pain responses of the patient and by talking to the patient (when possible) and their loved ones to gain an understanding of personal characteristics of the patient (e.g., whether their general demeanor is more optimistic or more pessimistic, whether they are anxious to be in a hospital, scared of needles, etc.). While we do not see a role for CDS in the near future to automatically obtain such personal characteristics, there can be a role for CDS in supporting communication of such observations among individual members of the care team.
Protocols
Expert decision-making in the clinical environment is often quite protocolized. Protocols can support physicians and nurses in preventing errors. Protocols provide a means for implementing evidence-based medicine, by prescribing a process of care for which there is evidence that it is beneficial to most patients. Additionally, application of protocols helps generate further evidence, as they lead to large groups of patients being treated in the same way (Morris, 2003). Protocols, whether they are printed on paper or presented on a screen, can be considered a form of decision support, as they prescribe the steps to be taken in the care process.
From a personalized medicine and deontological ethics perspective though, it remains important to leave room for deviation from protocol to serve the needs of the individual patient. Simply allowing the physician the freedom to deviate might not suffice to ensure autonomy. Time pressure and stress reduce the capacity for reflection. Introducing interventions that support following protocol might induce a state of reactance or conversely a state in which the physician becomes too reliant on the guidance of the protocol.
Use of Information Sources
The ICU is an information rich environment. Pickering et al. (2010) indicate a median of 1,348 data points being generated per patient per day. These data points reside in different information systems, including the Electronic Medical Record (EMR) which contains for example, lab results and patient history, the patient monitor which gives real-time information on vitals such as heart rate, blood pressure and breathing and infusion pumps which are used to infuse fluids, medication or nutrients into the patient’s circulatory system.
The patient him/herself is also often mentioned by physicians and nurses as an important source of information. Physicians or nurses look for discoloration of the skin, swelling, distention of veins and peripheral body temperature, they may examine the amount and color of the urine if a catheter is in place, and they observe the mental state of the patient (whether they are awake, confused, alert) (Cecconi et al., 2014). If the patient is awake and able to communicate, they can indicate symptoms such as location and quality of pain. If the patient is breathing independently, the breathing pattern and sounds can give important clues as to what is going on.
Furthermore, physicians and nurses actively investigate certain clinical signs. For example, pressing the bed of the fingernails and timing how long it takes for the pink color to return (capillary refill time), gives an indication of peripheral tissue oxygenation. Raising the legs and observing the effect this has on blood pressure (passive leg raise test) gives an indication of how well the patient is responding to fluid infusion.
The use of many different systems and sources may contribute to a risk of information overload. Efforts at reducing information overload have been aimed at integrating information from different sources in one place (e.g., Pickering et al., 2010). It may, however, be the case that the physical separation of the data sources contributes to some extent to the formation of the physician’s mental model.
Here again, the framework of distributed cognition may provide a means for identifying patterns of interaction with different information elements and sources, to understand how grouping information and information sources impacts the physician’s understanding of the situation and their decision-making. Looking at the future of user interaction, we should even investigate the effect of the use of different modalities of information exchange, including, for example, verbal/audio interactions with systems or haptic and gesture-based information exchange, in addition to the current, dominantly visual presentation of information.
Implications for Autonomy Respecting Clinical Decision Support in Critical Care
In this section, we discuss how our suggestion to take a different perspective on autonomy in the context of CDS (described in Introduction) combined with our discussion of the relationship between the environment of critical care and autonomy (described in Autonomy in Critical Care Decision-Making) lead to our recommendations to make CDS in critical care value-aware (Making Clinical Decision Support Value-Aware), to use conversational AI to understand the user’s assessment of the situation and their goals and values (Using Conversational AI) and to use story structures to present and communicate information and decisions (Using Story Structures).
Making Clinical Decision Support Value-Aware
As the intensivist and the care team need to be able to consider a wide range of conditions, information sources and observations, there is a strong need for filtering and structuring of information. During their training, the different members of the care team learn to filter and structure information as they are acquiring it. Past experiences, learning and training support the formation of mental models in long-term memory, which can be applied to simplify and speed up situation assessment as well as the formation of action plans and their evaluation (Dreyfus and Dreyfus, 1980; Rasmussen, 1983).
The more skilled the physician or nurse, the more easily they can form a mental image of what is going on with the patient. But there is also a downside to the development of skill: a more skilled physician or nurse is more susceptible to cognitive tunneling or inattentional blindness. The formation of a picture of what is going on is guided by observations of the environment, and observations are guided by attention. Attention is partly a bottom-up process (salient cues in the environment grab the attention of the observer) and partly a top-down process (the goals and expectations of the observer guide what they are paying attention to) (Johnson and Proctor, 2004). Cognitive tunneling or inattentional blindness is a state of mind in which the gathering of new information is so strongly top-down focused that important bottom-up cues may be missed, as demonstrated in the famous gorilla experiment (Simons and Chabris, 1999). As more skilled observers have more strongly formed mental models of their environment, they have stronger expectations and therefore rely more heavily on top-down processing guided by those expectations (Hershler and Hochstein, 2009).
To reduce cognitive load and to help the intensivist or nurse understand the situation quickly, a CDS that helps filter and organize information should do so in a manner that is familiar to them, exploiting the mental models they learned during their training. But a CDS using these same mental models would not reduce susceptibility to cognitive tunneling. To reduce cognitive tunneling, a CDS should aim to also present the information that does not fit the user’s current mental model of the situation. This implies two things for the CDS. First, it should be aware of the user’s current understanding of the situation, so that it can determine which cues are likely to be missed. We will return to this point in Using Conversational AI.
Second, it should have some mechanism for filtering relevant information. Determining which cues are relevant is a matter of relating them to the user’s goals. In the case of physicians and nurses, we have already noted that in many cases there may be no clear single goal. Depending on whether care for the patient is approached from a utilitarian or a deontological perspective, or a combination, there may be conflicts among goals. A CDS can play an important role in making these conflicts explicit and helping the user to make a balanced decision, but in order to do so, it should relate them to the user’s and the patient’s specific value profile.
To see why, we should take a look at theories of expert decision-making in time pressured situations. Recognition-primed decision-making (Klein, 1993) provides an account of how experts are able to make good decisions rapidly. Through developing skill and gathering experience in a certain domain, strong associative connections are formed that allow a chain of associations to be activated in the mind rapidly before they reach consciousness. Studies of decision-making under time pressure have shown that in rapid decision-making, the experience of the decision maker leads directly from awareness of the situation to an immediate course of action. The expert can mentally simulate this course of action to determine whether it is likely to work, or whether it needs adaptation.
Important to note here is that according to the theory of recognition-primed decision-making, identification of viable courses of action occurs serially. This means that once an identified course of action is deemed to be viable, no additional alternative courses of action will be generated and evaluated. While this makes sense in a time pressured situation, there may still be a role for CDS to help identify alternative courses of action that the user has not yet considered, but that are likely to be equally or more successful in achieving the user’s goals. Because the situation is time pressured, CDS should then identify only those courses of action that have a high likelihood to be preferred by the user over the course of action that they themselves have identified.
Another reason for making CDS aware of and responsive to the user’s values has already been mentioned in Why Current Clinical Decision Support is Not Always Respecting Autonomy: the way in which information and alternative courses of action are presented strongly influences the decisions that people make. One such decision could for example, be when to deviate from protocol. We have argued in Protocols that protocols can serve to support evidence-based medicine (the utilitarian perspective), while leaving room to deviate from protocol is necessary to serve a patient centered approach (the deontological perspective). Framing the presentation of the steps in a protocol in a certain way can make caregivers more likely to either follow protocol or to divert from it.
In considering how to frame certain information or suggested courses of action in line with the values of the care team and the patient, it is helpful to take a look at dual-processing theory. Dual-processing theory of cognition posits two modes of operation of the mind: System 1 and System 2.
System 1 thinking is subconscious, automatic, implicit, low effort, fast, high capacity, holistic, associative and domain-specific (Evans, 2008). As System 1 thinking is automatic, it is the default type of reasoning and we are often not aware of its operation. It is sometimes also referred to as intuition or gut feeling. As the reasoning that leads us to make decisions when using System 1 occurs subconsciously and implicitly, it can be difficult for decision-makers to explain the reasons for their decision.
System 2 thinking is conscious, deliberate, explicit, high effort, slow, low capacity, reflective, rule based and domain general (Evans, 2008). It is generally associated with truly “rational” thinking as it is more reflective, controlled and results in a chain of reasoning that is explicitly available to the mind and therefore easier to explain and defend (rationalize) towards others.
Though System 2 thinking is not free from error (Osman, 2004), System 1 thinking is more prone to biases because, being automatic and subconscious, it affords less deliberate and reflective control (Kahneman, 2011). System 1 thinking is therefore more easily “fooled” into making decisions that do not align with our personal values. For example, System 1 thinking may lead a physician in an acute situation to automatically act in accordance with the goal to save the patient’s life, potentially resulting in unnecessary suffering.
By thinking carefully about the manner in which information is presented, a value-aware CDS could ensure that the care team’s System 1 thinking serves its purpose of making good decisions under time pressure in complex situations, by supporting a line of reasoning that is more likely to be in line with the professional values of the care team and the personal values of the patient (or at least by not inadvertently supporting a line of reasoning that is not in line with those values). One way to do so is to use default options in line with the care team’s and the patient’s values. A simple implementation would be a CDS that supports following a resuscitation protocol: it could still allow the team to (easily) start resuscitation of a patient who has a DNR, but would not offer it as the default.
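As a minimal sketch of such a value-aware default, assuming a hypothetical patient record that documents advance directives, the logic could look as follows; the record fields and function name are illustrative only.

```python
# Sketch of a value-aware default, assuming a hypothetical patient record
# that carries advance directives such as a DNR. The point is only that
# the default action is derived from the patient's documented values,
# while the care team keeps the option to override it explicitly.

def resuscitation_preselected(patient: dict) -> bool:
    """Return whether the resuscitation protocol is offered as the default."""
    # A documented DNR flips the default to "not pre-selected"; starting
    # the protocol manually remains possible at all times.
    return not patient.get("dnr", False)

patient = {"id": "example", "dnr": True}  # hypothetical record
if resuscitation_preselected(patient):
    print("Default: resuscitation protocol pre-selected")
else:
    print("Default: not pre-selected (DNR on file); manual start remains available")
```

The design choice mirrors the discussion of defaults above: the system nudges in the direction of the patient’s documented values instead of a one-size-fits-all default, without removing any option from the physician.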
Using Conversational AI
A value-aware CDS should be able to find out what values are relevant and deemed important in each decision, for each patient. In human interaction, this happens through dialogue or conversation. A natural first direction of exploration would therefore be to investigate the use of dialogue or conversation as a mode of interaction between caregivers and CDS. Conversation as a mode of interaction provides the CDS an opportunity to learn which values are being considered by the caregiver, and it also allows for a collaborative construction of the story of what is going on with the patient and why that should lead to a certain decision.
Furthermore, such a conversational approach fits very well in a setting where teamwork and communication among team members already play such an important role. A CDS could learn a lot from “listening in” on, and potentially “taking part in”, the conversations among members of the care team. CDS could take part in conversations, e.g., by using speech-to-text and natural language processing (NLP), exploiting the fact that many conversations of the care team are structured.
By taking part in these conversations in some manner, a CDS could build a model of the care team’s current understanding of the situation, so that it can determine which cues are likely to be missed. It could also understand whether there are value conflicts among different members of the care team and help resolve them by making them explicit and/or linking them to values expressed previously by the patient and/or their relatives.
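A deliberately simplified sketch of this “listening in” idea is given below. It only spots keywords in structured utterances and tracks which value-laden topics each team member has raised; a real system would require robust speech-to-text and NLP, and the value lexicon shown is entirely hypothetical.

```python
# Deliberately simplified sketch of "listening in": a keyword spotter that
# tracks which value-laden topics each team member has raised, so that
# value conflicts can be made explicit. A real system would need robust
# speech-to-text and NLP; the value lexicon below is hypothetical.

VALUE_LEXICON = {
    "comfort": ["pain", "suffering", "sedation", "comfort"],
    "survival": ["stabilize", "resuscitate", "intervene", "survival"],
}

def values_raised(utterance: str) -> set:
    """Return the set of value topics touched on by an utterance."""
    text = utterance.lower()
    return {value for value, cues in VALUE_LEXICON.items()
            if any(cue in text for cue in cues)}

team_view = {}  # maps each speaker to the values they have raised so far
for speaker, utterance in [
    ("nurse", "He is in visible pain; can we increase sedation?"),
    ("resident", "We need to stabilize him with another intervention first."),
]:
    team_view.setdefault(speaker, set()).update(values_raised(utterance))

# Surface a potential value conflict between team members.
if team_view["nurse"] != team_view["resident"]:
    print("Possible value conflict to make explicit:", team_view)
```

Even this crude form of tracking illustrates the mechanism: the system does not decide whose values should prevail, it only surfaces the divergence so the team can address it.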
An advantage that a CDS can provide here over the support of a colleague is that CDS is not susceptible to groupthink: the concurrence-seeking tendency of a group of people (Janis, 2008). Kaba et al. (2016) argue that indeed “certain group dynamics may increase the likelihood of poorer decisions and that this effect is ubiquitous.” While it is generally believed that good teamwork results from mutual trust and cohesiveness among team members, groupthink is in fact more prominent in teams that exhibit these characteristics. Especially in a critical care environment, where emotional pressure and time pressure are high, teams that are seemingly operating well may be very susceptible to groupthink. As CDS is not susceptible to the same social pressures that cause groupthink, it is in an excellent position to play the role of devil’s advocate within the team.
Using Story Structures
Stories have been described as a means of sharing social significances (Scott et al., 2013). When members of the care team talk to a patient and their relatives, they share stories about who the patient is and what they find important, about what is happening to the patient right now and about what is expected to happen in the future. By talking to the patient and their relatives and by observing their behavior and responses, the care team can form a story of what the patient values beyond their pure medical needs.
Narrative medicine (Charon, 2001) describes a perspective on medicine where physicians pay specific attention to the story of the patient. They try to put themselves in their patient’s shoes and deploy empathy to ensure that they have a full understanding of the situation (a holistic patient view, including also aspects beyond their medical status, such as e.g., their general demeanor and their social context), so that they can make the best decision for each individual patient.
Such stories facilitate a deontological perspective on medical decision-making, whereas presentation of facts and numbers is more supportive of a utilitarian perspective. Additionally, using a story format can also support a better understanding of the facts and numbers presented by a CDS facilitating a utilitarian perspective, as shown by Gigerenzer and Edwards (2003). They demonstrate that patients as well as doctors make errors in the interpretation of risk assessments that could be prevented if the risks were presented in a manner that is more natural to how we experience the world, in other words, in a more story-like structure. For example, presenting a single event probability such as “You have a 30% chance of a side effect from this drug” as a natural frequency statement such as “Three out of every 10 patients have a side effect from this drug” fosters better insight (Gigerenzer and Edwards, 2003). The latter statement has a more story-like structure, as it refers to actual persons.
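A small sketch of this reformulation is given below; the conversion from a probability to a natural frequency is straightforward arithmetic, while the sentence template is our own illustrative wording.

```python
from fractions import Fraction

# Sketch of the reformulation Gigerenzer and Edwards (2003) recommend:
# turning a single-event probability into a natural frequency statement.
# The sentence template is our own illustrative wording.

def natural_frequency(probability: float, max_denominator: int = 100) -> str:
    """Express a probability as 'k out of every n patients'."""
    frac = Fraction(probability).limit_denominator(max_denominator)
    return (f"{frac.numerator} out of every {frac.denominator} patients "
            f"have a side effect from this drug")

print(natural_frequency(0.30))
# -> 3 out of every 10 patients have a side effect from this drug
```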
Narrative-Based Decision Theory (Beach, 2009) suggests that stories go even further than to simply facilitate a deontological perspective, helping us even to integrate the deontological with the utilitarian perspective. Narrative-Based Decision Theory is based on the observation that the formation of stories seems to closely resemble the introspective experience of thought as a storyline connecting the past to the present and projecting it into the future. The present (medical) state of the patient can be connected to a description of the patient’s past including not only their medical past, but also what the patient’s life was like before they entered the hospital as well as to the patient’s projected future inside and outside the hospital, giving individual meaning to evidence-based (symbolic/numerical) prognoses.
Additionally, story structures can support communication among team members as well as with patients and their relatives. Using templates for communication can lead to more effective information transfer (Dunsford, 2009). Stories lend themselves very well to template structures, as demonstrated by Joseph Campbell’s famous work on the structure of mythology (see e.g., Vogler, 2007). A template structure for stories leading to treatment decisions in the critical care setting could, for example, help keep clear categorizations of observations, interpretations, value judgements and actions, while at the same time integrating them into a clear line of reasoning leading from observations to interpretations, combining these with value judgements to arrive at a choice of a preferred course of action.
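A minimal sketch of what such a template could look like as a data structure is given below; the class and field names are our own illustrative choices, not an established format.

```python
from dataclasses import dataclass, field

# Minimal sketch of a story template for a treatment decision. It keeps
# the categories named in the text (observations, interpretations, value
# judgements, preferred action) explicitly separate; the class and field
# names are our own illustrative choices.

@dataclass
class DecisionStory:
    observations: list = field(default_factory=list)      # what was seen or measured
    interpretations: list = field(default_factory=list)   # what it is taken to mean
    value_judgements: list = field(default_factory=list)  # whose values, which tradeoffs
    preferred_action: str = ""                             # the resulting course of action

    def narrate(self) -> str:
        """Integrate the separate categories into one line of reasoning."""
        return (f"We observed {'; '.join(self.observations)}. "
                f"We interpret this as {'; '.join(self.interpretations)}. "
                f"Given that {'; '.join(self.value_judgements)}, "
                f"we prefer to {self.preferred_action}.")
```

Keeping the categories separate while rendering them as one narrative is the point: the template makes explicit where facts end and value judgements begin.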
Discussion
We started this conceptual analysis by making the point that a definition of autonomy that involves freedom from external influences does not make sense when striving to respect autonomy in the introduction of CDS. If CDS is not allowed to influence its user, then the introduction of decision support does not change anything and there is no point in providing it. Instead, we have proposed to use the working definition of autonomy by Friedman and Hendry (2019): being able to decide, plan and act in accordance with (personal or professional) goals and values. In line with the work of Verbeek (2017), we have suggested investigating how, rather than trying to refrain from making any value judgements, CDS can play a role in helping the physician to more explicitly consider their professional values as well as the patient’s personal values in their decision-making. Such an approach should support physicians in finding a balance between a utilitarian and a deontological ethics approach to medical decision-making, as advocated by Mandal et al. (2016).
We advocate for a value-aware CDS that is able to determine through conversational AI which cues, goals and courses of action are in line with what the physician is trying to achieve and to adapt its interaction with the physician to the values of the physician, patient and society. In figuring out how to determine the user’s goals and values through conversational AI, we can draw inspiration from theories of how we humans come to understand each other’s goals and values, such as theory of mind (e.g., Goldman, 2006). The field of information filtering can provide options as to how to filter what information is relevant with respect to certain goals and values. In addition to making CDS value-aware, we advocate for an exploration of the use of story structures in presenting information to the physician to facilitate an integration of the deontological and utilitarian perspective into medical decision-making as well as to support collaborative team decision-making, including also the patient’s perspective. The field of computational narrative intelligence can provide insight into how AI could construct such story structures (e.g., Riedl, 2016).
While the use of narratives has been shown to be beneficial for educational purposes and in the physician-patient interaction (Gray, 2009), its use in the team communication in a health care setting still needs to be explored further. The theoretical framework of distributed cognition can be used to investigate how certain story structures may already play a role in team decision-making in critical care to provide a starting point for developing template structures.
An important remaining question that needs to be answered if we are to explore these directions of future research is: How will we measure the success of such a value-aware, conversational and narrative CDS? In order to know whether autonomy was respected, we need to know what the professional values of the physician are, as well as the personal values of the patient, and how to balance them with society’s values. We need to define some measure of balance between the utilitarian and the deontological perspective in medical decision-making.
Conclusion
In this conceptual analysis, we have brought together philosophical, ethical and psychological perspectives on autonomy and applied these to the domain of medical decision-making in a critical care environment to investigate future directions of research into CDS. From this analysis, we have derived the conclusion that CDS should be value-aware and that its information representation could benefit from using story structures. We suggest the use of a conversational AI approach in order to enable the CDS to become value-aware and to facilitate a natural form of interaction with the physician.
A shift of perspective on the definition of autonomy radically changes the possibilities for the future of CDS, but it also places strong moral responsibilities on the developers of CDS. This conceptual analysis has given a flavor of these possibilities and responsibilities. We believe that the future of autonomy-respecting CDS lies in paying particular attention to the way in which physician and CDS collaborate to reach a decision in line with the physician’s professional values as well as the interests and values of the patient.
Author Contributions
The literature search, shaping of the ideas and writing of the manuscript were done by MH. MW commented on successive drafts and provided final editing. FS and JH commented on successive drafts and contributed to discussions that led to the ideas stated in the article.
Conflict of Interest
Authors MH, FS, and JH were employed by the company Philips Research.
The remaining author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher’s Note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Acknowledgments
The authors would like to thank Aart van Halteren, Wijnand IJsselsteijn, Eric van de Laar and Iris Kosse for useful discussions and suggestions for literature to review.
References
Almeida Neto, A. C., and Chen, T. F. (2008). When Pharmacotherapeutic Recommendations May Lead to the Reverse Effect on Physician Decision-Making. Pharm. World Sci. 30, 3–8. doi:10.1007/s11096-007-9143-x
Beach, L. R. (2009). Narrative Thinking and Decision Making: How the Stories We Tell Ourselves Shape Our Decisions, and Vice Versa. Available at: www.LeeRoyBeach.com (Accessed November 23, 2020).
Bucknall, T., and Thomas, S. (1997). Nurses' Reflections on Problems Associated with Decision-Making in Critical Care Settings. J. Adv. Nurs. 25, 229–237. doi:10.1046/j.1365-2648.1997.1997025229.x
Capurso, L. (2018). [Evidence-based Medicine vs Personalized medicine.]. Recenti Prog. Med. 109, 10–14. Available at: https://www.ncbi.nlm.nih.gov/pubmed/29451516. doi:10.1701/2848.28748
Cecconi, M., De Backer, D., Antonelli, M., Beale, R., Bakker, J., Hofer, C., et al. (2014). Consensus on Circulatory Shock and Hemodynamic Monitoring. Task Force of the European Society of Intensive Care Medicine. Intensive Care Med. 40, 1795–1815. doi:10.1007/s00134-014-3525-z
Charon, R. (2001). Narrative Medicine: A Model for Empathy, Reflection, Profession, and Trust. JAMA 286, 1897–1902. doi:10.1001/jama.286.15.1897
De Vincentis, G., Monari, F., Baldari, S., Salgarello, M., Frantellizzi, V., Salvi, E., et al. (2018). Narrative Medicine in Metastatic Prostate Cancer Reveals Ways to Improve Patient Awareness & Quality of Care. Future Oncol. 14, 2821–2832. doi:10.2217/fon-2018-0318
Dreyfus, S. E., and Dreyfus, H. L. (1980). A Five-Stage Model of the Mental Activities Involved in Directed Skill Acquisition. Berkeley: Operations Research Center, University of California. doi:10.21236/ada084551
Dunsford, J. (2009). Structured Communication: Improving Patient Safety with SBAR. Nurs. Women's Health. 13, 384–390. doi:10.1111/j.1751-486x.2009.01456.x
Evans, J. S. B. T. (2008). Dual-Processing Accounts of Reasoning, Judgment, and Social Cognition. Annu. Rev. Psychol. 59, 255–278. doi:10.1146/annurev.psych.59.103006.093629
Falzer, P. R. (2018). Naturalistic Decision Making and the Practice of Health Care. J. Cogn. Eng. Decis. Making. 12, 178–193. doi:10.1177/1555343418773915
Friedman, B., and Hendry, D. G. (2019). Value Sensitive Design: Shaping Technology with Moral Imagination. Cambridge, MA: MIT Press. doi:10.7551/mitpress/7585.001.0001
Friedman, B., Kahn, P. H., Borning, A., and Huldtgren, A. (2013). Value Sensitive Design and Information Systems. Early Engagem. New Technol. Open. Up Lab., 55–95. doi:10.1007/978-94-007-7844-3_4
Gigerenzer, G., and Edwards, A. (2003). Simple Tools for Understanding Risks: From Innumeracy to Insight. Bmj 327, 741–744. doi:10.1136/bmj.327.7417.741
Goldman, A. I. (2006). Simulating Minds: The Philosophy, Psychology, and Neuroscience of Mindreading. New York, NY: Oxford University Press on Demand.
Gray, J. B. (2009). The Power of Storytelling: Using Narrative in the Healthcare Context. J. Commun. Healthc. 2, 258–273. doi:10.1179/cih.2009.2.3.258
Hazlehurst, B., McMullen, C. K., and Gorman, P. N. (2007). Distributed Cognition in the Heart Room: How Situation Awareness Arises from Coordinated Communications during Cardiac Surgery. J. Biomed. Inform. 40, 539–551. doi:10.1016/j.jbi.2007.02.001
Hershler, O., and Hochstein, S. (2009). The Importance of Being Expert: Top-Down Attentional Control in Visual Search with Photographs. Attention, Perception & Psychophysics 71, 1478–1486. doi:10.3758/APP.71.7.1478
Janis, I. L. (2008). Groupthink. IEEE Eng. Manag. Rev. 5, 36. Available at: https://williamwolff.org/wp-content/uploads/2016/01/griffin-groupthink-challenger.pdf. doi:10.1109/emr.2008.4490137
Johnson, A., and Proctor, R. W. (2004). Attention: Theory and Practice. California: Sage. doi:10.4135/9781483328768
Kaba, A., Wishart, I., Fraser, K., Coderre, S., and Mclaughlin, K. (2016). Are We at Risk of Groupthink in Our Approach to Teamwork Interventions in Health Care?. Med. Educ. 50, 400–408. doi:10.1111/medu.12943
Kahneman, D., Slovic, P., and Tversky, A. (1974). Judgment under Uncertainty: Heuristics and Biases. Cambridge, United Kingdom: Cambridge University Press.
Klein, D. E., Woods, D. D., Klein, G., and Perry, S. J. (2016). Can We Trust Best Practices? Six Cognitive Challenges of Evidence-Based Approaches. J. Cogn. Eng. Decis. Making. 10, 244–254. doi:10.1177/1555343416637520
Klein, G. (1993). “A Recognition-Primed Decision (RPD) Model of Rapid Decision Making,” in Decision Making in Action: Models and Methods, 139–147. doi:10.1002/bdm.3960080307
Mandal, J., Ponnambath, D., and Parija, S. (2016). Utilitarian and Deontological Ethics in Medicine. Trop. Parasitol. 6, 5. doi:10.4103/2229-5070.175024
McNeil, B. J., Pauker, S. G., Sox, H. C., and Tversky, A. (1982). On the Elicitation of Preferences for Alternative Therapies. N. Engl. J. Med. 306, 1259–1262. doi:10.1056/NEJM198205273062103
Morris, A. H. (2003). Treatment Algorithms and Protocolized Care. Curr. Opin. Crit. Care. 9, 236–240. doi:10.1097/00075198-200306000-00012
Osman, M. (2004). An Evaluation of Dual-Process Theories of Reasoning. Psychon. Bull. Rev. 11, 988–1010. doi:10.3758/BF03196730
Pickering, B. W., Herasevich, V., Ahmed, A., and Gajic, O. (2010). Novel Representation of Clinical Information in the ICU: Developing User Interfaces Which Reduce Information Overload. Appl. Clin. Inform. 1, 116–131. doi:10.4338/ACI-2009-12-CR-0027
Pickering, B. W., Dong, Y., Ahmed, A., Giri, J., Kilickaya, O., Gupta, A., et al. (2015). The Implementation of Clinician Designed, Human-Centered Electronic Medical Record Viewer in the Intensive Care Unit: A Pilot Step-Wedge Cluster Randomized Trial. Int. J. Med. Inform. 84, 299–307. doi:10.1016/j.ijmedinf.2015.01.017
Putnam, H. (1981). “Brains in a Vat,” in Reason, Truth, and History (Cambridge: Cambridge University Press), 1–21.
Rasmussen, J. (1983). Skills, Rules, and Knowledge; Signals, Signs, and Symbols, and Other Distinctions in Human Performance Models. IEEE Trans. Syst. Man. Cybern. 13, 257–266. Available at: http://www.carlosrighi.com.br/177/Ergonomia/Skills rules and knowledge - Rasmussen seg.pdf. doi:10.1109/tsmc.1983.6313160
Riedl, M. O. (2016). Computational Narrative Intelligence: A Human-Centered Goal for Artificial Intelligence. Available at: http://arxiv.org/abs/1602.06484 (Accessed June 8, 2021).
Saleem, J. J., Russ, A. L., Sanderson, P., Johnson, T. R., Zhang, J., and Sittig, D. F. (2009). Current Challenges and Opportunities for Better Integration of Human Factors Research with Development of Clinical Information Systems. Yearb. Med. Inform. 18, 48–58. Available at: http://www.ncbi.nlm.nih.gov/pubmed/19855872. doi:10.1055/s-0038-1638638
Saposnik, G., Redelmeier, D., Ruff, C. C., and Tobler, P. N. (2016). Cognitive Biases Associated with Medical Decisions: a Systematic Review. BMC Med. Inform. Decis. Mak. 16, 1–14. doi:10.1186/s12911-016-0377-1
Scott, S. D., Brett-MacLean, P., Archibald, M., and Hartling, L. (2013). Protocol for a Systematic Review of the Use of Narrative Storytelling and Visual-Arts-Based Approaches as Knowledge Translation Tools in Healthcare. Syst. Rev. 2, 19. doi:10.1186/2046-4053-2-19
Simons, D. J., and Chabris, C. F. (1999). Gorillas in Our Midst: Sustained Inattentional Blindness for Dynamic Events. Perception 28, 1059–1074. doi:10.1068/p2952
Sutton, R. T., Pincock, D., Baumgart, D. C., Sadowski, D. C., Fedorak, R. N., and Kroeker, K. I. (2020). An Overview of Clinical Decision Support Systems: Benefits, Risks, and Strategies for success. Npj Digit. Med. 3, 1–10. doi:10.1038/s41746-020-0221-y
Thaler, R. H., and Sunstein, C. R. (2009). Nudge: Improving Decisions about Health, Wealth, and Happiness. London, United Kingdom: Penguin.
Verbeek, P.-P. (2017). Designing the Morality of Things: The Ethics of Behaviour-Guiding Technology. Des. Ethics., 78–94. doi:10.1017/9780511844317.005
Keywords: autonomy, clinical decision support, evidence based medicine, personalized medicine, human decision making, critical care, conversational AI, narratives
Citation: Hendriks M, Willemsen MC, Sartor F and Hoonhout J (2021) Respecting Human Autonomy in Critical Care Clinical Decision Support. Front. Comput. Sci. 3:690576. doi: 10.3389/fcomp.2021.690576
Received: 03 April 2021; Accepted: 21 July 2021;
Published: 16 August 2021.
Edited by:
Kaisa Väänänen, Tampere University, Finland
Reviewed by:
Walter Gerbino, University of Trieste, Italy
Verónica Violant-Holz, University of Barcelona, Spain
Copyright © 2021 Hendriks, Willemsen, Sartor and Hoonhout. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Monique Hendriks, monique.hendriks@philips.com