HYPOTHESIS AND THEORY article

Front. Psychol., 30 August 2024
Sec. Personality and Social Psychology
This article is part of the Research Topic Motivation-based Approaches to Countering Mass-Mediated Misinformation.

Processing of misinformation as motivational and cognitive biases

Yanmengqian Zhou1 and Lijiang Shen2

  • 1Department of Communication, University at Buffalo, Buffalo, NY, United States
  • 2Department of Communication Arts and Sciences, Pennsylvania State University, University Park, PA, United States

Misinformation can be broadly defined as false or inaccurate information created and spread with or without clear intent to cause harm. It travels fast and deep and persists despite debunking. It is well-documented that corrective messages and fact-checking efforts often fail to mitigate the effects or persistence of misinformation. In this article, we examine the persistence of misinformation as rooted in motivational and cognitive biases in information processing. While drawing on the frameworks of motivations that drive information seeking, sharing, and processing and various cognitive biases, we explicate mechanisms and processes that underlie the impact and persistence of misinformation. We conclude our article by discussing the potential utility of psychological inoculation as a prebunking strategy.

Introduction

Misinformation, in its broadest sense, encompasses “any information that is demonstrably false or otherwise misleading, regardless of its source or intention” (van der Linden et al., 2023, p. 7). The defining characteristic of misinformation lies in its (lack of) truth value: information that is inaccurate, incomplete, misleading, or false, as well as selective accounts and half-truths. Intention nevertheless remains an important dimension. Consider a white lie: information that is untrue, maybe misleading, but benign in intention. It is rather safe to assume that white lies are not viewed as misinformation. Amid ongoing debates on the most appropriate criterion against which the truth value or ground truth of a piece of information should be assessed, several key benchmarks emerge, including evidence or facts, expert opinion, and characteristics of deceptive or manipulative techniques (Nan et al., 2023; Roozenbeek et al., 2022; Vraga and Bode, 2020). While some distinguish misinformation from disinformation, with the latter being purposefully created and shared to deceive and cause harm, we consider disinformation a subset of misinformation, recognizing the inherent challenge of detecting intent in many cases (Treen et al., 2020).

The widespread dissemination of misinformation poses one of the greatest challenges facing contemporary human society, one exacerbated by the high-choice media and social media environment. While misinformation is not new and its history dates back to the early 15th century (Soll, 2016), its increasingly evident and severe ramifications across a wide array of public domains, including science, health, and politics, have gained the phenomenon unprecedented attention in recent decades, particularly post-2016, the year “post-truth” was selected as word of the year by Oxford Dictionaries. Indeed, misinformation has been documented to undermine electoral processes (Berlinski et al., 2023), diminish support for proclimate policies (Treen et al., 2020), fuel vaccination hesitancy (Garett and Young, 2021), and discourage the enactment of preventive and safety behaviors during the coronavirus disease 2019 (COVID-19) pandemic (Greene and Murphy, 2021). Beyond documenting these effects, a pressing concern is understanding how and why misinformation influences individuals’ decision-making and yields such detrimental outcomes. Over the years, there has been a shift from a paradigm heavily emphasizing information deficits, which attributes the impact of misinformation to a lack of understanding and knowledge about facts, to a more nuanced framework recognizing that ignorance alone lacks explanatory power (Ecker et al., 2022). This transition acknowledges the mounting evidence that people persist in making decisions based on misinformation despite retractions and corrective efforts (Seifert, 2014; Thorson, 2016). A good example is some parents’ belief in the measles, mumps, and rubella (MMR) vaccine–autism link, which persists long after the infamous 1998 Lancet article was retracted (in 2010) and debunked (in 2011).

Misinformation can occur and spread without any prior position, particularly when it is due to a lack of knowledge (e.g., even though Aristarchus and Eratosthenes established in the third century BCE that the earth was round, the idea was not widely accepted until around the 15th century; Uri, 2020). It is not a coincidence, however, that misinformation is most rampant on issues that are controversial and politically charged (e.g., climate change, the 2020 election, and COVID-19), given the polarization and dividedness of society in the post-2016 United States. This suggests that individuals have most likely already formed their opinions and/or developed orientations toward the specific issues before misinformation is generated and spread. In this article, we take the perspective of biased and motivated information processing to understand and explain the impact and persistence of misinformation. Specifically, we examine responses to scientific evidence or corrective messages as a special case of the processing of counterattitudinal information driven by multiple motivations and cognitive fallacies. Conversely, misinformation as attitude-consistent messages undergoes similarly biased processing. It should be noted that biased and motivated processing of scientific evidence and facts by individuals who hold correct positions does not have other undesirable consequences (e.g., Shen et al., 2009), although it contributes to the polarization and dividedness of society just as much as the processing of those to whom misinformation is attitude-consistent. There is evidence that greater partisan divisions in social reality (Nyhan, 2021) and a stronger desire for shared reality (Jost et al., 2018) lead to increased susceptibility to misinformation and willingness to share and spread misinformation. Hence, we have Proposition 1:

Proposition 1: Misinformation is more prevalent, influential, and persistent on topics/issues that are controversial and politically charged than neutral or non-divisive ones.

Admittedly, misinformation often has attention-grabbing features that make it attractive and easy to process. For example, compared to factual news, misinformation tends to be more emotional and less lexically diverse and demonstrates greater readability (Carrasco-Farré, 2022). It is often high in sensationalism (Staender et al., 2022) and contains negative or threatening content, social cues, and the presence of celebrities (Acerbi, 2019). In addition, misinformation frequently employs manipulative techniques. A systematic review identified the use of logical fallacies and misrepresentations, cherry-picking, fake experts, impossible expectations, and conspiracy theories as common message features shared by most misinformation (Schmid et al., 2023). Roozenbeek et al. (2022) outlined five epistemologically dubious strategies commonly observed in online misinformation, including (1) the use of emotionally charged language that evokes fear, anger, and other negative emotions, (2) the presentation of incoherent or mutually exclusive arguments, (3) the framing of issues in false dichotomies, (4) the scapegoating of individuals or groups to reduce the complexity of a problem, and (5) the resort to ad hominem attacks that target the speaker rather than their arguments. Undoubtedly, these features of misinformation are conducive to the spread and persistence of misinformation (Kemp et al., 2022; Putman et al., 2017). Yet, cognitive theories suggest that it is primarily through individuals’ cognitive processing that misinformation persists while correction fails. That is, these message features feed into the motivational and cognitive biases in information responses, especially when individuals already hold a preexisting belief in misinformation, which then shapes the outcomes of the messages. Recognizing the influence of these message features, we now turn our discussion to the motivational and cognitive biases that underpin the persistence of misinformation.

Motivational and cognitive mechanisms

Multiple motivational and cognitive mechanisms may contribute to how people engage with misinformation and corrective information (Desai et al., 2020; Kunda, 1990). We cluster these mechanisms into three groups. The first one consists of more deliberate motivations. These motivations concern goals that individuals actively or consciously pursue to achieve a desired state. The second category includes more automatic motivations. It differs from the first one in that these goals tend to be triggered automatically and the goal pursuit might be more or less unconscious. The third class of mechanisms centers on cognitive fallacies, which refer to a type of information processing that reduces accuracy and results in erroneous and/or irrational conclusions. Cognitive fallacies are distinct from the motivations and can occur absent any motivations.

Deliberate motivations

Humans operate on three fundamental motivations: value, control, and truth (Cornwell and Higgins, 2018). These motivations are essential to how people seek effectiveness (Higgins, 2014). Value motivation is concerned with having desired results (i.e., expected utility), especially as they relate to avoiding pain and cost and gaining pleasure and benefits. Control motivation involves making things happen and managing what happens and how it happens. Truth motivation pertains to establishing what is real vs. imaginary and what is right vs. wrong (Cornwell and Higgins, 2018). All three motivations suggest that individuals are not intrinsically motivated by accuracy in the way the dual process models (Chaiken et al., 1989; Petty and Cacioppo, 1986) suggest they might be: even the truth motivation involves subjective and value-laden judgments (i.e., what is right vs. wrong). Conceivably, the motivations of value and control play a particularly crucial role in the creation, spread, and persistence of misinformation. Individuals may generate, process in a biased manner, and disseminate misinformation, and avoid interacting with others who disagree with them, in order to secure desired outcomes and exert control. This could be especially true among those who already hold misinformed beliefs: by perpetuating misinformation and resisting corrective efforts, they fulfill their goals of preserving a favorable status quo, avoiding unwanted threats, and potentially enhancing their influence and gaining dominance (Cornwell et al., 2014). The subjectivity in, or worse yet, the lack of, truth motivation further explains why corrective information presenting facts and evidence often fails to persuade: accuracy and validity do not necessarily satisfy the truth motivation when they are evaluated along the dimension of right vs. wrong. Facts and evidence tend to be ignored, and their sources rejected, when they do not deliver value (i.e., expected utility) or control (cf. Festinger, 1957).

In line with Higgins and colleagues’ tripartite framework of motivation, Sharot and Sunstein (2020) identified instrumental (action), hedonic (affect), and cognitive (cognition) utilities as three motivations that guide people’s informational behavior. People are motivated to acquire and accept information they perceive as facilitating decision-making and actions that maximize rewards and minimize harm, inducing positive emotions and evading negative ones, and enhancing their ability to understand and predict reality. Instrumental and hedonic utilities operate in similar ways as control and value motivations: the pursuit of action-facilitative knowledge and positive affect can drive individuals to adopt misinformation and resist corrective information. Indeed, people manage what information to seek, accept, and believe as an emotion regulation strategy (Heinström et al., 2022; Vrinten et al., 2022). Cancer patients, for example, may be motivated to endorse treatment-related misinformation because of the hope it gives. When driven by hope, people experience reduced message fatigue (Shen et al., 2022), displaying increased openness to (mis)information they consider useful. Considerations of cognitive utility, more specifically the motivation to minimize the gap between one’s mental representations and external reality and thus maintain a secure sense of comprehension, may lead to avoidance of information that threatens existing mental models (Sharot and Sunstein, 2020). Consequently, corrective information tends to be assigned negative cognitive value, whereas misinformation is considered to have positive cognitive value that satisfies the need to align internal cognitions with (distorted) external realities.
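Sharot and Sunstein’s (2020) framework lends itself to a compact summary. The following additive form is our simplified sketch of that framework rather than an equation from their paper; the weights are illustrative assumptions capturing individual and situational differences:

$$V(\text{information}) \;=\; w_{a}\,U_{\text{instrumental}} \;+\; w_{h}\,U_{\text{hedonic}} \;+\; w_{c}\,U_{\text{cognitive}}$$

where each utility term can be positive or negative. On this reading, people seek and accept information when V > 0 and avoid or discount it when V < 0; a factually sound correction can still carry negative hedonic and cognitive utility for a committed believer, rendering V negative overall.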

Despite the differences, these more recent theorizations on motivation are not incompatible with the dual process models. The heuristic-systematic model (HSM) proposes that people may be driven by accuracy, defense, and impression motivations (Chaiken et al., 1996), which are, respectively, aligned with outcome-, value-, and impression-relevant involvement (Johnson and Eagly, 1989). Accuracy motivation is the desire to arrive at valid attitudes or beliefs that correspond with reality. It is closely associated with outcome-relevant involvement, where one’s attitudes are primarily concerned with direct personal consequences and concrete utility. Accuracy motivation promotes in-depth and careful processing of information that allows one to reach accurate conclusions. Defense motivation refers to the desire to hold attitudes or beliefs that are consistent with one’s self-definition. Associated with value-relevant involvement, where a person’s attitude is primarily linked to their self-identity and values, defense motivation drives people to process information in ways that preserve their self-definition and self-concept. Impression motivation is the desire to express attitudes and beliefs that satisfy interpersonal and social goals. It corresponds with impression-relevant involvement, which highlights the self-presentational and social–relational consequences of one’s expressed attitudes. When impression-motivated, people engage in processing strategies that yield conclusions enabling social acceptance. In the case of misinformation, motivations other than accuracy presumably underlie individuals’ processing. As individuals form a preexisting belief in misinformation, particularly on highly controversial or polarized issues, their position becomes closely tied to their personal values, identities, and social belongingness, which, when confronted with correction, triggers defense and/or impression motivations and thereby leads to biased processing that allows them to arrive at conclusions favoring their existing misconceptions (Jost et al., 2022; Trevors, 2019). Accuracy motivation, the assumption central to corrective messages presenting factual evidence, is, on the contrary, absent or overshadowed, limiting the utility of corrective efforts. Hence, we have Proposition 2:

Proposition 2: The impact and persistence of misinformation are positively associated with non-accuracy motivations.

Automatic motivations

Cognitive consistency theories, such as balance theory (Heider, 1946), congruity theory (Osgood and Tannenbaum, 1955), and cognitive dissonance theory (Festinger, 1957), share a common assumption that people are driven to maintain consistency among the elements of their cognitions (i.e., units of information). The goal of restoring consistency tends to be evoked by the psychological discomfort resulting from inconsistency and may operate beneath conscious awareness, without deliberate intent. Balance theory, for example, postulates that balance—a psychologically pleasant, desirable state—is achieved when all cognitive elements are internally consistent. More specifically, in the often-studied P-O-X framework, where P refers to the person/self, O refers to the other person, and X refers to the attitudinal object, balance is the state in which a person (dis)agrees about X with another person they (dis)like (Heider, 1946). When in an imbalanced state, people are motivated to change one or more relationships in the triad. Congruity theory shares similar propositions with balance theory but is more precise in that it quantifies the degree of liking of the other person and one’s attitude toward the object (Osgood and Tannenbaum, 1955). As such, congruity theory allows more precise predictions.
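To make the triadic logic concrete, consider the sign-product rule from Cartwright and Harary’s graph-theoretic formalization of balance theory. The sketch below is an illustrative toy, not a model proposed in this article:

```python
# Sign-product rule for Heider's P-O-X triad (per Cartwright & Harary's
# formalization): each relation is +1 (liking/agreement) or -1
# (disliking/disagreement); a triad is balanced iff the product of the
# three signs is positive.

def is_balanced(p_o: int, p_x: int, o_x: int) -> bool:
    """p_o: P's relation to O; p_x: P's attitude toward X; o_x: O's stance on X."""
    return p_o * p_x * o_x > 0

# P distrusts a fact-checker O (-1), P endorses a false claim X (+1),
# and O rejects X (-1): the product is positive, so the triad is balanced
# and the correction exerts no pressure on P's belief.
print(is_balanced(-1, +1, -1))  # True

# P likes O (+1), P endorses X (+1), but O rejects X (-1): imbalance, so P
# is motivated to change one relation, e.g., derogate O rather than give up X.
print(is_balanced(+1, +1, -1))  # False
```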

Cognitive dissonance theory differentiates three types of relationships between cognitive elements: irrelevant, consonant, and dissonant (Festinger, 1957). As the ratio of dissonant to consonant cognitions and the importance of the dissonant elements increase, individuals experience greater cognitive dissonance and greater motivation to reduce it. Correspondingly, the reduction can be achieved by adding new consonant cognitions, removing dissonant cognitions, or changing the importance of consonant and/or dissonant cognitions. These cognitive consistency theories suggest that when there is a discrepancy between a message’s advocacy and individuals’ preexisting positions, people are automatically motivated to close that gap, which likely leads to message resistance and rejection. This means that once a belief in misinformation is established, any corrective information threatens the preferred state of consistency and arouses motivations to dismiss, distort, or deny it. People may also seek and internalize information that reinforces their existing misconceptions. Within the framework of cognitive dissonance, individuals who already believe in misinformation might still be open to facts and evidence, that is, when there are external justifications that reduce cognitive dissonance. Even when that happens, however, the impact of facts and evidence most likely takes the form of compliance rather than identification or internalization (Kelman, 1958). Attitude change in the form of compliance tends to be flimsy and short-lived, which explains why fact-checking might have a short-term effect in mitigating misinformation yet be positively associated with its persistence (Chan et al., 2017).
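The ratio logic in Festinger’s account is often summarized compactly in textbooks. The following formalization is a common textbook summary rather than an equation appearing in Festinger (1957), with w denoting importance weights attached to individual cognitions:

$$D \;=\; \frac{\sum_{i \in \text{dissonant}} w_{i}}{\sum_{i \in \text{dissonant}} w_{i} \;+\; \sum_{j \in \text{consonant}} w_{j}}$$

Each reduction route maps onto a term: adding consonant cognitions grows the denominator, removing dissonant cognitions shrinks the numerator, and reweighting adjusts the w terms. Notice that a correction which introduces an important dissonant cognition increases D, and with it the motivation to dismiss the correction itself.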

Closely related to the cognitive consistency theories is self-affirmation theory (Steele, 1988). Its distinguishing feature lies in its premise that people are primarily driven to maintain self-integrity (rather than cognitive consistency). Self-integrity is the perception of the self “as adaptively and morally adequate, that is, as competent, good, coherent, unitary, stable, capable of free choice, capable of controlling important outcomes, and so on” (Steele, 1988, p. 262). When encountering information that threatens their perceived self-integrity, people are motivated to restore it. Although accepting the information and changing one’s attitude is an option, it is often unlikely given the threat it poses to fundamental aspects of one’s identity. Rather, people are more inclined to engage in defensive responses, which can be automatic in nature, to avoid or dismiss the threat (Sherman and Cohen, 2006). This process sheds light on why beliefs in misinformation tend to be resistant to correction. It is worth noting, however, that divergent from cognitive consistency theories, self-affirmation theory proposes a third adaptation mechanism: self-affirming a different domain of identity that is not necessarily related to the threat (Steele, 1988). Put differently, when self-integrity is bolstered through affirming some alternative aspect of the self, people may find cognitive inconsistency tolerable and respond more openly and objectively to threatening information (Sherman and Cohen, 2006; Steele and Spencer, 1992). Nevertheless, this seemingly promising tenet may encounter challenges in practice, given the polarization of society across various interconnected issues and the intricate intertwining of important, often mutually aligned domains of individuals’ identities. This can undermine the applicability of this adaptation, giving way to defensive responses.

Social judgment theory (Sherif et al., 1965) is a third relevant framework in this cluster. The theory posits that attitude change is a function of one’s judgment of the position advocated by a message in relation to one’s existing position. Specifically, depending on where an individual’s initial position is located on a continuum of all potential alternative positions, one might evaluate a message’s advocacy as falling within their latitude of acceptance (i.e., positions one considers acceptable), rejection (i.e., positions one considers unacceptable), or non-commitment (i.e., positions one considers neither acceptable nor unacceptable). Attitude change is likely to be greatest when a message’s advocacy is most distant from one’s existing position yet does not cross into the latitude of rejection. When a position falls within the latitude of rejection, conversely, persuasion is unlikely and may even backfire. Corrective messages, as counterattitudinal information, tend to reside in the latitude of rejection, eliciting automatic resistance against the message and, subsequently, the observed ineffectiveness. Moreover, social judgment theory suggests that the size of the latitudes varies with the level of ego involvement. As the issue in question becomes more personally important to an individual (i.e., greater ego involvement; Sherif et al., 1973), which is frequently the case given the highly politicized and divided nature of topics on which misinformation proliferates, the latitude of rejection expands, heightening the likelihood of a contrast effect wherein an advocated position is perceived as more distant from one’s own position than it actually is (Sherif et al., 1965; Sherif and Hovland, 1961). Consequently, existing beliefs in misinformation fail to be corrected and might be further reinforced.
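The latitude logic lends itself to a small illustration. The continuum, latitude widths, and the way ego involvement expands the latitude of rejection below are arbitrary assumptions for demonstration; social judgment theory does not specify numeric parameters:

```python
# Toy classification of a message's advocated position against an individual's
# latitudes on a 0-1 attitude continuum. Higher ego involvement widens the
# latitude of rejection by shrinking the latitude of non-commitment.

def judge_advocacy(own_position: float, advocated: float,
                   acceptance_width: float = 0.15,
                   noncommitment_width: float = 0.30,
                   ego_involvement: float = 0.0) -> str:
    discrepancy = abs(advocated - own_position)
    if discrepancy <= acceptance_width:
        return "acceptance"      # assimilation: message seems close to own view
    if discrepancy <= acceptance_width + noncommitment_width * (1 - ego_involvement):
        return "non-commitment"  # attitude change most likely here
    return "rejection"           # contrast: message seems farther than it is

# A low-involvement recipient may be open to a moderately distant correction...
print(judge_advocacy(0.1, 0.5, ego_involvement=0.1))  # non-commitment
# ...whereas a highly ego-involved believer rejects the very same correction.
print(judge_advocacy(0.1, 0.5, ego_involvement=0.9))  # rejection
```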

There is evidence that conservatives are more likely than liberals to prioritize values of conformity and tradition, possess a stronger desire to share reality with like-minded others, and perceive within-group consensus when making judgments (Jost et al., 2018). Along with features of misinformation such as false dichotomies, scapegoating, and ad hominem attacks (Roozenbeek et al., 2022), these motivational tendencies make conservatives more susceptible to misinformation than liberals (Garrett and Bond, 2021), although both conservatives and liberals are gullible (e.g., van der Linden et al., 2020). Conservatives are also more likely than liberals to be influenced by relational cues and by sources similar to themselves, to maintain homogeneous social networks, and to favor echo chamber environments (Jost et al., 2018). Hence, we have Propositions 3 and 4:

Proposition 3: Consistency- and self-identity-related motivations are positively associated with misinformation effects and persistence.

Proposition 4: The impact of misinformation is more pronounced and its duration more persistent among conservatives than liberals.

Cognitive fallacies

Biased responses may occur due to cognitive fallacies even in the absence of motivations (MacCoun, 1998). A wide range of cognitive fallacies might be relevant to understanding the persistence of misinformation within the framework of the processing of counterattitudinal information. Here, we focus on a few of the most pervasive ones, which are closely related to (selective) exposure to information and the subsequent biases in decision-making.

Epistemic egocentrism is a form of perspective-taking failure in which individuals are unable to set aside privileged information that they know is unavailable to others, leading to predictions that skew others’ perspectives toward their own (Royzman et al., 2003). This tendency means that people have a hard time adopting viewpoints that are not their own and struggle to process and understand counterattitudinal information, such as corrective information (Shen and Zhou, 2021). The failure to take others’ perspectives may closely interplay with the conviction that one’s own perspectives objectively reflect reality and that those who do not share the same perspectives are uninformed and incompetent, a bias termed the objectivity illusion (Schwalbe et al., 2020). The illusion that one is immune to bias (see also the bias blind spot, Pronin et al., 2002) exacerbates the difficulty of rectifying misconceptions, as attitude-incongruent messages are dismissed as distorted and irrational.

Choice blindness is a fallacy closely related to epistemic egocentrism. It refers to the inability to detect a discrepancy between one’s intended choice and the choice presented to them (Johansson et al., 2005, 2006): the tendency to be unaware that one’s choices and preferences have been changed or manipulated after the decision is already made. Epistemic egocentrism leads people to (1) look inward to examine their own thoughts and inner states, such as emotions, preexisting judgments, and perceptions, (2) (incorrectly) believe that they fully understand the roots of their inner states, but (3) assume that other people’s introspections are largely unreliable. When people have formed their positions and opinions based on misinformation, more or less due to manipulation, choice blindness means they tend to rationalize and justify the manipulated decision and adjust their attitudes to align with the misinformation (Stille et al., 2017), while deeming (counterattitudinal) facts and evidence products of other people’s inner states and hence largely unreliable.

Confirmation bias, the tendency to seek, interpret, and remember evidence in ways that favor existing beliefs or expectations (Nickerson, 1998; Oswald and Grosjean, 2004), can be considered a special case of epistemic egocentrism. It can occur without the presence of motivations and, in some cases, involuntarily, such that people may fall for this bias even when they have no obvious personal interest (Gilead et al., 2019; Nickerson, 1998), leading misinformation-believing individuals to reject correction regardless of their involvement (Zhou and Shen, 2022). One consequence of confirmation bias is selective exposure, a tendency for individuals to preferentially seek, attend to, and engage with information that is consistent with their inner states (i.e., preexisting beliefs, values, and attitudes), positive in valence, high in sensational value, and easy to process. Often co-occurring with confirmation bias and selective exposure is another cognitive fallacy, illusory correlation: the fallacy of perceiving a correlation where none exists or perceiving a stronger correlation than really exists (Hamilton and Rose, 1980). The (mis)belief in the MMR vaccine–autism link and the (mis)belief that the influenza vaccine gives one the flu are good examples of illusory correlation. It should be noted that illusory correlation happens not only to lay individuals but also to well-trained social scientists: type I error is not uncommon. Publication bias and self-confirmation bias often drive some scientists to cling to their theories in the face of disconfirming data that call for the rejection of those theories (Kuhn, 1962). Underlying this bias is a process in which preexisting attitudes or beliefs prime individuals to search for supportive evidence, even when it is lacking, which allows them to maintain their currently held position.
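A brief simulation illustrates how illusory correlations and type I errors can arise by chance alone; the variables and sample size here are our own illustrative assumptions:

```python
# Two independent binary variables (e.g., "got a flu shot" and "felt ill that
# week") examined across many small samples: sizable spurious sample
# correlations appear even though the true correlation is exactly zero.
import random

def sample_corr(n: int) -> float:
    """Pearson correlation of two independent coin flips over n observations."""
    xs = [random.random() < 0.5 for _ in range(n)]
    ys = [random.random() < 0.5 for _ in range(n)]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

random.seed(1)
corrs = [sample_corr(20) for _ in range(1000)]
# Share of small samples showing a noticeable "correlation" (|r| > 0.3):
print(sum(abs(r) > 0.3 for r in corrs) / 1000)  # roughly 0.15-0.20
```

An observer who selectively attends to and remembers the striking co-occurrences, as confirmation bias predicts, will inflate such chance associations even further.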

Research on the continued influence effect of misinformation offers insights into how mental models and memory processes contribute to the persistence of misinformation. People construct causal chains of events (van den Broek, 1990), in which misinformation may play a causal role (Johnson and Seifert, 1994). Correction, especially when it does not provide a causal alternative, can disrupt the causal structure supported by misinformation and leave people with an incomplete model, such that people continue to resort to misinformation for comprehension (Johnson and Seifert, 1994). Theories of information retrieval offer alternative, complementary explanations. People are susceptible to various reactivation and retrieval failures, such as misattributing the source of the misinformation (vs. corrective information), insufficiently linking a correction to the misinformation in memory such that the misinformation is retrieved unchecked despite the correction, and selectively retrieving misinformation in an automatic manner in which the false tag attached to it by correction is not co-activated (for reviews, see Ecker et al., 2022; Lewandowsky et al., 2012). Furthermore, insofar as misconceptions are formed in memory and are often inevitably repeated in a correction, misinformation becomes easier to process with greater familiarity, fluency, and accessibility, which may serve as cues that imply its truth and hedonic value and lead to more positive evaluations (Alter and Oppenheimer, 2009; Swire et al., 2017; Winkielman et al., 2003). Indeed, while corrections may suppress misinformation, they do not replace it (Gordon et al., 2017; Shtulman and Valcarcel, 2012); rather, the two coexist and compete, such that corrections have to override and inhibit misinformation for beliefs to be updated (Potvin et al., 2015; Trevors, 2019). This process, however, requires effortful, deliberate thinking, which might be hindered by cognitive miserliness, the tendency to default to less costly processing mechanisms (Pennycook and Rand, 2019; Stanovich, 2021). Hence, we have Proposition 5:

Proposition 5: The impact and persistence of misinformation are driven by various cognitive fallacies, with or without the motivational forces.

Persistence of misinformation effects as consequences

Collectively, the motivational and cognitive mechanisms reviewed in the previous section can give rise to various forms of resistance to corrective efforts, which ultimately contribute to polarization and the persistence of misinformation effects. First, individuals might engage in selective exposure and attention, whereby they selectively access and attend to (mis)information that is consistent with their prior beliefs and avoid information that contradicts them (Guess et al., 2018; Hart et al., 2009; Knobloch-Westerwick and Meng, 2009), a strategy that is further facilitated by social media environments (Franziska et al., 2019; Spohr, 2017). This means that corrections have a difficult time reaching their target audience in naturalistic settings and, even when they do, individuals tend to devote limited cognitive resources to them and are less likely to retain them.

Resistance may also manifest as biased assimilation and weighting. Biased assimilation is the tendency to perceive attitude-congruent information as more valid than attitude-incongruent information (Ahluwalia, 2000; Lord et al., 1979). When corrective information is difficult to refute, individuals resort to relative weighting where they assign less weight to attitude-inconsistent attributes and attach more weight to attitude-consistent ones in their decision-making (Ahluwalia, 2000). They may also reduce the impact of attitude-inconsistent information on their overall judgment of or attitude toward an issue (Ahluwalia, 2000). Put simply, individuals may utilize the information in a distorted way that allows them to sustain their existing beliefs.

Biased perception represents another group of strategies that are frequently employed in response to corrections. It is concerned with cognitive responses to different aspects of a persuasive message, such as its content/arguments, source, and intent (Shen et al., 2009). This can be reflected as, for example, source derogation, which involves questioning the credibility and/or expertise of the source of counterattitudinal information (Cameron et al., 2002; Zhou and Shen, 2022). Similarly, people may develop counterarguments against and refute the content of the attitude-inconsistent message (Taber and Lodge, 2006) as well as challenge the strategy used in the message (Fransen et al., 2015). They may also perceive a stronger manipulative intent from corrective messages (Shen et al., 2009), which is known to trigger reactance (Brehm, 1966), a motivational state marked by anger and critical cognitions that leads to persuasion failure (Dillard and Shen, 2005).

The ultimate manifestation of resistance lies in the boomerang or backfire effect (Hart and Nisbet, 2012; Nyhan and Reifler, 2010), which occurs when a corrective message produces effects opposite to what is intended, resulting in individuals developing an even stronger belief in misinformation. This effect, alongside selective exposure and attention, biased assimilation and weighting, and biased perception, in turn, catalyzes polarization not only on an individual scale where belief in misinformation becomes more entrenched and extreme but also at the societal level such that its dividedness is further exacerbated. Together, these processes illuminate why corrective efforts fail and why the effects of misinformation persist.

Inoculation as a prebunking strategy

As we have reviewed above, the intricate interplay of various motivational and cognitive mechanisms driving individuals who hold a prior belief in misinformation prompts the adoption of diverse strategies to resist correction and preserve their existing views. As such, unsurprisingly, correction strategies such as fact-checking often prove ineffective in countering these misconceptions (Ecker et al., 2014; Seifert, 2014; Thorson, 2016). Meta-analytic evidence has documented large effects for the persistence of misinformation despite debunking and shows that more detailed debunking messages are associated with stronger misinformation-persistence effects (Chan et al., 2017). The challenge is particularly apparent when it comes to real-world misinformation, with research suggesting that the effectiveness of debunking real-world misinformation may diminish by 60% in comparison to constructed misinformation (Walter and Murphy, 2018). Indeed, a recent meta-analysis of correction effects for science-relevant misinformation found a non-significant effect, indicating that efforts to debunk misinformation on issues such as COVID-19 and climate change were overall not successful (Chan and Albarracín, 2023). While corrective messages may be more successful in certain instances, it is evident that they do not eliminate the effect of misinformation (Walter and Tukachinsky, 2020). With that in mind, in this section, we focus our discussion on inoculation as a promising prebunking strategy for mitigating misinformation effects.

Psychological inoculation theory

The concept of inoculation in persuasion draws from a medical analogy: just as human bodies can be immunized against viruses, our attitudes and beliefs can also be shielded from persuasive attacks (McGuire, 1964). With vaccination, individuals receive a weakened form of a virus, which stimulates the production of antibodies and strengthens the immune system against future threats from the virus without causing the illness itself. Similarly, in the persuasion context, a mild version of a counterattitudinal message, one strong enough to activate defense mechanisms but not strong enough to persuade, can confer resistance to counterinfluence.

There are two main parts to an inoculation message: a forewarning that an attack on one’s current views is impending and a refutational preemption treatment that provides content one may employ to refute challenges to those views (Pfau et al., 1997; Compton, 2013). To elaborate, the forewarning component is closely tied to perceived threat, one of the key mechanisms in inoculation theory. A prerequisite for inoculation to succeed is that people develop threat perceptions, as these lead to the realization that their beliefs are vulnerable, which in turn motivates the building of defenses (Compton and Pfau, 2005; Petty and Cacioppo, 1986). Although the mere presence of counterattitudinal messages can generate threat perceptions (i.e., intrinsic threat, McGuire, 1964), an explicit forewarning message tends to be more effective (McGuire and Papageorgis, 1962). Refutational preemption, in turn, involves a two-sided message that presents a weakened form of the arguments from an anticipated attack message along with counterarguments against those arguments. In addition to this more passive refutation approach, individuals are sometimes instructed to develop their own refutational material (Compton and Ivanov, 2013; McGuire and Papageorgis, 1961). Regardless of the approach, refutational preemption serves two main functions: to provide content that can be used to refute potential attacks and to allow practice in counterarguing (Compton and Pfau, 2005; Wyer, 1974). It should be noted that counterarguing, a cognitive process that occurs after the inoculation treatment, is not the same as, and goes beyond, the refutational material presented in a refutational preemption treatment (Compton, 2013). It is this cognitive process—motivated by the perceived threat from impending attacks—that confers resistance (Insko, 1967).

Notably, while McGuire and colleagues initially limited the beliefs that could be protected by inoculation to cultural truisms (i.e., widely shared beliefs, such as the benefits of tooth brushing, on which an attack seems impossible; McGuire, 1964), the application of inoculation is no longer confined to non-controversial topics. Indeed, the theory has been shown to successfully confer resistance to attitudes in a wide range of controversial domains, such as genetically modified food and vaccines (Banas and Rains, 2010; Compton and Pfau, 2005). In addition, much has been done to address whether the efficacy of inoculation observed when the attack message presents the same arguments that are countered in the refutational preemption treatment (i.e., refutational-same) extends to situations where the attack message raises novel arguments that differ from those addressed in the treatment (i.e., refutational-different). Evidence from decades of research shows that refutational-same and refutational-different treatments are equally effective (Banas and Rains, 2010; McGuire, 1962), suggesting that inoculation offers an umbrella or blanket of protection. Further, inoculation may even offer cross-protection, such that its protection spills over from one topic to related topics (e.g., from condom use to binge drinking; Parker et al., 2012, 2016).

Applying inoculation to prebunk misinformation

Given that inoculation serves as a preemptive measure countering misinformation before its adoption, it likely provides a more effective solution than corrective measures applied after exposure. Research applying inoculation to mitigate misinformation has generated important insights supporting its utility. The majority of the early research was done in the context of climate change. For example, van der Linden et al. (2017a), testing the effectiveness of inoculation against the prevailing misinformation that there is no consensus on human-caused climate change, found that the positive impact of a message emphasizing the scientific consensus on the issue was largely preserved by inoculation. Importantly, this effect held for both Democrats and Republicans. In addition, research shows that inoculation produces full protection against climate change misinformation with a one-week delay between the treatment and the attack, demonstrating the longevity of inoculation effects (Maertens et al., 2020). Beyond climate change, inoculation has been shown to be effective in conferring resistance to misinformation in the realms of health (e.g., Jiang et al., 2022; Geegan et al., 2023), politics (e.g., Zerback et al., 2021), and marketing (e.g., Boman and Schneider, 2021), to name a few.

Building on the umbrella protection effect, scholars have further extended this research by developing a “broad-spectrum immunity” approach to inoculation that is not specific to the claims or the topics of misinformation (Lewandowsky and Van Der Linden, 2021). Cook et al.’s (2017) study represents one of the earliest studies that took this approach. In their study, they developed logic-based refutations that put into question the misleading techniques underlying misinformation. The findings of their study suggested that this strategy was effective in neutralizing the effect of misinformation (Cook et al., 2017). Similarly, Roozenbeek and van der Linden (2019a) found that playing a game that involves creating news articles using various misleading tactics successfully inoculated individuals against fake news. Testing of specific techniques that are commonly used in misinformation, such as impersonation, use of emotional language, polarization, conspiracy theories, trolling, discrediting, ad hominem attacks, incoherent arguments, scapegoating, and false dichotomies, showed that inoculation against these techniques, either with an active or passive approach, improved people’s ability to resist misinformation (Roozenbeek et al., 2022; Roozenbeek and van der Linden, 2019b). In sum, there is strong evidence that inoculation that targets manipulation tactics employed in the production of misinformation can effectively generate broad-spectrum immunity to misinformation (see also Basol et al., 2020, 2021; Maertens et al., 2021; Roozenbeek et al., 2020, 2021; Roozenbeek and van der Linden, 2020).

Another important extension of inoculation scholarship focuses on its potential to spread protection to build a societal-level immunity against misinformation, analogous to herd immunity in a medical context. When a large population is inoculated and becomes immune to misinformation, we may effectively limit and control its spread (Lewandowsky and Van Der Linden, 2021; van der Linden et al., 2017b). There is growing evidence that inoculation not only builds resistance but also increases willingness to talk about contested issues (Lin and Pfau, 2007; Lin, 2022). Postinoculation interpersonal conversations, in turn, reinforce the treatment effects and spread the treatment to a larger audience (Ivanov et al., 2012). Indeed, Ivanov et al. (2015) observed that inoculation increased advocacy-driven talk, in which individuals share both treatment-specified and novel arguments, shedding light on its diffusing potential. Much research on the effect of campaign-induced interpersonal communication suggests that the treatment diffused through postinoculation talk can then successfully build resistance among its recipients (Dillard et al., 2022; Jeong and Bae, 2018; Southwell and Yzer, 2007). In sum, inoculation may promote interpersonal talk that spreads the treatment, which in turn may protect the conversation partners from misinformation and ultimately, through this process, help build immunity at a larger scale. It is no surprise that the psychological inoculation strategy has witnessed increased applications (Serhan, 2024). Hence, we have Proposition 6:

Proposition 6: Psychological inoculation is an effective strategy to mitigate the impact and persistence of misinformation.

Discussion

In this article, framing responses to corrective information as a particular case of processing counterattitudinal information, we offer a motivational and cognitive account of the persistence of misinformation. Individuals are often motivated, sometimes automatically so, to preserve their existing attitudes. Even in the absence of motivations, the process of changing and updating beliefs has proven challenging due to various cognitive fallacies and mental mechanisms. Together, these motivational and cognitive factors give rise to multiple forms of resistance strategies, which in turn contribute to the entrenchment of misinformation effects. Understanding these underlying processes helps elucidate why debunking efforts frequently fail. A preemptive strategy, in contrast, has shown promise. By reviewing psychological inoculation theory and its application to prebunking misinformation, we present evidence demonstrating that inoculation against either the claims or the manipulative techniques used in misinformation can confer resistance, thereby protecting individuals from misinformation. Further, through interpersonal processes, such inoculation efforts have the potential to spread and establish protection at the societal level.

Data availability statement

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.

Author contributions

YZ: Conceptualization, Formal analysis, Writing – original draft, Writing – review & editing. LS: Conceptualization, Formal analysis, Writing – original draft, Writing – review & editing.

Funding

The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Acerbi, A. (2019). Cognitive attraction and online misinformation. Palgrave Commun. 5:15. doi: 10.1057/s41599-019-0224-y

Ahluwalia, R. (2000). Examination of psychological processes underlying resistance to persuasion. J. Consum. Res. 27, 217–232. doi: 10.1086/314321

Alter, A. L., and Oppenheimer, D. M. (2009). Uniting the tribes of fluency to form a metacognitive nation. Personal. Soc. Psychol. Rev. 13, 219–235. doi: 10.1177/1088868309341564

Banas, J. A., and Rains, S. A. (2010). A meta-analysis of research on inoculation theory. Commun. Monogr. 77, 281–311. doi: 10.1080/03637751003758193

Basol, M., Roozenbeek, J., Berriche, M., Uenal, F., McClanahan, W. P., and Van Der Linden, S. (2021). Towards psychological herd immunity: cross-cultural evidence for two prebunking interventions against COVID-19 misinformation. Big Data Soc. 8:205395172110138. doi: 10.1177/20539517211013868

Basol, M., Roozenbeek, J., and Van der Linden, S. (2020). Good news about bad news: gamified inoculation boosts confidence and cognitive immunity against fake news. J. Cogn. 3:2. doi: 10.5334/joc.91

Berlinski, N., Doyle, M., Guess, A. M., Levy, G., Lyons, B., Montgomery, J. M., et al. (2023). The effects of unsubstantiated claims of voter fraud on confidence in elections. J. Exp. Polit. Sci. 10, 34–49. doi: 10.1017/XPS.2021.18

Boman, C. D., and Schneider, E. J. (2021). Finding an antidote: testing the use of proactive crisis strategies to protect organizations from astroturf attacks. Public Relat. Rev. 47:102004. doi: 10.1016/j.pubrev.2020.102004

Brehm, J. W. (1966). A theory of psychological reactance. New York, NY: Academic Press.

Cameron, K. A., Jacks, J. Z., and O’Brien, M. E. (2002). An experimental examination of strategies for resisting persuasion. Curr. Res. Soc. Psychol. 7, 205–224.

Carrasco-Farré, C. (2022). The fingerprints of misinformation: how deceptive content differs from reliable sources in terms of cognitive effort and appeal to emotions. Hum. Soc. Sci. Commun. 9:162. doi: 10.1057/s41599-022-01174-9

Chaiken, S., Giner-Sorolla, R., and Chen, S. (1996). “Beyond accuracy: defense and impression motives in heuristic and systematic information processing” in The psychology of action: linking cognition and motivation to behavior. eds. P. M. Gollwitzer and J. A. Bargh (New York, NY: Guilford Press), 553–578.

Chaiken, S., Liberman, A., and Eagly, A. (1989). “Heuristic and systematic processing within and beyond the persuasion context” in Unintended thought. eds. J. S. Uleman and J. A. Bargh (New York, NY: Guilford), 212–252.

Chan, M. P. S., and Albarracín, D. (2023). A meta-analysis of correction effects in science-relevant misinformation. Nat. Hum. Behav. 7, 1514–1525. doi: 10.1038/s41562-023-01623-8

Chan, M. P. S., Jones, C. R., Hall Jamieson, K., and Albarracín, D. (2017). Debunking: a meta-analysis of the psychological efficacy of messages countering misinformation. Psychol. Sci. 28, 1531–1546. doi: 10.1177/0956797617714579

Compton, J. (2013). “Inoculation theory” in The sage handbook of persuasion: developments in theory and practice. eds. J. P. Dillard and L. Shen. 2nd ed (Thousand Oaks, CA: Sage), 220–236.

Compton, J., and Ivanov, B. (2013). Vaccinating voters: surveying political campaign inoculation scholarship. Ann. Int. Commun. Assoc. 37, 251–283. doi: 10.1080/23808985.2013.11679152

Compton, J. A., and Pfau, M. (2005). Inoculation theory of resistance to influence at maturity: recent progress in theory development and application and suggestions for future research. Ann. Int. Commun. Assoc. 29, 97–146. doi: 10.1080/23808985.2005.11679045

Cook, J., Lewandowsky, S., and Ecker, U. K. H. (2017). Neutralizing misinformation through inoculation: exposing misleading argumentation techniques reduces their influence. PLoS One 12:e0175799. doi: 10.1371/journal.pone.0175799

Cornwell, J. F., Franks, B., and Higgins, E. T. (2014). Truth, control, and value motivations: the “what,” “how,” and “why” of approach and avoidance. Front. Syst. Neurosci. 8:194. doi: 10.3389/fnsys.2014.00194

Cornwell, J. F. M., and Higgins, E. T. (2018). “The tripartite motivational human essence: value, control, and truth working together” in The Oxford handbook of the human essence. eds. M. van Zomeren and J. F. Dovidio (New York, NY: Oxford University Press), 71–81.

Desai, S. A. C., Pilditch, T. D., and Madsen, J. K. (2020). The rational continued influence of misinformation. Cognition 205:104453. doi: 10.1016/j.cognition.2020.104453

Dillard, J. P., Li, S. S., and Cannava, K. (2022). Talking about sugar-sweetened beverages: causes, processes, and consequences of campaign-induced interpersonal communication. Health Commun. 37, 316–326. doi: 10.1080/10410236.2020.1838107

Dillard, J. P., and Shen, L. (2005). On the nature of reactance and its role in persuasive health communication. Commun. Monogr. 72, 144–168. doi: 10.1080/03637750500111815

Ecker, U. K. H., Lewandowsky, S., Chang, E. P., and Pillai, R. (2014). The effects of subtle misinformation in news headlines. J. Exp. Psychol. Appl. 20, 323–335. doi: 10.1037/xap0000028

Ecker, U. K., Lewandowsky, S., Cook, J., Schmid, P., Fazio, L. K., Brashier, N., et al. (2022). The psychological drivers of misinformation belief and its resistance to correction. Nat. Rev. Psychol. 1, 13–29. doi: 10.1038/s44159-021-00006-y

Festinger, L. (1957). A theory of cognitive dissonance. Stanford, CA: Stanford University Press.

Fransen, M. L., Smit, E. G., and Verlegh, P. W. (2015). Strategies and motives for resistance to persuasion: an integrative framework. Front. Psychol. 6:1201. doi: 10.3389/fpsyg.2015.01201

Franziska, Z., Katrin, S., and Mechtild, S. (2019). Fake news in social media: bad algorithms or biased users? J. Inform. Sci. Theory Pract. 7, 40–53. doi: 10.1633/JISTaP.2019.7.2.4

Garett, R., and Young, S. D. (2021). Online misinformation and vaccine hesitancy. Transl. Behav. Med. 11, 2194–2199. doi: 10.1093/tbm/ibab128

Garrett, R. K., and Bond, R. M. (2021). Conservatives’ susceptibility to political misperceptions. Sci. Adv. 7:eabf1234. doi: 10.1126/sciadv.abf1234

Geegan, S., Ivanov, B., and Parker, K. A. (2023). Inoculation within character limits: terse messages to promote Gen Z mental health. J. Commun. Media Stud. 8, 65–86. doi: 10.18848/2470-9247/CGP/v08i02/65-86

Gilead, M., Sela, M., and Maril, A. (2019). That’s my truth: evidence for involuntary opinion confirmation. Soc. Psychol. Personal. Sci. 10, 393–401. doi: 10.1177/1948550618762300

Gordon, A., Brooks, J. C. W., Quadflieg, S., Ecker, U. K. H., and Lewandowsky, S. (2017). Exploring the neural substrates of misinformation processing. Neuropsychologia 106, 216–224. doi: 10.1016/j.neuropsychologia.2017.10.003

Greene, C. M., and Murphy, G. (2021). Quantifying the effects of fake news on behavior: evidence from a study of COVID-19 misinformation. J. Exp. Psychol. Appl. 27, 773–784. doi: 10.1037/xap0000371

Guess, A., Nyhan, B., and Reifler, J. (2018). Selective exposure to misinformation: evidence from the consumption of fake news during the 2016 US presidential campaign. Available at: https://apo.org.au/node/126961 (Accessed April 15, 2024).

Hamilton, D. L., and Rose, T. L. (1980). Illusory correlation and the maintenance of stereotypic beliefs. J. Pers. Soc. Psychol. 39, 832–845. doi: 10.1037/0022-3514.39.5.832

Hart, W., Albarracín, D., Eagly, A. H., Brechan, I., Lindberg, M. J., and Merrill, L. (2009). Feeling validated versus being correct: a meta-analysis of selective exposure to information. Psychol. Bull. 135, 555–588. doi: 10.1037/a0015701

Hart, P. S., and Nisbet, E. C. (2012). Boomerang effects in science communication: how motivated reasoning and identity cues amplify opinion polarization about climate mitigation policies. Commun. Res. 39, 701–723. doi: 10.1177/0093650211416646

Heider, F. (1946). Attitudes and cognitive organization. J. Psychol. 21, 107–112. doi: 10.1080/00223980.1946.9917275

Heinström, J., Nikou, S., and Sormunen, E. (2022). Avoiding negative information as a strategy for emotion regulation. Inf. Res. 27:isic2229. doi: 10.47989/irisic2229

Higgins, E. T. (2014). Beyond pleasure and pain: how motivation works. New York, NY: Oxford University Press.

Insko, C. A. (1967). Theories of attitude change. New York, NY: Appleton-Century-Crofts.

Ivanov, B., Parker, K. A., and Pfau, M. (2012). The interaction effect of attitude base and multiple attacks on the effectiveness of inoculation. Commun. Res. Rep. 29, 1–11. doi: 10.1080/08824096.2011.616789

Ivanov, B., Sims, J. D., Compton, J., Miller, C. H., Parker, K. A., Parker, J. L., et al. (2015). The general content of postinoculation talk: recalled issue-specific conversations following inoculation treatments. West. J. Commun. 79, 218–238. doi: 10.1080/10570314.2014.943423

Jeong, M., and Bae, R. E. (2018). The effect of campaign-generated interpersonal communication on campaign-targeted health outcomes: a meta-analysis. Health Commun. 33, 988–1003. doi: 10.1080/10410236.2017.1331184

Jiang, L. C., Sun, M., Chu, T. H., and Chia, S. C. (2022). Inoculation works and health advocacy backfires: building resistance to COVID-19 vaccine misinformation in a low political trust context. Front. Psychol. 13:976091. doi: 10.3389/fpsyg.2022.976091

Johansson, P., Hall, L., Sikström, S., and Olsson, A. (2005). Failure to detect mismatches between intention and outcome in a simple decision task. Science 310, 116–119. doi: 10.1126/science.1111709

Johansson, P., Hall, L., Sikström, S., Tärning, B., and Lind, A. (2006). How something can be said about telling more than we can know: on choice blindness and introspection. Conscious. Cogn. 15, 673–692. doi: 10.1016/j.concog.2006.09.004

Johnson, B. T., and Eagly, A. H. (1989). Effects of involvement on persuasion: a meta-analysis. Psychol. Bull. 106, 290–314. doi: 10.1037/0033-2909.106.2.290

Johnson, H. M., and Seifert, C. M. (1994). Sources of the continued influence effect: when misinformation in memory affects later inferences. J. Exp. Psychol. Learn. Mem. Cogn. 20, 1420–1436. doi: 10.1037/0278-7393.20.6.1420

Jost, J. T., Baldassarri, D. S., and Druckman, J. N. (2022). Cognitive–motivational mechanisms of political polarization in social-communicative contexts. Nat. Rev. Psychol. 1, 560–576. doi: 10.1038/s44159-022-00093-5

Jost, J. T., van der Linden, S., Panagopoulos, C., and Hardin, C. D. (2018). Ideological asymmetries in conformity, desire for shared reality, and the spread of misinformation. Curr. Opin. Psychol. 23, 77–83. doi: 10.1016/j.copsyc.2018.01.003

Kelman, H. C. (1958). Compliance, identification, and internalization: three processes of attitude change. J. Confl. Resolut. 2, 51–60. doi: 10.1177/002200275800200106

Kemp, P. L., Loaiza, V. M., and Wahlheim, C. N. (2022). Fake news reminders and veracity labels differentially benefit memory and belief accuracy for news headlines. Sci. Rep. 12:21829. doi: 10.1038/s41598-022-25649-6

Knobloch-Westerwick, S., and Meng, J. (2009). Looking the other way: selective exposure to attitude-consistent and counterattitudinal political information. Commun. Res. 36, 426–448. doi: 10.1177/0093650209333030

Kuhn, T. S. (1962). The structure of scientific revolutions. Chicago, IL: University of Chicago Press.

Kunda, Z. (1990). The case for motivated reasoning. Psychol. Bull. 108, 480–498. doi: 10.1037/0033-2909.108.3.480

Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N., and Cook, J. (2012). Misinformation and its correction: continued influence and successful debiasing. Psychol. Sci. Public Interest 13, 106–131. doi: 10.1177/1529100612451018

Lewandowsky, S., and van der Linden, S. (2021). Countering misinformation and fake news through inoculation and prebunking. Eur. Rev. Soc. Psychol. 32, 348–384. doi: 10.1080/10463283.2021.1876983

Lin, W. K. (2022). Enhancing inoculation in the spiral of silence to promote resistance to attacks: examining public opinion on Taiwan-PRC relations. Asian J. Public Opin. Res. 10, 149–177. doi: 10.15206/ajpor.2022.10.3.149

Lin, W. K., and Pfau, M. (2007). Can inoculation work against the spiral of silence? A study of public opinion on the future of Taiwan. Int. J. Public Opin. Res. 19, 155–172. doi: 10.1093/ijpor/edl030

Lord, C. G., Ross, L., and Lepper, M. R. (1979). Biased assimilation and attitude polarization: the effects of prior theories on subsequently considered evidence. J. Pers. Soc. Psychol. 37, 2098–2109. doi: 10.1037/0022-3514.37.11.2098

MacCoun, R. J. (1998). Biases in the interpretation and use of research results. Annu. Rev. Psychol. 49, 259–287. doi: 10.1146/annurev.psych.49.1.259

Maertens, R., Anseel, F., and van der Linden, S. (2020). Combatting climate change misinformation: evidence for longevity of inoculation and consensus messaging effects. J. Environ. Psychol. 70:101455. doi: 10.1016/j.jenvp.2020.101455

Maertens, R., Roozenbeek, J., Basol, M., and van der Linden, S. (2021). Long-term effectiveness of inoculation against misinformation: three longitudinal experiments. J. Exp. Psychol. Appl. 27, 1–16. doi: 10.1037/xap0000315

McGuire, W. J. (1962). Persistence of the resistance to persuasion induced by various types of prior belief defenses. J. Abnorm. Soc. Psychol. 64, 241–248. doi: 10.1037/h0044167

McGuire, W. J. (1964). Inducing resistance to persuasion: some contemporary approaches. Adv. Exp. Soc. Psychol. 1, 191–229. doi: 10.1016/S0065-2601(08)60052-0

McGuire, W. J., and Papageorgis, D. (1961). The relative efficacy of various types of prior belief-defense in producing immunity against persuasion. J. Abnorm. Soc. Psychol. 62, 327–337. doi: 10.1037/h0042026

McGuire, W. J., and Papageorgis, D. (1962). Effectiveness of forewarning in developing resistance to persuasion. Public Opin. Q. 26, 24–34. doi: 10.1086/267068

Nan, X., Thier, K., and Wang, Y. (2023). Health misinformation: what it is, why people believe it, how to counter it. Ann. Int. Commun. Assoc. 47, 381–410. doi: 10.1080/23808985.2023.2225489

Nickerson, R. S. (1998). Confirmation bias: a ubiquitous phenomenon in many guises. Rev. Gen. Psychol. 2, 175–220. doi: 10.1037/1089-2680.2.2.175

Nyhan, B. (2021). Why the backfire effect does not explain the durability of political misperceptions. Proc. Natl. Acad. Sci. U. S. A. 118:e1912440117. doi: 10.1073/pnas.1912440117

Nyhan, B., and Reifler, J. (2010). When corrections fail: the persistence of political misperceptions. Polit. Behav. 32, 303–330. doi: 10.1007/s11109-010-9112-2

Osgood, C. E., and Tannenbaum, P. H. (1955). The principle of congruity in the prediction of attitude change. Psychol. Rev. 62, 42–55. doi: 10.1037/h0048153

Oswald, M. E., and Grosjean, S. (2004). “Confirmation bias” in Cognitive illusions: a handbook on fallacies and biases in thinking, judgment and memory. ed. R. F. Pohl (London: Psychology Press), 79–96.

Parker, K. A., Ivanov, B., and Compton, J. (2012). Inoculation's efficacy with young adults’ risky behaviors: can inoculation confer cross-protection over related but untreated issues? Health Commun. 27, 223–233. doi: 10.1080/10410236.2011.575541

Parker, K. A., Rains, S. A., and Ivanov, B. (2016). Examining the “blanket of protection” conferred by inoculation: the effects of inoculation messages on the cross-protection of related attitudes. Commun. Monogr. 83, 49–68. doi: 10.1080/03637751.2015.1030681

Pennycook, G., and Rand, D. G. (2019). Lazy, not biased: susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition 188, 39–50. doi: 10.1016/j.cognition.2018.06.011

Petty, R. E., and Cacioppo, J. T. (1986). “The elaboration likelihood model of persuasion” in Advances in experimental social psychology. ed. L. Berkowitz, vol. 19 (New York, NY: Academic Press), 123–205.

Pfau, M., Tusing, K. J., Koerner, A. F., Lee, W., Godbold, L. C., Penaloza, L. J., et al. (1997). Enriching the inoculation construct: the role of critical components in the process of resistance. Hum. Commun. Res. 24, 187–215. doi: 10.1111/j.1468-2958.1997.tb00413.x

Potvin, P., Masson, S., Lafortune, S., and Cyr, G. (2015). Persistence of the intuitive conception that heavier objects sink more: a reaction time study with different levels of interference. Int. J. Sci. Math. Educ. 13, 21–43. doi: 10.1007/s10763-014-9520-6

Pronin, E., Lin, D., and Ross, L. (2002). The bias blind spot: perceptions of bias in self and others. Personal. Soc. Psychol. Bull. 28, 369–381. doi: 10.1177/0146167202286008

Putnam, A. L., Sungkhasettee, V. W., and Roediger, H. L. III. (2017). When misinformation improves memory: the effects of recollecting change. Psychol. Sci. 28, 36–46. doi: 10.1177/0956797616672268

Roozenbeek, J., Maertens, R., McClanahan, W., and van der Linden, S. (2021). Disentangling item and testing effects in inoculation research on online misinformation: Solomon revisited. Educ. Psychol. Meas. 81, 340–362. doi: 10.1177/0013164420940378

Roozenbeek, J., and van der Linden, S. (2019a). The fake news game: actively inoculating against the risk of misinformation. J. Risk Res. 22, 570–580. doi: 10.1080/13669877.2018.1443491

Roozenbeek, J., and van der Linden, S. (2019b). Fake news game confers psychological resistance against online misinformation. Palgrave Commun. 5, 1–10. doi: 10.1057/s41599-019-0279-9

Roozenbeek, J., and van der Linden, S. (2020). Breaking harmony square: a game that “inoculates” against political misinformation. Harvard Kennedy Sch. Misinform. Rev. 1. doi: 10.37016/mr-2020-47

Roozenbeek, J., van der Linden, S., Goldberg, B., Rathje, S., and Lewandowsky, S. (2022). Psychological inoculation improves resilience against misinformation on social media. Sci. Adv. 8:eabo6254. doi: 10.1126/sciadv.abo6254

Roozenbeek, J., van der Linden, S., and Nygren, T. (2020). Prebunking interventions based on the psychological theory of “inoculation” can reduce susceptibility to misinformation across cultures. Harvard Kennedy Sch. Misinform. Rev. 1. doi: 10.37016/mr-2020-008

Royzman, E. B., Cassidy, K. W., and Baron, J. (2003). “I know, you know”: epistemic egocentrism in children and adults. Rev. Gen. Psychol. 7, 38–65. doi: 10.1037/1089-2680.7.1.38

Schmid, P., Altay, S., and Scherer, L. D. (2023). The psychological impacts and message features of health misinformation: a systematic review of randomized controlled trials. Eur. Psychol. 28, 162–172. doi: 10.1027/1016-9040/a000494

Schwalbe, M. C., Cohen, G. L., and Ross, L. D. (2020). The objectivity illusion and voter polarization in the 2016 presidential election. Proc. Natl. Acad. Sci. U. S. A. 117, 21218–21229. doi: 10.1073/pnas.1912301117

Seifert, C. M. (2014). “The continued influence effect: the persistence of misinformation in memory and reasoning following correction” in Processing inaccurate information: theoretical and applied perspectives from cognitive science and the educational sciences. eds. D. N. Rapp and J. L. G. Braasch (Cambridge, MA: MIT Press), 39–71.

Serhan, Y. (2024). Inside Google’s plans to combat misinformation ahead of the E.U. elections. Time Mag. Available at: https://time.com/6970488/google-jigsaw-eu-elections-misinformation-prebunking/ (Accessed April 27, 2024).

Sharot, T., and Sunstein, C. R. (2020). How people decide what they want to know. Nat. Hum. Behav. 4, 14–19. doi: 10.1038/s41562-019-0793-1

Shen, L., Monahan, J. L., Rhodes, N., and Roskos-Ewoldsen, D. R. (2009). The impact of attitude accessibility and decision style on adolescents’ biased processing of health-related public service announcements. Commun. Res. 36, 104–128. doi: 10.1177/0093650208326466

Shen, L., Price Dillard, J., Tian, X., Cruz, S., and Smith, R. A. (2022). The role of fatigue in a campus COVID-19 safety behaviors campaign. Am. Behav. Sci. doi: 10.1177/00027642221124668

Shen, L., and Zhou, Y. (2021). Epistemic egocentrism and processing of vaccine misinformation (Vis-à-Vis scientific evidence): the case of vaccine–autism link. Health Commun. 36, 1405–1416. doi: 10.1080/10410236.2020.1761074

Sherif, M., and Hovland, C. I. (1961). Social judgment: assimilation and contrast effects in communication and attitude change. New Haven, CT: Yale University Press.

Sherif, C. W., Kelly, M., Rodgers, H. L. Jr., Sarup, G., and Tittler, B. I. (1973). Personal involvement, social judgment, and action. J. Pers. Soc. Psychol. 27, 311–328. doi: 10.1037/h0034948

Sherif, C. W., Sherif, M., and Nebergall, R. E. (1965). Attitude and attitude change: the social judgment-involvement approach. Philadelphia, PA: Saunders.

Sherman, D. K., and Cohen, G. L. (2006). “The psychology of self-defense: self-affirmation theory” in Advances in experimental social psychology. ed. M. P. Zanna, vol. 38 (Amsterdam: Elsevier Academic Press), 183–242.

Shtulman, A., and Valcarcel, J. (2012). Scientific knowledge suppresses but does not supplant earlier intuitions. Cognition 124, 209–215. doi: 10.1016/j.cognition.2012.04.005

Soll, J. (2016). The long and brutal history of fake news. POLITICO Magazine. Available at: http://politi.co/2FaV5W9 (Accessed April 15, 2024).

Southwell, B. G., and Yzer, M. C. (2007). The roles of interpersonal communication in mass media campaigns. Ann. Int. Commun. Assoc. 31, 420–462. doi: 10.1080/23808985.2007.11679072

Spohr, D. (2017). Fake news and ideological polarization: filter bubbles and selective exposure on social media. Bus. Inf. Rev. 34, 150–160. doi: 10.1177/0266382117722446

Staender, A., Humprecht, E., Esser, F., Morosoli, S., and Van Aelst, P. (2022). Is sensationalist disinformation more effective? Three facilitating factors at the national, individual, and situational level. Digit. J. 10, 976–996. doi: 10.1080/21670811.2021.1966315

Stanovich, K. E. (2021). “Why humans are cognitive misers and what it means for the great rationality debate” in Routledge handbook of bounded rationality. ed. R. Viale (Oxford: Routledge), 196–206.

Steele, C. M. (1988). “The psychology of self-affirmation: sustaining the integrity of the self” in Advances in experimental social psychology. ed. L. Berkowitz, vol. 21 (New York, NY: Academic Press), 261–302.

Steele, C. M., and Spencer, S. J. (1992). The primacy of self-integrity. Psychol. Inq. 3, 345–346. doi: 10.1207/s15327965pli0304_14

Stille, L., Norin, E., and Sikström, S. (2017). Self-delivered misinformation: merging the choice blindness and misinformation effect paradigms. PLoS One 12:e0173606. doi: 10.1371/journal.pone.0173606

Swire, B., Ecker, U. K. H., and Lewandowsky, S. (2017). The role of familiarity in correcting inaccurate information. J. Exp. Psychol. Learn. Mem. Cogn. 43, 1948–1961. doi: 10.1037/xlm0000422

Taber, C. S., and Lodge, M. (2006). Motivated skepticism in the evaluation of political beliefs. Am. J. Polit. Sci. 50, 755–769. doi: 10.1111/j.1540-5907.2006.00214.x

Thorson, E. (2016). Belief echoes: the persistent effects of corrected misinformation. Polit. Commun. 33, 460–480. doi: 10.1080/10584609.2015.1102187

Treen, K. M. D., Williams, H. T. P., and O'Neill, S. J. (2020). Online misinformation about climate change. WIREs Clim. Change 11:e665. doi: 10.1002/wcc.665

Trevors, G. (2019). “Psychological tribes and processes: understanding why and how misinformation persists” in Misinformation, “quackery,” and “fake news” in education. eds. P. Kendeou and D. Robinson (Charlotte, NC: Information Age Publishing), 55–80.

Uri, J. (2020). 90 years of our changing views of Earth. Available at: https://www.nasa.gov/history/90-years-of-our-changing-views-of-earth/ (Accessed April 15, 2024).

van den Broek, P. (1990). “Causal inferences and the comprehension of narrative texts” in Psychology of learning and motivation: inferences and text comprehension. eds. A. Graesser and G. Bower, vol. 25 (San Diego, CA: Academic Press), 175–196.

van der Linden, S., Albarracín, D., Fazio, L., Freelon, D., Roozenbeek, J., Swire-Thompson, B., et al. (2023). Using psychological science to understand and fight health misinformation: an APA consensus statement. American Psychological Association. Available at: https://www.apa.org/pubs/reports/health-misinformation

van der Linden, S., Leiserowitz, A., Rosenthal, S., and Maibach, E. (2017a). Inoculating the public against misinformation about climate change. Global Chall. 1:1600008. doi: 10.1002/gch2.201600008

van der Linden, S., Maibach, E., Cook, J., Leiserowitz, A., and Lewandowsky, S. (2017b). Inoculating against misinformation. Science 358:1141. doi: 10.1126/science.aar4533

van der Linden, S., Panagopoulos, C., and Roozenbeek, J. (2020). You are fake news: political bias in perceptions of fake news. Media Cult. Soc. 42, 460–470. doi: 10.1177/0163443720906992

Vraga, E. K., and Bode, L. (2020). Defining misinformation and understanding its bounded nature: using expertise and evidence for describing misinformation. Polit. Commun. 37, 136–144. doi: 10.1080/10584609.2020.1716500

Vrinten, J., Van Royen, K., Pabian, S., De Backer, C., and Matthys, C. (2022). Motivations for nutrition information-seeking behavior among Belgian adults: a qualitative study. BMC Public Health 22:2432. doi: 10.1186/s12889-022-14851-w

Walter, N., and Murphy, S. T. (2018). How to unring the bell: a meta-analytic approach to correction of misinformation. Commun. Monogr. 85, 423–441. doi: 10.1080/03637751.2018.1467564

Walter, N., and Tukachinsky, R. (2020). A meta-analytic examination of the continued influence of misinformation in the face of correction: how powerful is it, why does it happen, and how to stop it? Commun. Res. 47, 155–177. doi: 10.1177/0093650219854600

Winkielman, P., Schwarz, N., Fazendeiro, T. A., and Reber, R. (2003). “The hedonic marking of processing fluency: implications for evaluative judgment” in The psychology of evaluation: affective processes in cognition and emotion. eds. J. Musch and K. C. Klauer (Mahwah, NJ: Lawrence Erlbaum Associates Publishers), 189–217.

Wyer, R. S. (1974). Cognitive organization and change: an information processing approach. Potomac, MD: Taylor & Francis.

Zerback, T., Töpfl, F., and Knöpfle, M. (2021). The disconcerting potential of online disinformation: persuasive effects of astroturfing comments and three strategies for inoculation against them. New Media Soc. 23, 1080–1098. doi: 10.1177/1461444820908530

Zhou, Y., and Shen, L. (2022). Confirmation bias and the persistence of misinformation on climate change. Commun. Res. 49, 500–523. doi: 10.1177/00936502211028049

Keywords: misinformation persistence, correction, motivational bias, cognitive fallacies, psychological inoculation

Citation: Zhou Y and Shen L (2024) Processing of misinformation as motivational and cognitive biases. Front. Psychol. 15:1430953. doi: 10.3389/fpsyg.2024.1430953

Received: 10 May 2024; Accepted: 16 August 2024;
Published: 30 August 2024.

Edited by:

Claude H. Miller, University of Oklahoma, United States

Reviewed by:

Hillary Shulman, The Ohio State University, United States
Donald Braman, George Washington University, United States

Copyright © 2024 Zhou and Shen. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Lijiang Shen, lus32@psu.edu

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.