
CONCEPTUAL ANALYSIS article

Front. Sociol., 11 July 2023
Sec. Sociological Theory
This article is part of the Research Topic Theories, Methods, Practices, and Fields of Digital Social Research

Overcoming the fragmentation in online propaganda literature: the role of cultural and cognitive sociology

  • 1Interdisciplinary Centre for Gender Studies (ICFG), University of Bern, Bern, Switzerland
  • 2Department of Sociology, University of Trento, Trento, Italy

Evidence concerning the proliferation of propaganda on social media has renewed scientific interest in persuasive communication practices, resulting in a thriving yet quite disconnected scholarship. This fragmentation poses a significant challenge, as the absence of a structured and comprehensive organization of this extensive literature hampers the interpretation of findings, thus jeopardizing the understanding of how online propaganda functions. To address this fragmentation, I propose a systematization approach that uses Druckman's Generalizing Persuasion Framework as a unified interpretative tool to organize this scholarly work. This approach makes it possible to systematically identify the various strands within the field, detect their respective shortcomings, and formulate new strategies to bridge these research strands and advance our knowledge of how online propaganda operates. I conclude by arguing that these strategies should draw on the sociocultural perspectives offered by cognitive and cultural sociology, as these provide important insights and research tools to disentangle and evaluate the role played by supra-individual factors in the production, distribution, consumption, and evaluation of online propaganda.

1. Introduction

The emergence of the Internet has revolutionized the way political information is created, disseminated, and consumed, particularly through Social Networking Platforms (SNPs), which have become a crucial arena for political communication. Given their centrality, the circulation on such platforms of political content that appears dubious in regard to its factuality, approval rates, and political motive has raised widespread concern. Numerous investigations have, indeed, uncovered the widespread presence of political content on SNPs that masquerades as reliable, neutral information—though it aims to discredit opposing viewpoints rather than serving an informative purpose (Tucker et al., 2018)—and it is frequently amplified through the systematic use of automated tools and impersonation of accounts (Woolley and Howard, 2018).

Such findings have urged scholars to extensively address this alarming phenomenon—which in this paper is referred to as online propaganda—leading to a thriving yet quite disconnected body of literature. This fragmentation can be attributed to the interdisciplinary nature of the research itself as well as to the complexity and multidimensionality of the phenomenon it addresses. In fact, scholars investigating online propaganda not only employ various theoretical and methodological approaches, but they also focus on different dimensions of the phenomenon, contributing to the disconnected nature of the literature. This poses a significant challenge, as the absence of a structured and comprehensive organization of the extensive literature generated thus far hampers the interpretation and systematization of findings in relation to previous research within this field. As a result, the overall comprehension of the phenomenon is jeopardized.

To address this fragmentation, I propose a novel approach to analyzing and structuring the existing body of literature on online propaganda: using Druckman's (2022) Generalizing Persuasion (GP) Framework as a unified interpretative tool to organize this scholarly work. By adopting this approach, I argue, it is possible to systematically identify the various strands within this field of study and recognize their respective shortcomings. Additionally, it enables the formulation of new strategies to bridge these research strands and advance our comprehension of how online propaganda operates.

Hence, this paper is organized as follows: firstly, it presents a theoretical definition of the phenomenon under investigation and introduces the GP Framework as a means to consolidate the fragmented literature; secondly, it examines the different strands within the literature using this framework; finally, it addresses the limitations of each strand and suggests approaches to overcome them while also establishing connections between the various strands. In particular, arguments are made in favor of incorporating sociocultural perspectives into both the theoretical conceptualization and empirical evaluation of how online propaganda functions, as an effective way to bridge the literature.

2. Defining online propaganda

Propaganda remains a much-debated term, applied in very diverse contexts with different meanings and implications. The conceptual and, thus, terminological entropy characterizing the literature on propaganda is related to the multidimensional nature of the phenomenon—do we understand propaganda as a communication practice, a public opinion issue, or a “more general” political phenomenon?—as well as to its complex relation with persuasion—is one a subcategory of the other, or are there structural differences between the two?

Drawing from Jowett and O'Donnell (2018), in this paper propaganda is understood as a specific class of communication that involves two actors—a sender and a receiver—who, through a process of symbolic interaction, use information in an attempt to share meaning. Although propaganda has important similarities with persuasion—highlighted by the presence of persuasive communication elements—it differs from the latter in one crucial aspect. While persuasive and propagandist communication share the same aim—i.e., to influence a targeted audience into voluntarily adopting a point of view and/or a behavior favoring the sender's interest—the former is overt about its persuasive intentions, while the latter is not (Jowett and O'Donnell, 2018). Indeed, propaganda wants to pass as informative communication—whose only purpose is to create mutual understanding of data, concepts, and ideas that are considered accurate and fact-based. The major difference between persuaders and propagandists thus lies in the fact that the former do not want to appear as informers, while the latter do (Jowett and O'Donnell, 2018). Building on these considerations, propaganda can therefore be defined as a type of communication with a concealed persuasive aim, which is pursued in a systematic and organized way—i.e., with a clear and deliberate political strategy (Jowett and O'Donnell, 2018).

To depict online propaganda, however, a further specification is required. Indeed, the Internet—and, in particular, SNPs—has profoundly altered “classical” top-down communication models, where sender-receiver roles were usually static and unidirectional (Wanless and Berk, 2021). In fact, these platforms have transformed their users into productive consumers—or prosumers (Fuchs, 2014)—who, rather than being passively exposed to information, have an active role in its production and circulation. This aspect is of crucial importance for online propaganda, as it blurs the traditional distinction between “propagandist” and “targeted audience”, transforming social media users into potentially active campaigners (Wanless and Berk, 2021). This is why, compared to its traditional form, online propaganda is participatory: it tries to co-opt its targeted audience into actively engaging in the spread of its messages (Wanless and Berk, 2021).

This conception of online propaganda is far from being definitive. Nonetheless, it is particularly enlightening as it stresses a fundamental—and yet often overlooked—aspect of this phenomenon, namely the fact that, as a communication practice, online propaganda necessarily involves the interaction between two actors: a sender and a recipient.

Despite this relational aspect, studies addressing online propaganda often tend to focus on either one or the other actor, adopting different theoretical and empirical approaches that are rarely in dialogue with each other. This has produced a thriving yet quite disconnected scholarship, which makes its systematization particularly arduous. The lack of dialogue between such approaches—and their findings—hinders a coherent advancement of the literature, potentially jeopardizing a comprehensive understanding of the phenomenon. To overcome this issue, the following section proposes a systematization of the current scholarship under a common framework—the Generalizing Persuasion (GP) Framework by Druckman (2022). The intent is to evaluate different strands of literature on the same conceptual basis to clearly determine the contribution of each approach as well as its limitations, and later identify potential strategies to overcome existing shortcomings and bridge the two approaches.

3. Systematizing a disconnected literature under the GP Framework

The GP Framework (Druckman, 2022) was developed as a conceptual tool to systematize, and draw generalizations from, the vast but highly fragmented scholarship on persuasion. It is designed to highlight the sources of variation considered pivotal for understanding the phenomenon, so that connections between different studies addressing persuasion can be easily identified. Given the incorporation of elements of persuasive communication into propagandistic communication and the—though covert—persuasive aim of the latter (Jowett and O'Donnell, 2018), the GP Framework is considered particularly suitable for the systematization task at hand.

This framework identifies four core elements (or dimensions) in the study of persuasive communication: actors, treatments, outcomes, and settings. Each of them encompasses different components that serve to better specify the aspects addressed by each dimension. Druckman makes clear that the GP Framework by no means requires researchers to account for all the dimensions (and all their respective components) when investigating persuasive communication; rather, it urges them to be explicit about which elements they study and how, so that dialogue within the literature can emerge and potential contradictions can be overcome.

When assessing the literature on online propaganda by means of the GP Framework, it appears evident that the discontinuity previously mentioned is not only due to different theoretical and methodological traditions, but it is also related to different research goals, which are reflected in the dimensions addressed—and neglected—by the investigations. When accounting for these, two major strands of the literature emerge.

The first—which we could call “supply-side”—is mostly preoccupied with the question of how people get exposed to propaganda content on SNPs, thus focusing on the processes and motivations behind the production, supply, and availability of propaganda—including patterns of exposure to and engagement with such political material (Guess and Lyons, 2020). As such, investigations belonging to this strand tend to study a specific component of the “actors” dimension, namely the “speaker(s)”, completely neglecting the “receiver(s)” component and, consequently, the “outcome” dimension altogether.

Conversely, the second—which can be labeled “demand-side”—engages with the study of the effects that exposure to online propaganda produces on its targeted audience, thus concentrating on the mechanisms underlying the persuasion process (Adam-Troian, 2022). Studies interested in this tend to focus on the “receiver(s)” component of the “actors” dimension, overlooking its counterpart (i.e., “speaker(s)”) but thoroughly addressing the “outcome” dimension instead.
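The distinction drawn above can be made concrete by treating each strand as a characteristic coding profile over the GP dimensions. The following Python sketch is purely illustrative: the `Study` records, field names, and classification rule are my own simplification for exposition, not part of Druckman's framework. It codes hypothetical literature entries by the GP dimensions and actor components they address, and sorts them into the two strands as characterized in the text.

```python
from dataclasses import dataclass, field

# The four GP Framework dimensions (Druckman, 2022), used here as a coding scheme.
DIMENSIONS = {"actors", "treatments", "outcomes", "settings"}

@dataclass
class Study:
    """A literature entry coded by the GP dimensions and actor components it addresses."""
    citation: str
    dimensions: set          # subset of DIMENSIONS the study covers
    actor_components: set = field(default_factory=set)  # e.g. "speakers", "receivers"

def classify_strand(study: Study) -> str:
    """Heuristic strand assignment mirroring the distinction drawn in the text:
    supply-side work codes 'speakers' while neglecting the 'outcomes' dimension;
    demand-side work codes 'receivers' and addresses 'outcomes'."""
    if "speakers" in study.actor_components and "outcomes" not in study.dimensions:
        return "supply-side"
    if "receivers" in study.actor_components and "outcomes" in study.dimensions:
        return "demand-side"
    return "unclassified"

# Hypothetical coded entries for illustration only (codings are simplified).
corpus = [
    Study("Bradshaw and Howard (2018)", {"actors", "treatments", "settings"}, {"speakers"}),
    Study("Pennycook and Rand (2019)", {"actors", "treatments", "outcomes", "settings"}, {"receivers"}),
]

for s in corpus:
    print(s.citation, "->", classify_strand(s))
```

A coding scheme of this kind also makes the gaps visible: entries returning "unclassified" flag studies whose dimension profile cuts across, or falls outside, the two strands described here.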

In the following sections, these two approaches will be further examined by means of the GP Framework, with the aim of better positioning their contributions within the literature, addressing their limitations, and discussing how these—if not accounted for—may represent important shortcomings for the sustainable development of the literature.

4. The supply-side approach: propagandists, their aims, and their strategies

Scholars adopting a supply-side approach to the study of online propaganda are usually preoccupied with the identification and classification of the political actors involved in this communication practice, the aims underlying their actions, and the strategies implemented to accomplish them—all while considering the specific features of the different political contexts in which such a phenomenon arises. As such, these investigations mostly focus on three dimensions of the GP Framework, namely “actors”, “treatments”, and “settings”, though they do not address all the components these dimensions encompass.

Most notably, they devote most of their attention to the “speaker(s)” component of the “actors” dimension, concentrating research efforts on identifying the “type(s)” of speakers involved in the production, distribution, and proliferation of propaganda on SNPs, as well as the “motivations” behind their actions. Indeed, this strand of the literature—whose research agenda has been largely stimulated by the Computational Propaganda Project at the Oxford Internet Institute (2023) and the Observatory on Social Media at Indiana University (OSoMe, 2023)—is mostly preoccupied with uncovering the identity, location, and motives of online propagandists, as well as their organizational practices and methods of dissemination (Guess and Lyons, 2020). This work—often complemented by reports from non-academic sources, such as journalists, intelligence agencies, and SNPs themselves—has provided important insights into the characteristics of both online propagandists and the proliferation patterns of their messages.

Studies on the production and supply of online propaganda have shown that it is a widespread communication practice implemented by numerous actors around the globe, who are very diverse in terms of identity, organizational structure, and motives (Woolley and Howard, 2017, 2018; Bradshaw and Howard, 2018). Ranging from state and intelligence agencies (Bastos and Farkas, 2019; Dawson and Innes, 2019) to teenage groups (Kirby, 2016) and political extremists (Marwick and Lewis, 2017), these “speakers” have very different aims for propagating their messages. Some are driven by economic motives, while others pursue political aims, ranging from social control in authoritarian states to issue-salience alteration and framing for electoral purposes in democratic regimes (Woolley and Howard, 2017). In terms of organizational structure, these actors also vary greatly, with investigations showing significant differences in capacity, coordination, and resources (Bradshaw and Howard, 2018).

The supply-side literature on online propaganda has also provided important insights into the “treatments” and “settings” dimensions of the phenomenon by exploring the production and dissemination strategies employed by propagandists on SNPs. By mapping propaganda networks (e.g., Ferrara et al., 2016; Benkler et al., 2018; Vosoughi et al., 2018; Ahmed et al., 2020) as well as the type of messages circulating through them (e.g., Howard and Kollanyi, 2016; Rosińska, 2021), researchers have developed a detailed depiction of the online ecosystems where this kind of material proliferates, identifying diffusion patterns as well as techniques adopted to maximize message propagation. On the latter point, they have uncovered the widespread use of automation (often combined with human curation) to enhance the circulation of specific political stances, as documented in numerous studies on political bots (e.g., McKelvey and Dubois, 2017; Woolley and Howard, 2018; Ferrara, 2020). It is worth noting that automation, in addition to enhancing dissemination, also serves to keep propagandists anonymous throughout the communication process—a crucial aspect in ensuring that their intentions, too, remain concealed.

Despite a preponderant focus on “speakers”, “settings”, and “treatments”, the supply-side strand also includes contributions that have tried to address the “outcome” dimension of the phenomenon (e.g., Forelle et al., 2015; Woolley and Howard, 2016; Bradshaw and Howard, 2018). However, these studies remain speculative in nature, as their authors never empirically assess their postulations on online propaganda effects—which they identify as the “manipulation of public opinion” (Woolley and Howard, 2018). They do develop theoretical considerations regarding the macro-processes through which the manipulative power of online propaganda unfolds, but such considerations account only for the alteration of political narratives on SNPs, failing to explain how this alteration translates into the manipulation of public opinion (Camargo and Simon, 2022). The authors recognize this issue, attributing it to the difficulty of empirically establishing causal claims about effects that transcend the online realm (Woolley and Howard, 2018). However, I would argue that the problem is first and foremost theoretical and that the empirical challenges identified by these authors are a consequence of it.

As a type of communication, online propaganda involves the interaction between two actors: “speakers” and “receivers”. It is in the unfolding of this interaction—and not in the actions of one or the other actor alone—that the mechanisms determining the persuasive result should be sought. It follows that, if “receivers” and their response to online propaganda are systematically neglected in the postulation of such mechanisms, the assessment of the persuasive outcome becomes virtually impossible.

Therefore, to go beyond postulations that simply assume an automatic link (rather than a testable mechanism) between exposure to propaganda material and a voluntary change of attitudes and/or behavior, direct engagement with propaganda recipients must be envisaged when discussing online propaganda “outcomes”.

5. The demand-side approach: exploring online propaganda effects on its targets

Scholars adopting a “demand-side” approach to the study of online propaganda are interested in exploring the effects this communication practice generates in its targeted audience. As such, their investigations tend to focus on the persuasion process(es) triggered by these political messages, with the aim to identify the micro-mechanisms underpinning evaluations and behavior of those exposed to them. Given these research objectives, contributions belonging to this strand of the literature touch on all four dimensions of the GP Framework (i.e., “actors”, “treatments”, “outcomes”, and “settings”), albeit focusing on only some of the components that fall under these dimensions. Indeed, the demand-side scholarship is interested in assessing online propaganda “outcomes” by identifying factors that influence them—namely, “settings”, “treatments”, and characteristics of “receiver(s)”—and explaining their underlying mechanisms—i.e., the “process” through which information is assessed and decisions are formed.

Unlike their supply-side colleagues, demand-side scholars are not interested in investigating how social media users end up being exposed to online propaganda; rather, they are concerned with the cognitive mechanisms that are activated once exposure has occurred. It follows that their primary focus is “receivers” and the way they process and respond to online propaganda. Therefore, the “outcomes” they are interested in are those related specifically to these actors, that is, the observable voluntary changes in the “attitudes” and “behaviors” of the recipients.

As previously discussed, it is extremely complex to causally link exposure to online propaganda with offline behavior (e.g., voting, demonstrating, rioting), as “the effects [of online political persuasive content] on electoral outcomes or other behavior have yet to be reliably detected” (Guess and Lyons, 2020, p. 22). Therefore, the outcomes studied usually concern receivers' evaluation of online propaganda material and the (online) responses such material elicits. On the one hand, this means assessing users' perceptions of information credibility (e.g., Castillo et al., 2011; Metzger and Flanagin, 2013; Braddock and Morrison, 2020; Wittenberg and Berinsky, 2020), reliability (e.g., Diviani et al., 2015), and accuracy (e.g., Lucassen and Schraagen, 2011; Pennycook et al., 2018). On the other hand, it means investigating online engagement—usually operationalized as sharing behavior—prompted by propaganda content (Islam et al., 2020; Pennycook and Rand, 2021; Liang et al., 2022; Song et al., 2023).

To understand whether and how these evaluations and behaviors are altered by online propaganda, researchers have identified and investigated the influence exerted by three main factors: the design of online propaganda messages (i.e., the “treatments”), the features of the information environments in which these messages are circulated and processed (i.e., the “settings”), and the characteristics of those exposed to such messages (i.e., the “receivers”).

Studies addressing the first factor investigate how different message features that are commonly employed to evaluate information validity and salience—such as source (e.g., Ecker et al., 2022), endorsement (e.g., Metzger et al., 2010; Metzger and Flanagin, 2013), emotional salience (e.g., Ali and Zain-ul-abdin, 2021; Song et al., 2023), popularity (e.g., Haim et al., 2018), stereotypes (e.g., Lombardi Vallauri, 2021), and topic (Schaewitz et al., 2020)—are purposefully manipulated and used by online propagandists to alter the attitudes and behaviors of their targeted audience.

Investigations focusing on the second factor explore how the very design of SNPs affects the way information is accessed, processed, and finally evaluated. Indeed, such environments are characterized by an overabundance of informational stimuli constantly competing for users' attention—a condition that impairs the ability to process and evaluate information analytically (Pittman and Haley, 2023) and that, as such, can be exploited by online propagandists to enhance the circulation of their messages (e.g., Islam et al., 2020; Apuke and Omar, 2021; Sanderson et al., 2022).

Finally, when exploring the role recipients' characteristics have in affecting online propaganda persuasive outcome, demand-side scholars tend to concentrate on those aspects that are in direct relation to the piece of information evaluated, namely receivers' “prior attitudes” and “evaluative beliefs” toward the issue addressed by propaganda messages (e.g., Ma et al., 2019; Hameleers et al., 2020; Rhodes, 2022), as well as the “motivation” (e.g., Van Bavel and Pereira, 2018; Stanley et al., 2020) and “effort” (e.g., Pennycook and Rand, 2019) these actors put into processing the information they are exposed to.

To provide compelling explanations on how these factors affect the evaluation process of online propaganda material and determine a voluntary change of attitudes and/or behavior in the targeted audience, researchers often resort to the concept of heuristic reasoning. This draws from the Dual Process Models of Cognition—a theory that envisages the existence of two distinct but interdependent systems that regulate the thinking process (Gilovich et al., 2002; Vaisey, 2009; Kahneman, 2011; Lizardo et al., 2016; Bryanov and Vziatysheva, 2021). Despite the ongoing debate on the specific characteristics and relations between these two systems (Cerulo et al., 2021), scholars agree on their basic functioning: one system is intuitive and does not require controlled attention during information processing, while the other is deliberate and necessitates more cognitive resources to perform mental tasks (Gilovich et al., 2002).

Because of its speed and (low) cognitive-energy demands, the former system is often employed in situations where reasoning capacity is impaired—such as in the case of high informational load (Ayres and van Gog, 2009)—or in cases of high uncertainty—when fast solutions are preferred (Kahneman et al., 1982). Though satisfactory for reaching immediate goals, this system is subject to systematic bias, as it relies on the uncritical application of preexisting knowledge structures (i.e., heuristics) rather than an in-depth analysis of the information received (DiMaggio, 1997). Therefore, the reliance on this system when processing information can lead to flawed evaluations and behavior (Gilovich et al., 2002).

Given its premises, the Dual Process Model—and, in particular, the concept of heuristic processing—has been largely employed by demand-side scholars to theoretically postulate and then empirically explore the cognitive mechanisms underpinning the assessment of online propaganda. Regardless of the specific factor (treatments, settings, or receivers) or outcome (evaluations or sharing behavior) investigated, empirical discoveries offered by this strand of the literature indicate that heuristic reasoning plays an important role in the persuasion process prompted by online propaganda (Bryanov and Vziatysheva, 2021).

Though compelling, these results are affected by some important limitations, which mostly concern the way these cognitive mechanisms are conceived and, consequently, assessed. Overall, studies building on the Dual Process Model framework tend to take a universalistic approach to cognition, meaning that they conceive of cognitive processes as common procedures shared by all individuals regardless of their socioeconomic, cultural, ethnic, or political background (Lamont et al., 2017; Kuo and Marwick, 2021). However, such factors have been shown to play an important role in shaping these processes, as they make some cognitive referents more accessible or prominent than others during information assessment and decision-making (Bruch and Feinberg, 2017; Lamont et al., 2017).

Neglecting these elements can limit the interpretation of results and, thus, hinder the overall understanding of the mechanisms underlying complex social phenomena—especially those involving communication practices, as in the case of online propaganda. Indeed, numerous studies have highlighted how communication is affected by sociocultural factors, which influence not only the way a message is processed and evaluated but also how that very message is designed and conveyed (e.g., Servaes, 1989; Lareau, 2011; Samovar et al., 2016).

Notwithstanding such arguments, demand-side investigations of online propaganda's impact tend to neglect sociocultural differences in both treatment design (Kuo and Marwick, 2021) and effect assessment (Guess and Lyons, 2020). Engaging with sociocultural factors would therefore help this strand of the literature relax the homogeneity assumptions that currently characterize its investigations—assumptions that are rarely justifiable when compared to real-world scenarios marked by high intrinsic diversity, both in the propaganda material itself and in recipients' characteristics.

6. Discussion: bridging online propaganda literature

Employing the GP framework as a common frame of reference to analyze the disconnected body of literature on online propaganda offers a dual advantage. Firstly, it facilitates the identification of variations among different strands of the literature in terms of the specific dimensions and components addressed, thus supporting the development of a structured overview of the state-of-the-art. Secondly, it enables the placement of gaps and limitations of each strand within these dimensions and components.

In this sense, the adoption of a common framework proves beneficial not only in clearly identifying the shortcomings of both supply- and demand-side strands, but also in exploring avenues to bridge such strands. Indeed, this systematization has highlighted how this bridging process can be streamlined: on the one hand, by comprehending how the identified differences can be translated into valuable insights that each strand can employ to overcome its limitations; on the other hand, by exploring how the contributions offered by other research programs that focus on the interaction between cognitive and sociocultural factors can provide additional support to address such shortcomings. This integration of diverse perspectives would reduce fragmentation by harmonizing currently disjointed theoretical conceptualizations and empirical findings. Moreover, it would advance the research agenda on online propaganda by explicitly considering the interplay between its cognitive and social components.

6.1. Getting the dialogue going: how each strand of literature on online propaganda can contribute to the development of the other

Engaging in a constructive dialogue between the supply-side and demand-side strands of online propaganda literature can be advantageous for both, as each strand has the potential to offer valuable research tools to the other, aiding in the resolution of certain limitations that impact their respective areas.

As previously discussed, the supply-side strand is preoccupied with the socio-political factors leading to the emergence and endurance of online propaganda. To this end, it focuses its investigations on the political actors involved in the production and dissemination of such political content and the strategies they employ to amplify its circulation. Notwithstanding its compelling findings, this strand remains primarily descriptive, with a disproportionate focus on the “speakers” component and some untested assumptions about the persuasive outcomes this type of communication generates (Jungherr et al., 2020; Camargo and Simon, 2022).

To better understand the persuasive power of online propaganda and its resulting effects, it is essential for this strand to consider the “receivers” component and the cognitive mechanisms that underpin their information processing and evaluation. This does not imply replicating or fully incorporating the research conducted by the demand-side strand, but rather acknowledging its existence and drawing from it when discussing online propaganda outcomes. By doing so, these contributions can be better positioned within the existing literature and in relation to the demand-side strand. Additionally, this would also enable a reevaluation of the (potential) social harm associated with online propaganda, as some have raised concerns about the validity of claims based solely on computationally intensive methods applied to social media data, without reference to real-world populations (Rauchfleisch and Kaiser, 2020; Camargo and Simon, 2022).

Conversely, the demand-side strand delves into the persuasion processes triggered by online propaganda, assessing its impact on users' evaluations and behavior. As such, it investigates how individuals process and respond to these persuasive stimuli to identify the cognitive mechanisms underlying such reactions.

Though rich in insightful findings on the outcomes produced by this kind of communication, this account, too, is affected by some important limitations. By focusing primarily on the “receiver(s)” component, investigations belonging to the demand-side strand tend to systematically overlook the socio-political context in which propaganda is created and disseminated when assessing its impact on the targeted audience. Furthermore, such assessments often adopt a universalistic perspective on cognition, assuming that the cognitive processes underlying the evaluation of political content are universally applicable—an assumption that neglects the impact sociocultural factors have on cognition (Lamont et al., 2017) and, thus, on the evaluation of online propaganda content (Rampersad and Althiyabi, 2020).

However, drawing on the insights offered by the supply-side strand could help to partially mitigate such shortcomings and support a better interpretation of experimental results. The descriptive findings provided by the supply-side literature regarding, on the one hand, the actors involved in online propaganda production (including the political settings in which they operate) and, on the other hand, the consumption and diffusion patterns of propaganda on SNPs, can help contextualize the effects of exposure to online propaganda. Indeed, by drawing on these contributions, it would be possible to delve into the specificities of the socio-political context in which online propaganda is produced and disseminated, facilitating the development of more consistent stimuli tailored to the specific subpopulations under investigation. Moreover, knowledge of the circulation and consumption patterns of online propaganda can be extremely useful for verifying the intuitions guiding experimental assessments of the effects this communication practice has on users' judgments and behavior, as these real-world observations can serve as benchmarks against which experimental findings can be compared and substantiated.

Overall, considering how each strand of the literature can build on the other to overcome its limitations is the first important step toward bridging online propaganda literature. However, this would still not be sufficient to address all the shortcomings affecting both strands, particularly those related to the neglect of sociocultural factors when investigating the functioning and outcomes of online propaganda. As previously mentioned, these factors profoundly affect human communication practices, including the way people process and evaluate messages (Martin and Nakayama, 1999). Therefore, they should be acknowledged when investigating online propaganda, as they can help interpret and contextualize findings, and provide a more comprehensive understanding of the underlying mechanisms that produce them.

6.2. How cultural and cognitive sociology can further bridge and enhance online propaganda scholarship

To address these remaining shortcomings, I argue that drawing from sociology—in particular, from its cultural and cognitive branches—would be beneficial, as this literature provides important insights and research tools to disentangle and evaluate the role played by supra-individual factors in the production, distribution, consumption, and evaluation of online propaganda. Indeed, utilizing the contributions provided by this scholarship would help in better contextualizing the settings in which online propaganda is produced and supplied, as well as advancing the understanding of the mechanisms underlying online propaganda outcomes.

By exploring and comparing the role that history, language, cultural systems, and social differentiation play in the production and distribution of online propaganda across different contexts, it would be possible to evaluate how these factors influence and differentiate online propaganda dynamics. This contextualization could aid the interpretation of existing findings that show significant variations in the speakers involved and the persuasion strategies employed across settings (Bradshaw and Howard, 2018), since the salience, credibility, and effectiveness of certain political actors or messages are likely to (also) depend on the sociocultural specificities of the society in which this form of communication takes place (Rampersad and Althiyabi, 2020).

Furthermore, an improved understanding of the outcomes and underlying mechanisms of online propaganda can be achieved by examining how sociocultural factors interact with cognition and mutually shape each other, thereby influencing the evaluation of online propaganda messages. Contributions from cognitive and cultural sociology have, indeed, demonstrated that the cognitive mechanisms involved in information processing and decision-making are not universally applicable, as they are influenced by cultural repertoires—namely “the available schemas, frames, narratives, scripts and boundaries that actors draw on in social situations” (Lamont et al., 2017, p. 866). These repertoires are distributed unevenly among individuals who share the same national membership due to their transmission and diffusion by specific intermediaries—such as religious leaders, political parties, and media outlets—which may vary in prominence across different social groups (Rosen, 2017; Rucks-Ahidiana, 2022). Social differentiation thus plays a crucial role in determining the accessibility of cultural repertoires to different groups, as differently structured social environments enable certain cultural references to be more readily available to some individuals than others (Lamont et al., 2017). Consequently, this has an impact on how messages are processed, interpreted, and evaluated—a crucial insight that should be taken into account by scholars who are interested in identifying the different components underpinning the evaluation of online propaganda.

Therefore, engaging with sociocultural factors when investigating the functioning and outcomes of online propaganda would have the dual advantage of helping overcome the limitations affecting both research strands and further bridging the overall literature on online propaganda.

By considering sociocultural factors, researchers can move beyond a descriptive—and often Anglocentric—understanding of online propaganda production and distribution, reaching more comprehensive conclusions regarding the prominence and significance of specific platforms, actors, and diffusion patterns across different national and subnational contexts (Camargo and Simon, 2022). Supply-side scholars can therefore enhance their findings by incorporating these factors into their investigations, thus also responding to the recent calls of numerous researchers to ground online propaganda studies in history, society, culture, and politics, so as not to neglect the role that race, ethnicity, language, colonial legacy, gender, and class play in this phenomenon (e.g., Siegel, 2020; Kreiss, 2021; Kuo and Marwick, 2021; Nguỹen et al., 2022).

To do so, new theoretical and empirical analyses should focus on how social differentiation influences the dynamics of persuasive communication. For instance, when examining consumption and diffusion patterns of online propaganda on SNPs, researchers could explore whether the most prominent accounts in these networks exhibit specific characteristics that signal their belonging to a trusted and authoritative group within the sociocultural context. Similarly, they could expand upon insights from political bot research by investigating whether the display of culture-specific traits and codes, such as jargon, religious symbols, or the impersonation of community members, helps automated accounts spread propaganda content on social networking platforms and build trust among their targeted audience.
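The account-prominence question above could be operationalized in a minimal way. The following sketch is purely hypothetical: the toy retweet edges and the binary `in_group` marker (standing in for culture-specific profile traits such as jargon or religious symbols) are invented for illustration, and in-degree serves as a deliberately simple prominence measure.

```python
from collections import Counter

# Toy directed "retweet" edge list: (source_account, retweeted_account).
# All account names are invented for illustration.
edges = [("a", "leader1"), ("b", "leader1"), ("c", "leader1"),
         ("d", "leader2"), ("e", "leader2"), ("a", "d")]

# Hypothetical sociocultural marker: whether an account displays
# culture-specific traits (jargon, religious symbols, etc.) in its profile.
in_group = {"leader1": True, "leader2": True, "a": False,
            "b": False, "c": False, "d": True, "e": False}

# Prominence measured as in-degree: how often an account is retweeted.
in_degree = Counter(target for _, target in edges)
nodes = {n for edge in edges for n in edge}
prominent = sorted(nodes, key=lambda n: in_degree[n], reverse=True)[:2]

# Compare marker prevalence among prominent vs. remaining accounts.
others = [n for n in nodes if n not in prominent]
share_prominent = sum(in_group[n] for n in prominent) / len(prominent)
share_others = sum(in_group[n] for n in others) / len(others)
print(prominent, share_prominent, share_others)
```

In an actual study, the same comparison would be run on observed diffusion networks, with profile attributes coded by researchers and a more robust centrality measure in place of raw in-degree.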

Acknowledging the influence of sociocultural factors would also enable researchers to move beyond a universal and static perspective of the cognitive mechanisms involved in the evaluation of online propaganda. By recognizing that cultural repertoires shape cognition and that their distribution varies among diverse social groups, demand-side scholars can achieve a more comprehensive understanding of how sociocultural factors shape cognitive processes and subsequently impact the attitudinal and behavioral outcomes produced by this form of communication. Indeed, including sociocultural factors in the formulation and evaluation of the mechanisms underpinning online propaganda effects would allow researchers to relax the homogeneity assumption previously discussed and provide stronger causal explanations for the variations observed among recipients in their responses to online propaganda. To achieve such an objective, researchers could, for example, collect sociocultural information along with the “more classical” demographics during experimental assessments of online propaganda's impact on users' evaluations and behavior. This would allow for the estimation of heterogeneous treatment effects to assess whether individuals with similar sociocultural backgrounds exhibit consistent responses to specific propaganda stimuli.
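A minimal sketch of what such a heterogeneous-treatment-effect check could look like, using simulated data only: the participant records, the binary sociocultural covariate, the random assignment, and the effect sizes are all assumed for illustration, not drawn from any actual experiment.

```python
import random

random.seed(42)

# Simulated experiment: each participant has a binary sociocultural
# covariate ("repertoire"), is randomly assigned to a propaganda
# stimulus ("treated"), and reports an attitudinal outcome.
# The true effect is assumed (for illustration only) to be larger
# for participants holding the relevant cultural repertoire.
def simulate_participant():
    repertoire = random.random() < 0.5   # sociocultural covariate
    treated = random.random() < 0.5      # random assignment
    true_effect = 2.0 if repertoire else 0.5
    outcome = 5.0 + true_effect * treated + random.gauss(0, 1)
    return repertoire, treated, outcome

data = [simulate_participant() for _ in range(20000)]

def subgroup_ate(records, repertoire_value):
    """Difference in mean outcomes (treated minus control) within a subgroup."""
    treated = [y for r, t, y in records if r == repertoire_value and t]
    control = [y for r, t, y in records if r == repertoire_value and not t]
    return sum(treated) / len(treated) - sum(control) / len(control)

ate_with = subgroup_ate(data, True)
ate_without = subgroup_ate(data, False)
print(round(ate_with, 2), round(ate_without, 2))
```

Comparing the two subgroup estimates recovers the assumed heterogeneity; in a real experiment, the analogous comparison would test whether participants with similar sociocultural backgrounds respond consistently to the same stimuli.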

By employing this approach, it would be possible to gain a deeper understanding of whether susceptibility to online propaganda persuasion strategies is influenced by supra-individual characteristics. This would provide scholars with a new lens through which to explore and analyze patterns of vulnerability, thus further developing our understanding of online propaganda's impact on individuals.

The acknowledgment and inclusion of sociocultural factors when investigating online propaganda functioning would further bridge the supply- and demand-side strands of literature. Indeed, adopting a consistent research approach that incorporates sociocultural factors in both conceptualizations and empirical assessments would facilitate the dialogue between the two strands, fostering meaningful knowledge exchange about this type of communication.

Moreover, adopting a sociocultural approach would serve as a cautionary measure against generalized—and, sometimes, oversimplified—policy proposals aimed at countering online propaganda. Whether these proposals target the persuasive influence of propaganda (e.g., debiasing treatments) or seek to regulate its dissemination (e.g., SNPs and content regulations), I argue that considering the sociocultural specificity of different target audiences is crucial.

For instance, scholars developing debiasing treatments that target cognition (e.g., Dai et al., 2021; Lewandowsky and van der Linden, 2021) would benefit from the inclusion of sociocultural factors in their analyses, as this would allow them to develop more effective debiasing stimuli that account for the diversity characterizing online propaganda audiences. Indeed, by taking into account that individuals' sociocultural backgrounds affect the availability of cultural repertoires and, thus, the cognitive mechanisms underlying information processing and decision-making, researchers would be able to design tailored debiasing treatments that work best for specific sociocultural groups (Sya'bandari et al., 2022).

Similarly, regulations that prescribe content and behavior limitations on SNPs12—and envisage social media companies' responsibilities in implementing such restraints—should be designed by considering how sociocultural factors interact with the production and consumption of online propaganda. On the one hand, this means acknowledging the specific use that political actors and citizens make of these platforms in different contexts, by identifying and assessing, for example, the most prominent platforms for political communication, the extent to which automation is employed, the type of actors involved in dubious communication activities, and the characteristics of diffusion and consumption patterns. On the other hand, it means considering the broader characteristics of the online media ecosystem, including how its ownership structure and the power dynamics that emerge from its interaction with the offline social world influence the production and consumption of such political content (Fuchs, 2019).

Overall, integrating the demand- and supply-side strands of research on online propaganda would enhance our knowledge of how propaganda impacts individuals' cognitive processes and behavior within specific socio-political contexts. By incorporating insights from both perspectives, researchers can conduct more rigorous and comprehensive analyses, making valuable contributions to the field. Moreover, by drawing on the insights offered by cultural and cognitive sociology, an improved understanding of the outcomes and underlying mechanisms of online propaganda can be achieved. Embracing a cultural perspective in the study of online propaganda would contribute to a more holistic understanding of this communication phenomenon. Indeed, it would allow scholars to explore factors that account for significant variations among diverse social groups, which often stem from existing patterns of inequality that result in uneven distributions of material and cultural resources.

Author contributions

The author confirms sole responsibility for the study conception and design, analysis and interpretation of bibliographic sources, and manuscript preparation.

Funding

The study was funded by Open Access Funding by University of Bern.

Acknowledgments

I am deeply grateful to Michael (Mike) Biggar for his insightful comments on an earlier draft of this paper.

Conflict of interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Footnotes

1. ^Here is an overview of all the components in relation to their respective dimensions: actors (speakers and receivers), treatments (topics, content and media), outcomes (attitudes, behaviors, emotions and identities) and settings (competition, space, time, process, and culture) (Druckman, 2022).

2. ^As outlined in the following section, some authors belonging to the supply-side strand of the literature have discussed online propaganda “receivers” and “outcomes” (e.g., Woolley and Howard, 2018). However, they have done so in a descriptive way, by limiting their considerations to the accounts engaging the most with this kind of political content or by speculating on its effects without formulating empirically testable propositions. For these reasons, these components/dimensions are not considered to be the core focus of this approach.

3. ^It is important to note that studies exploring how the online media system and its ownership structure affect the production, circulation, and consumption of (political) information on SNPs (e.g., Fuchs, 2018; Marmura, 2020; Arayankalam and Krishnan, 2021) are not considered to be part of this specific body of literature. Inasmuch as they explore the reproduction of power structures in online environments and their impact on news media and communication at large, these contributions discuss online propaganda as a byproduct of such dynamics (like echo chambers, polarized conversations, etc.) rather than directly addressing its function and role as a political practice. For this reason, they are not addressed in this paper as they exceed the scope of this analysis.

4. ^As in the famous case of the Macedonian teenagers who created and disseminated pro-Trump material for profit in 2016 (Subramanian, 2017).

5. ^Defined as “user account[s] that ha[ve] been equipped with the features or software to automate interaction with other user accounts about politics” (Howard et al., 2018, p. 85).

6. ^Building on the Agenda-Setting Theory (McCombs and Shaw, 1972; McCombs et al., 1997), these authors claim that the systematic use of computational tools allows political actors to bypass traditional gatekeepers of information and directly alter the salience of specific topics in the online public discourse. In this way, they drive media coverage toward political issues and narratives considered more advantageous for their faction, thereby shaping the political debate and influencing the electorate (Woolley and Howard, 2018).

7. ^Woolley and Howard (2018, p. 244) indeed maintain that “making a causal claim from social media use to citizen engagement, trust in institutions, or voter sophistication is proving difficult to do even in countries for which there are significant amounts of data”.

8. ^Some contributions belonging to the supply-side strand have provided interesting insights into the type of users who are more likely to be exposed to and engage with online propaganda on SNPs, by exploring the characteristics and the behavior of their social media accounts (Shao et al., 2017; Guess et al., 2019). However, such information is mostly descriptive, as it is based on online available data that, as such, does not explore the decision-making process underlying the displayed behavior.

9. ^Intended as the extent to which there are reasonable grounds for believing information conveyed (Sundar, 2008).

10. ^Understood as the quality of being trustable because of previous experience indicating so (Gauld and Williams, 2009).

11. ^Intended as the ability of information-providers to convey correct information (Tate, 2009).

12. ^For an overview of governance responses in the European system see Saurwein and Spencer-Smith (2020).

References

Adam-Troian, J. (2022). The fitness-validation model of propagandist persuasion and the 5Ds of propaganda. PsyArXiv. doi: 10.31234/osf.io/52g63

Ahmed, W., Vidal-Alaball, J., Downing, J., and Seguí, F. L. (2020). COVID-19 and the 5G conspiracy theory: social network analysis of Twitter data. J. Med. int. Res. 22, e19458. doi: 10.2196/19458

Ali, K., and Zain-ul-abdin, K. (2021). Post-truth propaganda: heuristic processing of political fake news on Facebook during the 2016 US presidential election. J. Appl. Commun. Res. 49, 109–128. doi: 10.1080/00909882.2020.1847311

Apuke, O. D., and Omar, B. (2021). Social media affordances and information abundance: enabling fake news sharing during the COVID-19 health crisis. Health Inform. J. 27, 14604582211021470. doi: 10.1177/14604582211021470

Arayankalam, J., and Krishnan, S. (2021). Relating foreign disinformation through social media, domestic online media fractionalization, government's control over cyberspace, and social media-induced offline violence: insights from the agenda-building theoretical perspective. Technol. Forecast. Soc. Chan. 166, 120661. doi: 10.1016/j.techfore.2021.120661

Ayres, P., and van Gog, T. (2009). State of the art research into cognitive load theory. Comp. Hum. Behav. 25, 253–257. doi: 10.1016/j.chb.2008.12.007

Bastos, M., and Farkas, J. (2019). “Donald Trump is my president!”: the internet research agency propaganda machine. Soc. Media Soc. 5, 205630511986546. doi: 10.1177/2056305119865466

Benkler, Y., Faris, R., and Roberts, H. (2018). Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. Oxford: Oxford University Press.

Braddock, K., and Morrison, J. F. (2020). Cultivating trust and perceptions of source credibility in online counternarratives intended to reduce support for terrorism. Stud. Confl. Terror. 43, 468–492. doi: 10.1080/1057610X.2018.1452728

Bradshaw, S., and Howard, P. N. (2018). Challenging truth and trust: a global inventory of organized social media manipulation. Computat. Propaganda Proj. 1, 1–26. Available online at: https://demtech.oii.ox.ac.uk/research/posts/challenging-truth-and-trust-a-global-inventory-of-organized-social-media-manipulation/

Bruch, E., and Feinberg, F. (2017). Decision-making processes in social contexts. Ann. Rev. Sociol. 43, 207–227.

Bryanov, K., and Vziatysheva, V. (2021). Determinants of individuals' belief in fake news: a scoping review determinants of belief in fake news. PLoS ONE 16, e0253717. doi: 10.1371/journal.pone.0253717

Camargo, C. Q., and Simon, F. M. (2022). Mis-and disinformation studies are too big to fail: six suggestions for the field's future. Harvard Kennedy School Misinform. Rev. 3. doi: 10.37016/mr-2020-106

Castillo, C., Mendoza, M., and Poblete, B. (2011). Information credibility on twitter. In: Proceedings of the 20th International Conference on World Wide Web. Hyderabad. p. 675–684.

Cerulo, K. A., Leschziner, V., and Shepherd, H. (2021). Rethinking culture and cognition. Ann. Rev. Sociol. 47, 63–85. doi: 10.1146/annurev-soc-072320-095202

Dai, Y., Yu, W., and Shen, F. (2021). The effects of message order and debiasing information in misinformation correction. Int. J. Commun. 15, 1039–1059.

Dawson, A., and Innes, M. (2019). How Russia's internet research agency built its disinformation campaign. Pol. Quart. 90, 245–256. doi: 10.1111/1467-923X.12690

DiMaggio, P. (1997). Culture and cognition. Annu. Rev. Sociol. 23, 263–287. doi: 10.1146/annurev.soc.23.1.263

Diviani, N., van den Putte, B., Giani, S., and van Weert, J. C. (2015). Low health literacy and evaluation of online health information: a systematic review of the literature. J. Med. Int. Res. 17, e112. doi: 10.2196/jmir.4018

Druckman, J. N. (2022). A framework for the study of persuasion. Annu. Rev. Pol. Sci. 25, 65–88. doi: 10.1146/annurev-polisci-051120-110428

Ecker, U. K., Lewandowsky, S., Cook, J., Schmid, P., Fazio, L. K., Brashier, N., et al. (2022). The psychological drivers of misinformation belief and its resistance to correction. Nat. Rev. Psychol. 1, 13–29. doi: 10.1038/s44159-021-00006-y

Ferrara, E. (2020). Bots, elections, and social media: a brief overview. In: Shu, K., Wang, S., Lee, D., and Liu, H., editors. Disinformation, Misinformation, and Fake News in Social Media. Lecture Notes in Social Networks. Cham: Springer. p. 95–114.

Ferrara, E., Varol, O., Davis, C., Menczer, F., and Flammini, A. (2016). The rise of social bots. Commun. ACM 59, 96–104. doi: 10.1145/2818717

Forelle, M., Howard, P., Monroy-Hernández, A., and Savage, S. (2015). Political Bots and the Manipulation of Public Opinion in Venezuela. doi: 10.2139/ssrn.2635800

Fuchs, C. (2014). Digital prosumption labour on social media in the context of the capitalist regime of time. Time Soc. 23, 97–123. doi: 10.1177/0961463X13502117

Fuchs, C. (2018). Propaganda 2.0: Herman and Chomsky's propaganda model in the age of the internet, big data and social media. In: Pedro-Carañana, J., Broudy, D., and Klaehn, J, editors. The Propaganda Model Today: Filtering Perception and Awareness. London: University of Westminster Press. p. 71–92.

Fuchs, C. (2019). Nationalism on the Internet: Critical Theory and Ideology in the Age of Social Media and Fake News. Abingdon-on-Thames: Routledge.

Gauld, R., and Williams, S. (2009). Use of the internet for health information: a study of Australians and New Zealanders. Inform. Health Soc. Care. 34, 149–158. doi: 10.1080/17538150903102448

Gilovich, T., Griffin, D., and Kahneman, D. editors. (2002). Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge: Cambridge University Press.

Guess, A., and Lyons, B. (2020). Misinformation, disinformation, and online propaganda. In: Persily, N., and Tucker, J.A., editors. Social Media and Democracy: The State of the Field, Prospects for Reform. Cambridge: Cambridge University Press. p. 10–33.

Guess, A., Nagler, J., and Tucker, J. (2019). Less than you think: prevalence and predictors of fake news dissemination on Facebook. Sci. Adv. 5, eaau4586. doi: 10.1126/sciadv.aau4586

Haim, M., Kümpel, A. S., and Brosius, H. B. (2018). Popularity cues in online media: a review of conceptualizations, operationalizations, and general effects. Stud. Commun. Media 7, 186–207. doi: 10.5771/2192-4007-2018-2-58

Hameleers, M., Powell, T. E., Van Der Meer, T. G., and Bos, L. (2020). A picture paints a thousand lies? The effects and mechanisms of multimodal disinformation and rebuttals disseminated via social media. Pol. Commun. 37, 281–301. doi: 10.1080/10584609.2019.1674979

Howard, P. N., and Kollanyi, B. (2016). Bots, #Strongerin, and #Brexit: Computational Propaganda During the UK-EU Referendum. doi: 10.2139/ssrn.2798311

Howard, P. N., Woolley, S., and Calo, R. (2018). Algorithms, bots, and political communication in the US 2016 election: the challenge of automated political communication for election law and administration. J. Inform. Technol. Pol. 15, 81–93. doi: 10.1080/19331681.2018.1448735

Islam, A. N., Laato, S., Talukder, S., and Sutinen, E. (2020). Misinformation sharing and social media fatigue during COVID-19: an affordance and cognitive load perspective. Technol. Forecast. Soc. Change. 159, 120201. doi: 10.1016/j.techfore.2020.120201

Jowett, G. S., and O'Donnell, V. (2018). Propaganda and Persuasion. Thousand Oaks, CA: Sage Publications.

Jungherr, A., Rivero, G., and Gayo-Avello, D. (2020). Retooling Politics: How Digital Media are Shaping Democracy. Cambridge: Cambridge University Press.

Kahneman, D. (2011). Thinking, Fast and Slow. New York, NY: Macmillan.

Kahneman, D., Slovic, S. P., Slovic, P., and Tversky, A. editors. (1982). Judgment Under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press.

Kirby, M. J. (2016). The City Getting Rich from Fake News. BBC. Available online at: https://www.bbc.com/news/magazine-38168281 (accessed May 20, 2023).

Kreiss, D. (2021). “Social media and democracy: the state of the field, prospects for reform,” edited by Nathaniel Persily and Joshua A. Tucker. Int. J. Press/Pol. 26, 505–512. doi: 10.1177/1940161220985078

Kuo, R., and Marwick, A. (2021). Critical disinformation studies: history, power, and politics. Harvard Kennedy School Misinform. Rev. 2, 1–11. doi: 10.37016/mr-2020-76

Lamont, M., Adler, L., Park, B. Y., and Xiang, X. (2017). Bridging cultural sociology and cognitive psychology in three contemporary research programmes. Nat. Hum. Behav. 1, 866–872. doi: 10.1038/s41562-017-0242-y

Lareau, A. (2011). Language as a Conduit for Social Life: Harold McAllister. In Unequal Childhoods: Class, Race, and Family Life. 2nd ed. Berkeley, CA: University of California Press. p. 134–160.

Lewandowsky, S., and van der Linden, S. (2021). Countering misinformation and fake news through inoculation and prebunking. Eur. Rev. Soc. Psychol. 32, 348–384. doi: 10.1080/10463283.2021.1876983

Liang, F., Zhu, Q., and Li, G. M. (2022). The effects of flagging propaganda sources on news sharing: quasi-experimental evidence from twitter. Int. J. Press/Pol. 1–20. doi: 10.1177/19401612221086905

Lizardo, O., Mowry, R., Sepulvado, B., Stoltz, D. S., Taylor, M. A., Van Ness, J., et al. (2016). What are dual process models? Implications for cultural analysis in sociology. Sociol. Theory 34, 287–310. doi: 10.1177/0735275116675900

Lombardi Vallauri, E. (2021). Stereotypes favour implicatures and implicatures smuggle stereotypes: the case of propaganda. In: Macagno, F., and Capone, A., editors. Inquiries in Philosophical Pragmatics. Perspectives in Pragmatics, Philosophy and Psychology. Cham: Springer. p. 193–208.

Lucassen, T., and Schraagen, J. M. (2011). Factual accuracy and trust in information: the role of expertise. J. Am. Soc. Inform. Sci. Technol. 62, 1232–1242. doi: 10.1002/asi.21545

Ma, Y., Dixon, G., and Hmielowski, J. D. (2019). Psychological reactance from reading basic facts on climate change: the role of prior views and political identification. Environ. Commun. 13, 71–86. doi: 10.1080/17524032.2018.1548369

Marmura, S. M. (2020). Russiagate, WikiLeaks, and the political economy of posttruth news. Int. J. Commun. 14, 5417–5435. Available online at: https://ijoc.org/index.php/ijoc/article/view/14287

Martin, J. N., and Nakayama, T. K. (1999). Thinking dialectically about culture and communication. Commun. Theory 9, 1–25. doi: 10.1111/j.1468-2885.1999.tb00160.x

Marwick, A., and Lewis, R. (2017). Media Manipulation and Disinformation Online. New York, NY: Data and Society Research Institute.

McCombs, M. E., and Shaw, D. L. (1972). The agenda-setting function of mass media. Public Opin. Quart. 36, 176–187. doi: 10.1086/267990

McCombs, M. E., Shaw, D. L., and Weaver, D. H. editors. (1997). Communication and Democracy: Exploring the intellectual Frontiers in Agenda-setting Theory. 1st ed. Abingdon-on-Thames: Routledge.

McKelvey, F., and Dubois, E. (2017). Computational Propaganda in Canada: The Use of Political Bots. Available online at: https://ora.ox.ac.uk/objects/uuid:cb1b7ea7-41ac-4de2-9c05-b35a04049788 (accessed May 17, 2023).

Metzger, M. J., and Flanagin, A. J. (2013). Credibility and trust of information in online environments: the use of cognitive heuristics. J. Pragmat. 59, 210–220. doi: 10.1016/j.pragma.2013.07.012

Metzger, M. J., Flanagin, A. J., and Medders, R. B. (2010). Social and heuristic approaches to credibility evaluation online. J. Commun. 60, 413–439. doi: 10.1111/j.1460-2466.2010.01488.x

Nguỹen, S., Kuo, R., Reddi, M., Li, L., and Moran, R. E. (2022). Studying mis-and disinformation in Asian diasporic communities: the need for critical transnational research beyond Anglocentrism. Harvard Kennedy School Misinform. Rev. 3. doi: 10.37016/mr-2020-95

OSoMe. (2023). OSoMe Project, Indiana University Observatory on Social Media. Available online at: https://osome.iu.edu/ (accessed February 15, 2023).

Oxford Internet Institute (2023). Computational Propaganda Project. Available online at: https://www.oii.ox.ac.uk/research/projects/computational-propaganda/ (accessed February 15, 2023).

Pennycook, G., Cannon, T. D., and Rand, D. G. (2018). Prior exposure increases perceived accuracy of fake news. J. Experiment. Psychol. Gen. 147, 1865–1880. doi: 10.1037/xge0000465

Pennycook, G., and Rand, D. G. (2019). Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition 188, 39–50. doi: 10.1016/j.cognition.2018.06.011

Pennycook, G., and Rand, D. G. (2021). The psychology of fake news. Trends Cogn. Sci. 25, 388–402. doi: 10.1016/j.tics.2021.02.007

Pittman, M., and Haley, E. (2023). Cognitive load and social media advertising. J. Interact. Advertis. 23, 33–54. doi: 10.1080/15252019.2022.2144780

Rampersad, G., and Althiyabi, T. (2020). Fake news: acceptance by demographics and culture on social media. J. Inform. Technol. Polit. 17, 1–11. doi: 10.1080/19331681.2019.1686676

Rauchfleisch, A., and Kaiser, J. (2020). The false positive problem of automatic bot detection in social science research. PLoS ONE 15, e0241045. doi: 10.1371/journal.pone.0241045

Rhodes, S. C. (2022). Filter bubbles, echo chambers, and fake news: how social media conditions individuals to be less critical of political misinformation. Pol. Commun. 39, 1–22. doi: 10.1080/10584609.2021.1910887

Rosen, E. (2017). Horizontal immobility: How narratives of neighborhood violence shape housing decisions. Am. Sociol. Rev. 82, 270–296. doi: 10.1177/0003122417695841

Rosińska, K. A. (2021). Disinformation in Poland: thematic classification based on content analysis of fake news from 2019. Cyberpsychol. J. Psychosoc. Res. Cyberspace 15. doi: 10.5817/CP2021-4-5

Rucks-Ahidiana, Z. (2022). Race and the financial toolkit: bridging cultural theories to understand behavior and decision making in the racial wealth gap. Sociol. Inquiry 92, 388–416. doi: 10.1111/soin.12468

Samovar, L. A., Porter, R. E., McDaniel, E. R., and Roy, C. S. (2016). Communication Between Cultures. Boston, MA: Cengage Learning.

Sanderson, J. A., Bowden, V., Swire-Thompson, B., Lewandowsky, S., and Ecker, U. K. H. (2022). Listening to misinformation while driving: cognitive load and the effectiveness of (repeated) corrections. J. Appl. Res. Memory Cogn. doi: 10.1037/mac0000057

Saurwein, F., and Spencer-Smith, C. (2020). Combating disinformation on social media: Multilevel governance and distributed accountability in Europe. Digit. Journal. 8, 820–841. doi: 10.1080/21670811.2020.1765401

Schaewitz, L., Kluck, J. P., Klösters, L., and Krämer, N. C. (2020). When is disinformation (in) credible? Experimental findings on message characteristics and individual differences. Mass Commun. Soc. 23, 484–509. doi: 10.1080/15205436.2020.1716983

Servaes, J. (1989). Cultural identity and modes of communication. Annal. Int. Commun. Assoc. 12, 283–416. doi: 10.1080/23808985.1989.11678728

Shao, C., Ciampaglia, G. L., Varol, O., Flammini, A., and Menczer, F. (2017). The spread of fake news by social bots. arXiv. doi: 10.48550/arXiv.1707.07592

Siegel, A. (2020). Online hate speech. In: Persily, N., and Tucker, J. A., editors. Social Media and Democracy: The State of the Field, Prospects for Reform. Cambridge: Cambridge University Press. p. 56–88.

Song, H., So, J., Shim, M., Kim, J., Kim, E., and Lee, K. (2023). What message features influence the intention to share misinformation about COVID-19 on social media? The role of efficacy and novelty. Comput. Hum. Behav. 138, 107439. doi: 10.1016/j.chb.2022.107439

Stanley, M. L., Henne, P., Yang, B. W., and De Brigard, F. (2020). Resistance to position change, motivated reasoning, and polarization. Pol. Behav. 42, 891–913. doi: 10.1007/s11109-019-09526-z

Subramanian, S. (2017). Inside the Macedonian Fake-news Complex. Wired. Available online at: https://www.wired.com/2017/02/veles-macedonia-fake-news/ (accessed May 20, 2023).

Sundar, S. (2008). The MAIN model: a heuristic approach to understanding technology effects on credibility. In: Metzger, M. J., and Flanagin, A. J., editors. Digital Media, Youth, and Credibility. Cambridge, MA: MIT Press. p. 73–100.

Sya'bandari, Y., Meilani-Fadillah, S., Nurlaelasari-Rusmana, A., Qurota-Aini, R., and Ha, M. (2022). Assessing cognitive bias in Korean and Indonesian scientists: considering sociocultural factors in judgment and choice. Asia-Pacific Sci. Educ. 8, 222–255. doi: 10.1163/23641177-bja10045

Tate, M. A. (2009). Web Wisdom: How to Evaluate and Create Information Quality on the Web. Boca Raton, FL: CRC Press.

Tucker, J. A., Guess, A., Barberá, P., Vaccari, C., Siegel, A., Sanovich, S., et al. (2018). Social media, political polarization, and political disinformation: a review of the scientific literature. SSRN Elect. J. 1–95. doi: 10.2139/ssrn.3144139

Vaisey, S. (2009). Motivation and justification: a dual-process model of culture in action. Am. J. Sociol. 114, 1675–1715. doi: 10.1086/597179

Van Bavel, J. J., and Pereira, A. (2018). The partisan brain: An identity-based model of political belief. Trends Cogn. Sci. 22, 213–224. doi: 10.1016/j.tics.2018.01.004

Vosoughi, S., Roy, D., and Aral, S. (2018). The spread of true and false news online. Science 359, 1146–1151. doi: 10.1126/science.aap9559

Wanless, A., and Berk, M. (2021). Participatory propaganda: the engagement of audiences in the spread of persuasive communications. In: Herbert, D., and Fisher-Høyrem, S., editors. Social Media and Social Order. Warsaw: De Gruyter Open Poland. p. 111–139.

Wittenberg, C., and Berinsky, A. (2020). Misinformation and its correction. In: Persily, N., and Tucker, J., editors. Social Media and Democracy: The State of the Field and Prospects for Reform. Cambridge: Cambridge University Press. p. 163–198.

Woolley, S. C., and Howard, P. N. (2016). Political communication, computational propaganda, and autonomous agents: introduction. Int. J. Commun. 10, 4882–4890. Available online at: https://ijoc.org/index.php/ijoc/article/download/6298/1809

Woolley, S. C., and Howard, P. (2017). Computational Propaganda Worldwide: Executive Summary. Available online at: https://demtech.oii.ox.ac.uk/research/posts/computational-propaganda-worldwide-executive-summary/ (accessed February 12, 2023).

Woolley, S. C., and Howard, P. N., editors. (2018). Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media. Oxford: Oxford University Press.

Keywords: online propaganda, social media, persuasion, information processing, culture, cognition

Citation: Nerino V (2023) Overcome the fragmentation in online propaganda literature: the role of cultural and cognitive sociology. Front. Sociol. 8:1170447. doi: 10.3389/fsoc.2023.1170447

Received: 20 February 2023; Accepted: 19 June 2023;
Published: 11 July 2023.

Edited by:

Gabriella Punziano, University of Naples Federico II, Italy

Reviewed by:

Cristina Monzer, Norwegian University of Science and Technology, Norway
Fabio Gaspani, University of Milano-Bicocca, Italy

Copyright © 2023 Nerino. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Valentina Nerino, valentina.nerino@unibe.ch; valentina.nerino@unitn.it

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.