- 1Department of Communication, University of Muenster, Muenster, Germany
- 2Junior Research Group, Communicating Scientists: Challenges, Competencies, Contexts (fourC), Technische Universität Braunschweig, Braunschweig, Germany
- 3Department of Communication and Media Research, University of Zurich, Zurich, Switzerland
- 4Department of Psychology, University of Muenster, Muenster, Germany
Scientists (and science as a whole) provide evidence and advice for societal problem solving and collective decision-making. For this advice to be heard, the public must be willing to trust science, where “trust” means that one can confidently expect science to provide reliable knowledge and evidence, even if one’s understanding of science is bounded. According to the sociological and psychological literature, citizens’ basic attitudes toward, experiences with, and perceived trustworthiness of the trustee serve as antecedents of trust. From this, we developed a model for the public’s trust in science, and we tested this model in a nationally representative survey in Switzerland (N = 1,050). The analysis reveals that trust in science was best predicted by positivistic attitudes toward science (β = 0.33) and to a lesser extent by trustworthiness assessments of scientists (β = 0.24). Experiences with science did not predict trust in science (β = 0.07). These results suggest that stable basic attitudes toward science and its role in society are grounds on which trust in science can be built.
Introduction
While today’s societies largely rest on innovations and technologies that have been developed through scientific advances, they also face new and threatening challenges such as climate change and pandemics. For such issues to be understood and potentially solved, societies must rely on and trust scientific knowledge, alongside social, economic, and political knowledge. That is, trust in science and scientists is required for the functioning of modern democracies. The importance of trust in science for societal well-being became particularly evident during the COVID-19 pandemic, as studies showed that acceptance of protective measures depended on trust in science (Dohle, Wingen, and Schreiber 2020; Battiston, Kashyap, and Rotondi 2021).
Most of us lack a deep understanding not only of scientific concepts or theories but also of how scientific knowledge is produced. Hence, due to the division of labor in our societies, we must rely on scientists’ knowledge when using science-based technologies and making personal or civic decisions on issues that involve scientific knowledge. Citizens, who often encounter science via communication in legacy and social media (European Commission 2021), especially need to trust science despite their bounded understanding of and about science (Bromme and Goldman 2014; Keren 2018). Science communication plays a fundamental role in cultivating trust in science among citizens by providing scientific information about results as well as about scientific methods and norms (Schäfer 2016). As such, science communication provides the public with opportunities to form attitudes about the trustworthiness of scientists and the credibility of scientific information, both of which can be conceived as key determinants of trust in science (Fiske and Dupree 2014).
In our study, we build on the literature on trust and science communication to develop a theoretical model of trust in science. The model conceptualizes experiences with science, trustworthiness of scientists, and basic orientations toward science as antecedents of trust in science. The selection of these antecedents allows us to test different psychological and sociological approaches to explaining trust in science. In the empirical part, we test the model using data from a nationally representative survey in Switzerland to gain an understanding of the relative impact of the predictors. By theoretically embedding and empirically testing basic orientations toward science and their relationship with trust in science, our study contributes to a more nuanced understanding of how to conceptualize long-term attitudes toward the performance of the system of science, and of their role in building trust in science.
Trust in Science
In various disciplines including sociology, political sciences, and psychology, researchers agree that trust is an anticipatory mental state in which positive expectations are held about the behavior and intentions of another person (or person as role holder), institution, or system, allowing one to rely on others despite a certain vulnerability or risk (Rousseau et al., 1998; Schäfer 2016; Blöbaum 2021). Because of their bounded understanding of science, citizens inevitably must trust in science (or scientists as representatives of that system), even though this might be risky: Scientific knowledge entails uncertainty, and scientists might not always speak (or know) the “truth”. However, trust is neither irrational nor blind, as it requires epistemic vigilance (Sperber et al., 2010), which may manifest in spontaneously formed or stable expectations toward science and scientists (Gierth and Bromme 2020). We argue that these expectations address both science’s epistemic and social function in society, and that such expectations are generated through 1) judgements of scientists’ trustworthiness, 2) long-term orientations toward science, and 3) experiences with science and scientific information. Trustworthiness judgements refer to an assessment of the representatives of the system, long-term orientations refer to basic attitudes toward science as a system, and experiences with science refer to mediated or direct contact with science or scientific information.
Judgments of trustworthiness are central to most psychological trust definitions, but especially to rational choice approaches (Coleman 1994; Hardin 2004), most of which describe trust as a simple reflection of the trustee’s trustworthiness (Kee and Knox 1970; Sztompka 2000). In approaches focusing on trust in systems, trustworthiness plays a subordinate role. Nevertheless, following Giddens (1990), we argue that judgments of scientists as representatives of the system also play a role for trust in science as a system. Representatives of systems serve as access points at which citizens can come into contact with the system and build expectations that are reflected in their trust in science. We will therefore consider the epistemic trustworthiness of scientists as one predictor of public trust in science.
Research on trust in systems has described long-term orientations toward the system of interest, such as satisfaction with democracy, as a further antecedent of trust (Zmerli, Newton, and Montero 2007). Accordingly, in the context of trust in science, we consider one’s basic orientations toward science as such an antecedent (Brossard and Nisbet 2007). People hold expectations toward science’s role in society (Priest, Bonfadelli, and Rusanen 2003; Gauchat 2011), which are often shaped by people’s worldviews, socialization, and past experiences (Gauchat 2011; Howell et al., 2020). As we will detail below, these orientations reflect both positivistic attitudes toward science (e.g., beliefs in its problem-solving capacity) and critical attitudes toward it (such as populist views criticizing the knowledge and power of allegedly immoral academic elites; Mede and Schäfer 2020).
Finally, trust research considers experiences with the trustee as central for both interpersonal judgements of trustworthiness (e.g. Mayer, Davis, and Schoorman 1995) and trust in systems (Giddens 1990). Following this, the frequency of experiences with science, often enabled through science communication (Schäfer 2016), may shape how people form expectations toward science and scientists and, thus, may also be important for generating trust. Therefore, we examine how experiences with science in online and offline contexts contribute to citizens’ evaluations of scientists’ trustworthiness and their trust in science in general.
In sum, we define trust in science as one’s willingness to rely on science and scientists (as representatives of the system) despite having a bounded understanding of science and the risk of not getting to the “truth” (that is, one accepts dependency despite vulnerability and risk). Individuals’ trust in science is thereby based on the following expectations: 1) that scientists make epistemically warranted claims and 2) that science provides benefits for society. As we point out below, these two sets of expectations may directly manifest in epistemic trustworthiness ascriptions to scientists, but they may also produce, on a more general level, either affirmative perceptions (“positivistic beliefs”) or critical views of science (“science-related populist attitudes”). We further argue that these expectations are partly formed through experiences with science. Therefore, in this paper, we develop theoretical arguments for a model of trust in science and its predictors (Figure 1). In the following, we describe the theoretical reasoning behind this model and provide a comprehensive empirical analysis of the model using a representative sample from Switzerland.
Judgments of Scientists’ Epistemic Trustworthiness
In the context of science, judgments of trustworthiness reflect deliberate or spontaneous cognitive processes that help decide whether to accept a scientist’s knowledge claims as true; hence, they are directed at a scientist’s epistemic trustworthiness (Origgi 2004). Epistemic trustworthiness judgments are based on inferences from cues available and relevant in a certain situation (Landrum, Eaves, and Shafto 2015), such as an author’s field of study, affiliation, inferred communicative intentions, or language use in science communication (see Hendriks and Kienhues 2019 for a review). Three dimensions have been found to be important (Hendriks, Kienhues, and Bromme 2015): To be ascribed epistemic trustworthiness, scientists should possess expertise on the topic of interest; integrity, that is, adhere to the rules and conventions of the scientific endeavor; and benevolence, that is, behave according to commonly agreed upon moral or social values. Science as an epistemic endeavor aims to produce claims about the (natural) world that have withstood scientific scrutiny and can thus be assumed to be “true” for the time being (Hendriks, Kienhues, and Bromme 2015). Thus, scientists are expected to possess pertinent and extensive knowledge about a problem to vouch for the veracity of knowledge claims; they are expected to pursue this knowledge using reliable scientific methods that meet current standards; and, finally, they are expected to keep in mind the interests of others when producing knowledge, that is, to contribute to science’s social role in society. As such, evaluations of scientists’ trustworthiness correspond closely to the epistemic requirements that warrant (public) trust in science, as explained above. Empirically, studies on trust in systems usually do not differentiate between trust in the system and evaluations of the trustworthiness of representatives of the system, nor do they consider the relationship between the two constructs. We add this aspect to the literature and assume that:
H1: Judgments of scientists’ trustworthiness positively predict trust in science.
Basic Orientations Toward Science as Antecedents of Perceived Trustworthiness of Scientists and Trust in Science
In this paper, we focus on two–somewhat antagonistic–attitudinal constructs that reflect people’s default stance when engaging with science: positivistic attitudes about science and science-related populist attitudes. Both of these basic orientations toward science can be perceived as rather persistent dispositions acquired throughout socialization and education and may thus represent “a relatively stable tendency among citizens” (Brossard and Nisbet 2007, 30), similar to pro-technology cultural orientations (Kahan et al., 2009), deference to the cultural authority of science (Gauchat 2011), technocratic attitudes (Bertsou and Pastorella 2017), or political populist attitudes (Pruysers 2021). Accordingly, both positivistic and science-related populist attitudes can be understood as latent drivers of more volatile and adaptable views of science, such as those that we examine in this study, i.e., judgments of scientists’ epistemic trustworthiness. We therefore hypothesize that basic orientations toward science influence 1) judgments of scientists’ trustworthiness and 2) trust in science.
Positivistic Attitudes Toward Science
Attitudes about science have long been investigated in research on the public understanding of science, especially with a focus on long-term developments and differences between countries (Bauer, Petkova, and Boyadjieva 2000; Bauer, Allum, and Miller 2007; Castell et al., 2014; National Science Board 2018; European Commission 2021), and have been found to predict trust in science (Roberts et al., 2013). In contrast to these general attitudes, the term “deference to scientific authority” is often used to describe a value disposition or stable worldview directed at the cultural authority of science (Brossard and Nisbet 2007; Akin et al., 2021). In short, deference to scientific authority encompasses the belief that science can deliver “true facts” about the (natural) world (Brossard and Nisbet 2007; Howell et al., 2020). That is, while attitudes toward science and trust in science have been found to vary in accordance with the topic or specific actors or institutions (possibly because of topic-specific attitudes or situation-specific judgements of trustworthiness), deference to scientific authority may reflect a more general belief system that is rather stable (Howell et al., 2020). This notion fits well with accounts of trust that include anticipatory positive emotions or confidence (Engdahl and Lidskog 2014; Cummings 2020). In fact, deference to scientific authority has been positively linked to evaluations of scientists as sources of information (A. A. Anderson et al., 2012; Howell et al., 2020).
Given these findings, it is reasonable to assume that trust in science (as a system) should also reflect the trustor’s general confidence in science as a way of knowing. However, the notion of epistemic trust as outlined above does not imply blind acceptance of scientific authority (even though these two concepts might be related; Howell et al., 2020); rather, positivistic beliefs should generally shift judgments of scientists’ trustworthiness and trust in science in a positive direction. Thus, to avoid conceptual confusion (and to reflect that we used items that could be assigned both to general attitudes toward science and to the belief in and deference to the authority of science), we use the term positivistic attitudes toward science to denote relatively stable attitudes pertaining to individual views of science as a unique contributor of knowledge about the (natural) world. Even though evidence is scarce, and measurements of both concepts vary (Gauchat 2011; Xiao 2013; Besley 2018; Howell et al., 2020), positivistic attitudes toward science seem to be related to trust in science. From a theoretical standpoint, positivistic attitudes toward science should have an overall positive effect on trust in science and trustworthiness attributions to scientists. Therefore, we test their capacity to predict people’s trust in science as well as the perceived trustworthiness of scientists.
H2: Positivistic attitudes toward science positively predict trust in science.
H3: Positivistic attitudes toward science positively predict judgements of scientists’ trustworthiness.
Science-Related Populism
Citizens may not only hold favorable but also critical attitudes toward science and its merits for society. One distinct variant of these sentiments has been conceptualized as “science-related populism,” which describes a thin-centered ideology that suggests a fundamental antagonism between the “ordinary people” and “academic elites,” i.e., scientists, experts, and scientific institutions (Mede and Schäfer 2020). Science-related populists assert that academic elites produce knowledge that is useless because it allegedly employs unreliable methods; that the academic elites are ideologically biased because they allegedly follow multiculturalist, environmentalist, or other political agendas; and that the academic elites allegedly ignore the needs of ordinary people, such that their findings lack practical relevance and fail to inform people’s daily life decisions and offer solutions to societal problems (Saurette and Gunster 2011; Forchtner, Kroneder, and Wetzel 2018; Krämer and Klingler 2020). Accordingly, science-related populism is at odds with key components of trust in science: While science-related populists deny that scientists are knowledgeable experts who act in the interest of the general population, trust strongly depends on these beliefs. Similarly, science-related populism is also inherently different from positivistic attitudes toward science: The former challenges scientists’ knowledge and power claims, portraying scholars as elitist and incompetent, whereas the latter supports these claims, emphasizing the societal merits of science and demanding that it have unrestricted authority (Mede and Schäfer 2020).
Individuals who endorse science-related populism can be described as holding “science-related populist attitudes” (Mede, Schäfer, and Füchslin 2021). Such attitudes have been conceptualized as a multidimensional construct (similar to political populist attitudes; Schulz et al., 2018), with each dimension reflecting one of the four theoretical components of science-related populism, i.e., (positive) conceptions of the ordinary people, (negative) conceptions of the academic elite, demands for science-related decision-making sovereignty, and demands for truth-speaking sovereignty (Mede, Schäfer, and Füchslin 2021). Importantly, diagnosing science-related populism requires that all these four components occur simultaneously within a person: Individuals who only hold negative conceptions of academic elites, for example, but reject the three other components, may only be described as supporters of anti-academic views but not as proponents of science-related populism. This premise has been described as non-compensatoriness (Wuttke, Schimpf, and Schoen 2020).
So far, empirical research on science-related populist attitudes among the general public and how they relate to trust in science is scarce. Some isolated aspects of science-related populist attitudes and related phenomena have been investigated in several survey and experimental studies; these studies suggest, for example, that trust in societal institutions is lower among people who hold populist attitudes (Geurkink et al., 2020), support anti-elite views (Silva et al., 2017), and endorse conspiracy theories (Miller, Saunders, and Farhart 2016). Similarly, trust in universities and university researchers tends to be lower among voters of populist parties in countries like Finland (Saarinen, Koivula, and Keipi 2020) and Israel (Filc and Lebel 2005). Further research suggests that endorsement of pseudoscientific beliefs is associated with lower perceived trustworthiness of scientists (Fasce and Picó 2019), that disapproval of the scientific method is linked with lower public trust in science (Gauchat 2011), and that people who reject scientific authority have less trust in scientists to tell them the truth about the risks and benefits of nanotechnology (Anderson et al., 2012). Findings like these are in line with results from Mede et al. (2021), which indicate that science-related populist attitudes are negatively correlated with trust in science, trust in university scientists, and positivistic attitudes toward science, such as the belief that science makes people’s lives better. Accordingly, we hypothesize the following:
H4: Science-related populism negatively predicts trust in science.
H5: Science-related populism negatively predicts judgements of scientists’ trustworthiness.
Experiences With Science as an Antecedent of Scientists’ Trustworthiness
In science communication research, the idea of engaging with science is key for models such as the public engagement model and the conversation model (Trench 2008), which propose that science communication is a multi-directional rather than a linear process and which emphasize the role of citizens. Research on public engagement with science has largely focused on how people get in contact with science (e.g. Wissenschaft im Dialog, 2018), how people contribute to science (Hargittai, Füchslin, and Schäfer 2018), and how they process scientific information (Bruine de Bruin and Bostrom 2013). Looking at the effects of engaging with science, we argue that personal and mediated communicative experiences with science in online and offline channels contribute to building trust in science by predicting judgements of scientists’ trustworthiness (see also Akin and Scheufele 2017).
Our argument is based on the idea of “facework connections” (p. 83) put forth by Giddens (1990), suggesting that communication with representatives of a system is an important factor for building trust in systems. In general, science is perceived by most citizens as being disembedded from everyday life. Thus, instances of science communication and outreach may help to re-embed science and to bring citizens and science closer together through direct or mediated communication with science and scientists (Reif 2021). Based on this argument, communication with individuals who represent the system serves as an access point, which can potentially influence people’s trust in the system and its representatives.
The relationship between experiences and trustworthiness is long established in the literature. Following Sztompka (2000), past experiences are the main foundation of trustworthiness, and other popular trust models also assign a central role to experiences (Zucker 1986; Lewicki and Bunker 1995; Mayer, Davis, and Schoorman 1995; Endreß 2002). Some authors argue that the channel through which people communicate with trustees is important, and they further argue that social presence is an indispensable prerequisite of trustworthiness (Cyr et al., 2007). Experiences with the trustee are often obtained indirectly, through reported experiences of others or by imputation from outcomes of prior direct or mediated exchange (Zucker 1986).
Science communication and outreach are therefore central for people’s experiences with science and scientists. Experiences with science can be differentiated by whether they occur online or offline, whether they reflect mass-mediated or non-mass-mediated communication, and by their level of interaction. Encountering information about science in media coverage (e.g., in newspapers, TV, or radio) or online (e.g., via social networks, blogs, YouTube, or Wikipedia) represents the lowest level of interaction. The level of interaction rises if the situation for communicating with science allows active engagement. Traditionally, people experience science interactively by visiting museums or zoos or by talking with friends. In recent years, these possibilities have been supplemented by online channels, which are increasingly used for finding out about science and for interacting with it (Brossard 2013; Su et al., 2015; National Science Board, 2018). In the online context, the interactive features of social media allow people to communicate with scientists directly; Reif et al. (2020) argue that this possibility can positively influence people’s trust in scientists. Huber et al. (2019) argue that social media increases trust in science because it offers the possibility to interact with like-minded sources that the individual may trust. In the offline context, meeting scientists in person at events or discussing science with friends represents a highly interactive form of experience.
Nonetheless, some evidence also suggests that non-interactive experiences with scientific information contribute to how people perceive scientists and judge their trustworthiness. Su et al. (2015) showed that people who prefer online-only information sources have higher scientific knowledge, which was similarly shown by Dudo et al. (2011), who further found a positive relation between Internet use and attitudes toward science. For offline forms such as scientific information obtained through newspapers or magazines, empirical evidence on their influence on judgements of trustworthiness is relatively limited (for exceptions see Hmielowski et al., 2014), but findings nonetheless suggest that news media could play a role for evaluating scientists’ trustworthiness. In sum, we argue that all interactions with science (online or offline, interactive or non-interactive) should positively predict trust in science and the trustworthiness judgments of scientists.
H6: Communicative experiences with science positively predict trust in science.
H7: Communicative experiences with science positively predict judgements of scientists’ trustworthiness.
Method and Measurement
Data
To test our hypotheses, we relied on the Science Barometer Switzerland survey, a nationally representative population survey in Switzerland, which was fielded from June 17 to July 20, 2019. The sample contained 1,050 respondents (age: M = 48.3, SD = 17.3; 53.5% female; 47.8% attained tertiary education). Respondents were recruited via public telephone listings (81%, age and gender quotas) and random digit dialing (19%, no quota) and surveyed in computer-assisted telephone interviews led by professional interviewers of the Swiss polling company Demoscope. Of all calls, 2.6% resulted in completed interviews, 21.7% were answered but the interview request was declined, and 75.7% were not answered or reached an invalid number.
Measurements
To measure trust in science, we used a single-item question (see Supplementary Appendix SA2 for all item wordings and scales used), asking “On a scale of 1–5, where 1 means ‘very low’ and 5 means ‘very high’, how high would you say is your trust in science in general?” Single-item measures like these have been employed and tested in various large-scale survey studies on public perceptions of science and research (e.g. National Science Board 2018) and have been shown to correlate strongly with multi-item measures of trust in science (Supplementary Appendix SA1 in Wellcome Trust 2019).
To measure epistemic trustworthiness perceptions, we used the Muenster Epistemic Trustworthiness Inventory (METI), a reliable and well-established semantic differential scale for measuring individuals’ trustworthiness evaluations of scientists in opinion surveys and experiments (Hendriks, Kienhues, and Bromme 2015). Due to questionnaire length restrictions, we relied on a shortened nine-item version. To construct this shortened METI, we ran a confirmatory factor analysis on the original data from Hendriks et al. (2015) and selected the three best-performing items for each of the three trustworthiness dimensions (χ2 = 66.539, df = 24, p = 0.004; CFI = 0.971, TLI = 0.957, RMSEA = 0.072, SRMR = 0.056; all factor loadings p < 0.001). Our nine-item METI can thus be considered a parsimonious yet reliable survey scale to measure epistemic trustworthiness on three dimensions. For the analyses, we computed unweighted mean values of each of the three METI dimensions, with higher values indicating greater perceived expertise, integrity, and benevolence, respectively.
Positivistic attitudes toward science were measured with five items tapping into different aspects of optimistic views on science and its merits for individuals and society (von Roten 2009; Pardo and Calvo 2002; Prpić 2011). These items have been tested and validated in a range of large-scale survey studies in several countries and are well-established measures in survey research on science communication and public perceptions of science and research (e.g. European Commission 2010; National Science Board 2018). To examine whether the items form one stable factor, we performed an exploratory factor analysis using oblimin rotation (Kaiser-Meyer-Olkin = 0.75, Bartlett’s χ2 [4] = 159.6, p < 0.001) with data from a previous wave of the Science Barometer Switzerland. The analysis resulted in one factor with an eigenvalue of 2.17, explaining 43% of the variance. Hence, we used a mean score of all five items in our analyses, with higher values representing more positivistic attitudes toward science (Cronbach’s alpha = 0.70).
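A minimal R sketch of this kind of dimensionality check and index construction, using the psych package, is shown below; pos_items (a data frame holding the five items) is a hypothetical placeholder.

```r
# Illustrative sketch: unidimensionality check and mean index for the five
# positivistic-attitude items. "pos_items" is a hypothetical data frame.
library(psych)

KMO(pos_items)                                   # Kaiser-Meyer-Olkin measure of sampling adequacy
cortest.bartlett(cor(pos_items, use = "pairwise"), n = nrow(pos_items))  # Bartlett's test of sphericity

efa_pos <- fa(pos_items, nfactors = 1, rotate = "oblimin")  # exploratory factor analysis
efa_pos$Vaccounted                               # eigenvalue and proportion of variance explained

alpha(pos_items)                                 # internal consistency (Cronbach's alpha)
pos_index <- rowMeans(pos_items, na.rm = TRUE)   # mean score used in the structural models
```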
Science-related populist attitudes were measured with the SciPop Scale, a robust eight-item survey scale, which quantifies science-related populist attitudes along its four conceptual components and has been tested and validated in different languages and samples in Switzerland (Mede, Schäfer, and Füchslin 2021; Mede and Schäfer 2021). The components were captured by four two-item subscales (Spearman-Brown reliability coefficients: 0.75, 0.78, 0.73, 0.78, respectively) following Mede, Schäfer, and Füchslin (2021). For our hypothesis tests (H1-H5), we used a single aggregate score to quantify science-related populist attitudes, which we obtained by taking the smallest value of the four subscale means (Mede, Schäfer, and Füchslin 2021). This approach accounts for the conceptual premise that science-related populism relies on the concurrent presence of all four dimensions (whereas other approaches, such as averaging all eight item values, would produce high populism scores for respondents who endorse some dimensions fully but reject others completely; Wuttke et al., 2020).
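To make the non-compensatory scoring concrete, a minimal R sketch is given below; the column names (scipop_pe1 through scipop_ts2) and the data frame survey_data are hypothetical placeholders for the eight SciPop items.

```r
# Illustrative sketch: non-compensatory aggregate of the SciPop Scale, i.e.,
# the smallest of the four two-item subscale means. Names are placeholders.
subscales <- list(
  people   = c("scipop_pe1", "scipop_pe2"),  # conceptions of the ordinary people
  elite    = c("scipop_el1", "scipop_el2"),  # conceptions of the academic elite
  decision = c("scipop_ds1", "scipop_ds2"),  # decision-making sovereignty
  truth    = c("scipop_ts1", "scipop_ts2")   # truth-speaking sovereignty
)

# One mean per respondent and subscale (n x 4 matrix)
subscale_means <- sapply(subscales, function(items)
  rowMeans(survey_data[, items], na.rm = TRUE))

# Non-compensatory scoring: a respondent's populism score is capped by the
# dimension they endorse least, so all four components must be present.
survey_data$scipop <- apply(subscale_means, 1, min)
```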
Communicative experiences with science: We asked respondents about their mass-mediated and non-mass-mediated experiences with science.
Exploratory analyses of the mass-mediated experiences with science, using survey data from a previous wave of the Science Barometer Switzerland, corroborated what conceptual considerations had suggested, namely that differentiated investigations of people’s experiences with science and research necessitate distinguishing between low-interactive and high-interactive experiences. An exploratory factor analysis using oblimin rotation (Kaiser-Meyer-Olkin = 0.77, Bartlett’s χ2 [11] = 182.96, p < 0.001) resulted in three factors with a combined eigenvalue of 5.41, explaining 49% of the variance¹. The factors differentiate the level of activity needed to encounter scientific information (high scores indicate more frequent experiences). Factor 1 included items asking respondents about their use of social networks, YouTube, and blogs for getting scientific information (α = 0.62). The second factor included using magazines, scientific websites, and websites of newspapers and magazines for getting scientific information (α = 0.55). The third factor included items asking respondents about their use of radio², online media libraries, and television for getting scientific information (α = 0.52).
Non-mass-mediated experiences with science were measured with items asking respondents about ways of engaging with science in person. An exploratory factor analysis using oblimin rotation (Kaiser-Meyer-Olkin = 0.62, Bartlett’s χ2 [6] = 593.21, p < 0.001) resulted in one factor with an eigenvalue of 1.93, explaining 48% of the variance: The factor included visiting events and museums, talking to peers, visiting zoos, and using messengers to talk about science (α = 0.66; high scores indicate more frequent experiences).
Controls: Existing research indicates that people’s epistemic trustworthiness perceptions, positivistic attitudes toward science, science-related populist attitudes, and (mass-mediated) experiences with science may depend on sociodemographic characteristics and other attitudes toward science (Nisbet et al., 2002; Metag 2020; Besley, Lee, and Pressgrove 2021). Therefore, we controlled for age (continuous), gender (dichotomous), formal education (categorical), interest in science, attention to media coverage on science and research, scientific literacy, and proximity to science (measured as personal acquaintance with scientists).
Analysis Strategy
We followed the two-step approach recommended by Anderson and Gerbing (1988). First, to test for convergent and discriminant validity of the constructs, we performed a confirmatory factor analysis (CFA). Second, we tested the structural model based on the measurement model identified in the first stage. This approach makes it possible to determine if misspecifications in the final model are grounded in the measurement of the constructs or the theoretical specifications. In the second stage, we tested not only our theoretical model but also the next-best constrained and the next-best unconstrained model. Comparing these three models makes it possible to judge if adding or deleting paths changes the model fit.
All models were estimated using structural equation modeling (see Supplementary Appendix A1 for means, standard deviations, and intercorrelations of the variables). We relied on the lavaan R-package, version 0.6-7 (Rosseel 2012), using maximum likelihood estimation with robust standard errors and a mean- and variance-adjusted test statistic (MLMVS) as the estimator. We used this estimator because of high levels of kurtosis and moderate levels of skewness for several variables in our sample. Missing data were imputed using case-wise maximum likelihood estimation because the overall percentage of missing values was under the threshold of 5% (Kline 1998). We used item parceling on the level of dimensions to achieve a parsimonious model (Bandalos 2002). Experiences with science and perceived trustworthiness of scientists were included as second-order constructs measured by the dimensions indicated above. Positivistic attitudes and science-related populist attitudes were included as mean indices, as described above.
The model fit was assessed using the probability of the mean- and variance-adjusted chi-square value (p ≥ 0.05), root-mean square error of approximation (RMSEA ≤0.06), Tucker Lewis index (TLI ≥0.95), standardized root-mean square residual (SRMR <0.08), and the comparative fit index (CFI ≥0.9) as recommended by Hu and Bentler (1999).
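As an illustration of this setup, the following lavaan sketch shows a simplified version of such a model specification; all variable names are placeholders, and the control variables, the second-order structure of the experience construct, and the error-variance constraint described in footnote 5 are omitted for brevity.

```r
# Simplified sketch of the model specification; not the original analysis script.
# Parcels (expertise, integrity, benevolence, non_med, lo_int, me_int, hi_int),
# the mean indices (pos, scipop), and the single trust item are placeholders.
library(lavaan)

model <- '
  # measurement part: dimension-level parcels as indicators
  trustworth =~ expertise + integrity + benevolence
  experience =~ non_med + lo_int + me_int + hi_int

  # structural part
  trustworth ~ pos + scipop + experience
  trust      ~ trustworth + pos + scipop + experience
'

fit <- sem(model, data = survey_data, estimator = "MLMVS")
summary(fit, standardized = TRUE, fit.measures = TRUE)
fitMeasures(fit, c("cfi", "tli", "rmsea", "srmr"))
```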
Results
Measurement Model
The CFA of the measurement model showed an overall good model fit and met the limits proposed by Hu and Bentler (1999) for three fit indices (χ2 p < 0.001, CFI = 0.95, RMSEA = 0.06, TLI = 0.92, SRMR = 0.04). An analysis of the correlation matrix (Supplementary Appendix A1) showed initial evidence for the convergent and discriminant validity of the constructs³. To formally evaluate discriminant validity, we followed the procedure recommended by Anderson and Gerbing (1988) and used chi-square difference tests for constrained and unconstrained models⁴. We ran this test for every possible pairing of constructs in our study (Supplementary Appendix A4). All unconstrained models returned a significantly lower chi-square value, indicating good discriminant validity of our constructs.
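The logic of these pairwise tests can be sketched in lavaan as follows, here for one hypothetical pair of constructs; item and construct names are placeholders.

```r
# Illustrative sketch of one constrained/unconstrained comparison
# (Anderson and Gerbing 1988). Names are hypothetical placeholders.
library(lavaan)

pair_model <- '
  trustworth =~ expertise + integrity + benevolence
  experience =~ non_med + lo_int + me_int + hi_int
'

# Unconstrained model: the factor correlation is freely estimated
fit_free  <- cfa(pair_model, data = survey_data, std.lv = TRUE)

# Constrained model: the factor correlation is fixed to 1,
# i.e., the two constructs are forced to collapse into one
fit_fixed <- cfa(paste(pair_model, "trustworth ~~ 1*experience", sep = "\n"),
                 data = survey_data, std.lv = TRUE)

# A significantly lower chi-square for the unconstrained model
# supports discriminant validity
anova(fit_fixed, fit_free)
```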
Structural Model
A structural model (Figure 2) based on the theoretical model (Figure 1), with the control variables included on the level of the dimensions of the constructs, showed good overall fit and met the limits proposed by Hu and Bentler (1999) for three fit indices (χ2 p < 0.001, CFI = 0.99, RMSEA = 0.04, TLI = 0.94, SRMR = 0.02)⁵. Following the recommendations of Anderson and Gerbing (1988), we compared the theoretical model with a null model, a saturated model, and the next most likely constrained and unconstrained alternative models from a theoretical perspective⁶. All models performed significantly worse or not significantly better than the theoretical model (Supplementary Appendix A5). Therefore, we conclude that our model fits the data and is suitable for analyzing our hypotheses.
FIGURE 2. Structural equation model. Notes: scipop = science-related populism; trustworth = trustworthiness of scientists; pos = positivistic attitudes toward science; exp = communicative experiences with science; non-med = non-mass-mediated experiences; med = mass-mediated experiences; hiInt = highly interactive experiences; meInt = medium interactive experiences; loInt = low interactive experiences.
Looking at the hypothesized effects (Figure 3), we can already reject H6 and H7. Experiences with science did not significantly predict the perceived trustworthiness of scientists (β = 0.01, p = 0.90) or trust in science (β = 0.07, p = 0.11). Additionally, none of the control variables contributed relevant explanatory power for trust in science; however, they significantly influenced other constructs in the model⁷.
FIGURE 3. Structural Model (N = 742); path coefficients are standardized estimates; **p < 0.01; ***p < 0.001.
Focusing on the hypotheses referring to trust in science as the dependent variable, H1, H2, and H4 were confirmed according to our data. The perceived trustworthiness of scientists positively predicted trust in science (β = 0.24, p < 0.001), as did positivistic attitudes (β = 0.33, p < 0.001). Science-related populist attitudes negatively predicted trust in science (β = −0.11, p < 0.01). Positivistic attitudes toward science were by far the best predictor of trust in science. In addition to the direct effect, science-related populism and positivistic attitudes also influenced trust in science indirectly by contributing to the perceived trustworthiness of scientists (Table 1).
H3 and H5 were also supported, indicating that positivistic attitudes positively (β = 0.29, p < 0.001) and populist attitudes negatively (β = −0.13, p < 0.01) predict the perceived trustworthiness of scientists.
Discussion
In this study, we tested a theoretical model of trust in science (Figure 1) and found that positivistic attitudes are the strongest predictor of the trustworthiness of scientists and trust in science, while science-related populist attitudes negatively predict both to a lesser degree. Communicative experiences with science (mass-mediated and non-mass-mediated combined) neither predicted judgments of the epistemic trustworthiness of scientists, nor trust in science. Surprisingly, the epistemic trustworthiness ascribed to scientists as representatives of science as a system is not a very strong predictor of trust in science.
Psychological conceptualizations of trust, especially rational choice-based concepts of interpersonal trust, treat the perceived trustworthiness of the trustee as the central influencing variable; some authors even speak of trust as “reflected trustworthiness” (Sztompka, 2000). Overall, the consensus is that trustworthiness is the most important predictor of trust and can be divided into dimensions, of which expertise, benevolence, and integrity are the most popular. Sociological conceptualizations of trust in systems place prior experience at the center of their considerations and frame trust as an expectation based on accumulated experience with the trusted party. In this context, individuals serve as access points to systems; these representatives are in turn assessed against the dimensions of trustworthiness mentioned above and contribute to one’s overall impression of the system. In summary, and applying these arguments to the topical context of science, prior experiences with scientists as representatives of the system and the assessment of their trustworthiness should significantly determine trust in science.
Our study takes these considerations as a starting point and goes beyond them, asking how basic orientations toward the performance of the system of science influence trust in science. This focus on orientations toward the trustee has been largely omitted from previous research, although such research has generally shown that trust in science is linked to a general willingness to defer to scientific knowledge. In line with this, our study found that basic orientations toward science are the strongest predictor of trust in science. However, our results disentangle this relationship: Positivistic attitudes in particular positively predict trust in science, whereas science-related populist attitudes predict it negatively. Our hypothesis, mainly derived from psychological trust research, that the perceived trustworthiness of scientists positively influences trust in science was confirmed, but the effect size is comparatively small. Therefore, interpreting trust as a mere expression of the trustworthiness of representatives of the system would be an inappropriate oversimplification.
Contrary to sociological conceptualizations of trust in systems, encountering scientific information and potentially interacting with scientists in traditional media, online settings, or non-mass-mediated forms of science communication had no significant effect on trust in science or the perceived trustworthiness of scientists. Accordingly, the influence of citizens’ experiences with science via science communication on trust in science seems rather modest compared to the other factors involved in our study. However, people’s willingness to actively seek scientific information has also been conceptualized as science curiosity (Kahan et al., 2017) rather than measured via self-reported experiences. Future research on the relation between people’s engagement or experiences with science and trust in science should further develop this concept and its measurement.
According to our results, how people judge whether a scientist is trustworthy and whether they trust science depends primarily on general attitudes toward science. Yet interestingly, (communicative) experiences with science were positively correlated with science-related populism as well as with positivistic attitudes toward science. This finding suggests that people with both skeptical and very positive attitudes have frequent experiences with science, which resonates with findings in the context of science communication on climate change (Leiserowitz et al., 2012). Going beyond the frequency of communicative experiences, it is particularly important to examine the quality of experiences and the kind of sources used by people with skeptical and positivistic attitudes toward science. Our results support this endeavor, which is particularly informed by research on motivated reasoning and selective exposure (e.g. Maier et al., 2014; Druckman and McGrath 2019). Future research could differentiate among different kinds of experiences with science communication and evaluate their influence on trust in science as well as on the perceived trustworthiness of scientists.
A closer look at trustworthiness suggests that these perceptions are predicted by positivistic and science-related populist attitudes. Thus, when people hold positivistic attitudes and are less prone to science-related populist attitudes, they are more likely to deem scientists trustworthy–possibly because they think that scientists live up to their expectations, making epistemically warranted claims and providing benefits to society. These findings underline the importance of basic orientations toward the system, which also predict evaluative judgements about its representatives.
Taken together, our analysis highlights the importance of including several antecedents to uncover how trust in science manifests in and interacts with people’s belief system. However, the factors included in our analysis apparently did not capture all relevant factors needed to explain trust in science and the perception of scientists’ trustworthiness. The model fit suggests that there might be additional explanatory variables. One promising path for finding these may be to include measures of experiences with science that go beyond asking respondents about points of contact with science and scientists. Instead, future studies could focus on the quality of experiences with science, asking respondents, for example, whether they have had positive or even personally meaningful (communicative) experiences with science, scientists, or scientific communication because they received valuable information on an issue they did not previously understand or important information for individual decision-making. This is especially relevant considering that further results (Figure 3) indicate a positive correlation between (communicative) experiences with science and science-related populism, suggesting that it is not the quantity of experiences with science that shapes attitudes, but the specific sources people use to inform themselves and how those sources portray science and scientists. Furthermore, future studies could examine whether access points to science could be topic dependent and whether differences exist for citizens accessing scientific information in online vs offline channels. Results of our and further studies should then be scrutinized in terms of their implications for science communication practice: For example, science communicators may facilitate long-term positivistic attitudes toward science by providing citizens with further “access points” to science (e.g., via participatory science communication formats), and alleviate science-related populist attitudes by targeting dis- and misinformation spread by populist actors (e.g., via inoculation campaigns).
Furthermore, our model only addresses expectations toward science and scientists that reflect more stable views of and attitudes toward science and scientists. However, it might be the case that people often do not access a coherent and comprehensive representation of the scientific system when making these types of judgments on such surveys; instead, they might give more accurate evaluations of how they view science when they are given the context of specific scientific issues (Liu and Priest 2009; Hendriks, Kienhues, and Bromme 2016). As such, it would also be worthwhile to test the stability of the model identified in this paper for issues or scientific disciplines that involve more or less societal relevance (such as research on climate change vs black holes). Furthermore, experimental research has shown that trustworthiness judgments regarding scientists are based on inferential cues. Thus, trust in science may also depend on parameters in a given situation. For example, trust in science may fundamentally depend on the risks that scientific innovations have for a person or society (e.g. Cummings 2020). Furthermore, a study by Besley et al. (2017) showed that the willingness to see science as a legitimate source of knowledge decreased in the presence of industry-science connections. In sum, we must acknowledge that trust in science might often be more adaptive to a specific topic or information present at a certain point and accessible to the trustor than can be grasped by survey measures or statistically embedded in the present model.
Limitations
The study had several limitations that should be considered when interpreting the results. First, the results of the study may not be generalizable to other countries due to the specific context of Switzerland. Overall, Switzerland’s population has been shown to have rather positive conceptions of science and research (European Commission 2010) and trusts scientists (Wellcome Trust 2019). Our findings may thus not generalize to other countries where more critical, hostile, or divisive views prevail in society or in public debate (Rutjens et al., 2018).
Second, due to the cross-sectional approach, we were not able to detect causal relationships empirically. Especially for the relationship between trustworthiness, science-related populist attitudes, and positivistic attitudes, long-term examinations or experimental research would be a valuable supplement to the existing literature.
Third, we recognize a limitation in the single-item measurement of trust in science. As debates about the conceptualization and measurement of trust in other contexts show (Hamm et al., 2019), a definitive measure of trust in science (as a system) still needs to be validated (Reif and Guenther 2019).
Fourth, our measurement of experiences with science captured a very broad range of communicative experiences, which showed only acceptable convergent validity. Future studies could focus on more specific communicative experiences and examine their influence on trust in science.
Conclusion
Taken together, our study contributes important findings to the literature on trust as well as to science communication research. For science communication, we showed that the frequency of a person’s communicative experiences with science did not significantly predict trust in science. The high explanatory value of basic orientations toward science suggests that future studies should focus more on these kinds of attitudes when explaining trust in systems. Hence, some promising ways to indirectly increase trust in science might be to communicate how scientific processes are able to produce epistemically reliable knowledge, and to highlight the function that science holds for providing information and solutions to problems at the societal scale.
Data Availability Statement
Publicly available datasets were analyzed in this study. This data can be found here: https://doi.org/10.23662/FORS-DS-1229-1.
Ethics Statement
Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent from the participants’ legal guardian/next of kin was not required to participate in this study in accordance with the national legislation and the institutional requirements.
Author Contributions
NM, MS, and JM contributed to the conception of the survey. FW, NM, and FH wrote the manuscript and FW conducted the analysis. All authors contributed to the conception and refinement of the manuscript.
Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher’s Note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Acknowledgments
We acknowledge support from the Open Access Publication Fund of the University of Muenster.
Supplementary Material
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fcomm.2021.822757/full#supplementary-material
Footnotes
1. The use of printed newspapers, movies, and Wikipedia for scientific information was excluded due to double loadings or low communality.
2. The items asking for radio and television use were slightly different between the waves. In 2016, we asked for radio/television in general and in 2019 for radio/television excluding the SRF. We nonetheless proceeded with the factor structure found in the data from 2016 in the following analyses.
3. The inter-factor correlations did not surpass the 0.44 level, which suggests good discriminant validity. The correlations of trust with the dimensions of trustworthiness range from 0.33 to 0.34 and were much lower than the intra-factor correlations, which range from 0.52 to 0.69. This suggests very good convergent and discriminant validity of the trustworthiness scale. The intra-factor correlations for communicative experiences with science were very mixed (ranging from 0.06 to 0.47), indicating a rather low convergent validity of the construct. However, all factor loadings were highly significant (Supplementary Appendix A3) and had the correct direction, indicating acceptable convergent validity (Anderson and Gerbing, 1988).
4. In the constrained model, the covariance between the constructs is set to one. If the chi-square value for the unconstrained model is significantly lower, discriminant validity is supported.
5. We constrained the error variance of the latent variable capturing mass-mediated experiences with science to equal zero, because it was not significantly different from zero. We tested this with z-tests and a likelihood-ratio test comparing the constrained and unconstrained model.
6. Specifications of the models: null model = all parameters relating the constructs fixed at zero; saturated model = all parameters relating the constructs are estimated; next most likely constrained model = paths from positivistic attitudes and science-related populism to trust subtracted, thereby eliminating the direct influence of orientations toward science on trust in science; unconstrained model = paths from the dimensions of experiences to science-related populism and positivistic attitudes added.
7. Scientific interest (β = 0.10, p < 0.01) and attention to media coverage on science (β = 0.08, p < 0.05) significantly predicted trust in science, but they showed very low effect sizes. Science-related populist attitudes were significantly predicted by the control variables age (β = 0.14, p < 0.001), education (β = −0.16, p < 0.001), political orientation (β = 0.14, p < 0.001), and scientific literacy (β = −0.17, p < 0.001). Perceived expertise of scientists was significantly predicted by political orientation (β = −0.10, p < 0.01), age (β = −0.09, p < 0.05), scientific literacy (β = 0.07, p < 0.05), and attention to media coverage (β = 0.10, p < 0.05). Scientists’ perceived integrity was significantly predicted by attention to media coverage (β = 0.10, p < 0.05). Attention to media coverage also predicted scientists’ perceived benevolence (β = 0.12, p < 0.01). Non-mediated experiences with science were predicted by scientific interest (β = 0.23, p < 0.001), attention to media coverage (β = 0.29, p < 0.001), age (β = −0.10, p < 0.01), and education (β = 0.11, p < 0.01). Highly interactive mass-mediated experiences were significantly predicted by age (β = −0.34, p < 0.001), scientific literacy (β = −0.14, p < 0.001), scientific interest (β = 0.11, p < 0.01), attention to media coverage (β = 0.12, p < 0.01), and education (β = −0.07, p < 0.05). Medium interactive mass-mediated experiences were significantly predicted by scientific interest (β = 0.27, p < 0.001), attention to media coverage (β = 0.40, p < 0.001), age (β = −0.10, p < 0.01), and education (β = 0.10, p < 0.01). Low interactive mass-mediated experiences were significantly predicted by age (β = 0.26, p < 0.001), attention to media coverage (β = 0.17, p < 0.001), scientific literacy (β = −0.11, p < 0.01), and proximity to science (β = −0.10, p < 0.01). Positivistic attitudes were significantly predicted by scientific interest (β = 0.16, p < 0.001), attention to media coverage (β = 0.18, p < 0.001), and scientific literacy (β = −0.10, p < 0.01).
References
Akin, H., Cacciatore, M. A., Yeo, S. K., Brossard, D., Scheufele, D. A., and Xenos, M. A. (2021). Publics' Support for Novel and Established Science Issues Linked to Perceived Knowledge and Deference to Science. Int. J. Public Opin. Res. 33 (2), 422–431. doi:10.1093/ijpor/edaa010
Akin, H., and Scheufele, D. A. (2017). “Overview of the Science of Science Communication,” in The Oxford Handbook of the Science of Science Communication. Editors K. Hall Jamieson, D. M. Kahan, and D. A. Scheufele (Oxford: Oxford University Press), 25–33. doi:10.1093/oxfordhb/9780190497620.013.3
Anderson, A. A., Scheufele, D. A., Brossard, D., and Corley, E. A. (2012). The Role of Media and Deference to Scientific Authority in Cultivating Trust in Sources of Information about Emerging Technologies. Int. J. Public Opin. Res. 24 (2), 225–237. doi:10.1093/ijpor/edr032
Anderson, J. C., and Gerbing, D. W. (1988). Structural Equation Modeling in Practice: A Review and Recommended Two-step Approach. Psychol. Bull. 103 (3), 411–423. doi:10.1037/0033-2909.103.3.411
Bandalos, D. L. (2002). The Effects of Item Parceling on Goodness-Of-Fit and Parameter Estimate Bias in Structural Equation Modeling. Struct. Equation Model. A Multidisciplinary J. 9 (1), 78–102. doi:10.1207/S15328007SEM0901_5
Battiston, P., Kashyap, R., and Rotondi, V. (2021). Reliance on Scientists and Experts during an Epidemic: Evidence from the COVID-19 Outbreak in Italy. SSM - Popul. Health 13, 100721. doi:10.1016/j.ssmph.2020.100721
Bauer, M. W., Allum, N., and Miller., S. (2007). What Can We Learn from 25 Years of PUS Survey Research? Liberating and Expanding the Agenda. Public Underst Sci. 16 (1), 79–95. doi:10.1177/0963662506071287
Bauer, M. W., Petkova, K., and Boyadjieva., P. (2000). Public Knowledge of and Attitudes to Science: Alternative Measures that May End the "Science War". Sci. Technol. Hum. Values 25 (1), 30–51. doi:10.1177/016224390002500102
Bertsou, E., and Pastorella, G. (2017). Technocratic Attitudes: a Citizens' Perspective of Expert Decision-Making. West Eur. Polit. 40 (2), 430–458. doi:10.1080/01402382.2016.1242046
Besley, J. C., Lee, N. M., and Pressgrove, G. (2021). Reassessing the Variables Used to Measure Public Perceptions of Scientists. Sci. Commun. 43 (1), 3–32. doi:10.1177/1075547020949547
Besley, J. C., McCright, A. M., Zahry, N. R., Elliott, K. C., Kaminski, N. E., and Martin, J. D. (2017). Perceived Conflict of Interest in Health Science Partnerships. PLOS ONE 12 (4), e0175643. doi:10.1371/journal.pone.0175643
Besley, J. C. (2018). The National Science Foundation's Science and Technology Survey and Support for Science Funding, 2006-2014. Public Underst Sci. 27 (1), 94–109. doi:10.1177/0963662516649803
Blöbaum, B. (2021). “Some Thoughts on the Nature of Trust: Concept, Models and Theory,” in Trust and Communication: Findings and Implications of Trust Research. Editor B. Blöbaum (Cham, Switzerland: Springer), 3–28.
Bromme, R., and Goldman, S. R. (2014). The Public's Bounded Understanding of Science. Educ. Psychol. 49 (2), 59–69. doi:10.1080/00461520.2014.921572
Brossard, D. (2013). New Media Landscapes and the Science Information Consumer. Proc. Natl. Acad. Sci. 110 (Suppl. 3), 14096–14101. doi:10.1073/pnas.1212744110
Brossard, D., and Nisbet, M. C. (2007). Deference to Scientific Authority Among a Low Information Public: Understanding U.S. Opinion on Agricultural Biotechnology. Int. J. Public Opin. Res. 19 (1), 24–52. doi:10.1093/ijpor/edl003
Bruine de Bruin, W., and Bostrom, A. (2013). Assessing what to Address in Science Communication. Proc. Natl. Acad. Sci. 110 (Suppl. 3), 14062–14068. doi:10.1073/pnas.1212729110
Castanho Silva, B., Vegetti, F., and Littvay, L. (2017). The Elite Is up to Something: Exploring the Relation between Populism and Belief in Conspiracy Theories. Swiss Polit. Sci. Rev. 23 (4), 423–443. doi:10.1111/spsr.12270
Castell, S., Charlton, A., Clemence, M., Pettigrew, N., Pope, S., Quigley, A., et al. (2014). Public Attitudes to Science 2014: Main Report. Ipsos MORI.
Cummings, L. (2020). “Convincing a Sceptical Public: The Challenge for Public Health,” in Expanding Horizons in Health Communication. Editors B. Watson, and J. Krieger (Singapore: Springer), The Humanities in Asia, Vol. 6, 249–274. doi:10.1007/978-981-15-4389-0_12
Cyr, D., Hassanein, K., Head, M., and Ivanov, A. (2007). The Role of Social Presence in Establishing Loyalty in E-Service Environments. Interacting Comput. 19 (1), 43–56. doi:10.1016/j.intcom.2006.07.010
Dohle, S., Wingen, T., and Schreiber, M. (2020). Acceptance and Adoption of Protective Measures during the COVID-19 Pandemic: The Role of Trust in Politics and Trust in Science. Soc. Psychol. Bull. 15 (4), e4315. doi:10.32872/spb.4315
Druckman, J. N., and McGrath, M. C. (2019). The Evidence for Motivated Reasoning in Climate Change Preference Formation. Nat. Clim Change 9 (2), 111–119. doi:10.1038/s41558-018-0360-1
Dudo, A., Brossard, D., Shanahan, J., Scheufele, D. A., Morgan, M., and Signorielli, N. (2011). Science on Television in the 21st Century. Commun. Res. 38 (6), 754–777. doi:10.1177/0093650210384988
Engdahl, E., and Lidskog, R. (2014). Risk, Communication and Trust: Towards an Emotional Understanding of Trust. Public Underst Sci. 23 (6), 703–717. doi:10.1177/0963662512460953
European Commission (2010). Special Eurobarometer 340: Science and Technology. http://ec.europa.eu/public_opinion/archives/ebs/ebs_340_en.pdf.
European Commission (2021). Special Eurobarometer 516: European Citizens’ Knowledge and Attitudes towards Science and Technology. doi:10.2775/071577
Fasce, A., and Picó, A. (2019). Conceptual Foundations and Validation of the Pseudoscientific Belief Scale. Appl. Cognit Psychol. 33 (4), 617–628. doi:10.1002/acp.3501
Filc, D., and Lebel, U. (2005). The Post-Oslo Israeli Populist Radical Right in Comparative Perspective: Leadership, Voter Characteristics and Political Discourse. Mediterr. Polit. 10 (1), 85–97. doi:10.1080/1362939042000338854
Fiske, S. T., and Dupree, C. (2014). Gaining Trust as Well as Respect in Communicating to Motivated Audiences about Science Topics. Proc. Natl. Acad. Sci. 111 (Suppl. 4), 13593–13597. doi:10.1073/pnas.1317505111
Forchtner, B., Kroneder, A., and Wetzel, D. (2018). Being Skeptical? Exploring Far-Right Climate-Change Communication in Germany. Environ. Commun. 12 (5), 589–604. doi:10.1080/17524032.2018.1470546
Gauchat, G. (2011). The Cultural Authority of Science: Public Trust and Acceptance of Organized Science. Public Underst Sci. 20 (6), 751–770. doi:10.1177/0963662510365246
Geurkink, B., Zaslove, A., Sluiter, R., and Jacobs, K. (2020). Populist Attitudes, Political Trust, and External Political Efficacy: Old Wine in New Bottles. Polit. Stud. 68 (1), 247–267. doi:10.1177/0032321719842768
Gierth, L., and Bromme, R. (2020). Beware of Vested Interests: Epistemic Vigilance Improves Reasoning about Scientific Evidence (For Some People). PLoS ONE 15 (4), e0231387. doi:10.1371/journal.pone.0231387
Hamm, J. A., Smidt, C., and Mayer, R. C. (2019). Understanding the Psychological Nature and Mechanisms of Political Trust. PLOS ONE 14 (5), e0215835. doi:10.1371/journal.pone.0215835
Hardin, R. (2004). Trust and Trustworthiness. New York: Russell Sage Foundation.
Hargittai, E., Füchslin, T., and Schäfer, M. S. (2018). How Do Young Adults Engage with Science and Research on Social Media? Some Preliminary Findings and an Agenda for Future Research. Soc. Media + Soc. 4 (3), 205630511879772. doi:10.1177/2056305118797720
Hendriks, F., and Kienhues, D. (2019). “Science Understanding between Scientific Literacy and Trust: Contributions from Psychological and Educational Research,” in Science Communication. Editors A. Leßmöllmann, M. Dascal, and T. Gloning (Berlin, Boston: De Gruyter Mouton), 29–50. doi:10.1515/9783110255522-002
Hendriks, F., Kienhues, D., and Bromme, R. (2015). Measuring Laypeople's Trust in Experts in a Digital Age: The Muenster Epistemic Trustworthiness Inventory (METI). PLoS One 10 (10), e0139309. doi:10.1371/journal.pone.0139309
Hendriks, F., Kienhues, D., and Bromme, R. (2016). “Trust in Science and the Science of Trust,” in Trust and Communication in a Digitized World. Editor B. Blöbaum (Cham: Springer International Publishing), 239–251. Progress in IS. doi:10.1007/978-3-319-28059-2_8
Hmielowski, J. D., Feldman, L., Myers, T. A., Leiserowitz, A., and Maibach, E. (2014). An Attack on Science? Media Use, Trust in Scientists, and Perceptions of Global Warming. Public Underst Sci. 23 (7), 866–883. doi:10.1177/0963662513480091
Howell, E. L., Wirz, C. D., Scheufele, D. A., Brossard, D., and Xenos, M. A. (2020). Deference and Decision-Making in Science and Society: How Deference to Scientific Authority Goes beyond Confidence in Science and Scientists to Become Authoritarianism. Public Underst Sci. 29, 800–818. doi:10.1177/0963662520962741
Hu, L.-t., and Bentler, P. M. (1999). Cutoff Criteria for Fit Indexes in Covariance Structure Analysis: Conventional Criteria versus New Alternatives. Struct. Equation Model. A Multidisciplinary J. 6 (1), 1–55. doi:10.1080/10705519909540118
Huber, B., Barnidge, M., Gil de Zúñiga, H., and Liu, J. (2019). Fostering Public Trust in Science: The Role of Social Media. Public Underst Sci. 28 (7), 759–777. doi:10.1177/0963662519869097
Kahan, D. M., Braman, D., Slovic, P., Gastil, J., and Cohen, G. (2009). Cultural Cognition of the Risks and Benefits of Nanotechnology. Nat. Nanotech 4 (2), 87–90. doi:10.1038/nnano.2008.341
Kee, H. W., and Knox, R. E. (1970). Conceptual and Methodological Considerations in the Study of Trust and Suspicion. J. Conflict Resolution 14 (3), 357–366. doi:10.1177/002200277001400307
Keren, A. (2018). The Public Understanding of what? Laypersons' Epistemic Needs, the Division of Cognitive Labor, and the Demarcation of Science. Philos. Sci. 85 (5), 781–792. doi:10.1086/699690
Kline, R. B. (1998). Principles and Practice of Structural Equation Modeling. New York: Guilford Press.
Krämer, B., and Klingler, M. (2020). “A Bad Political Climate for Climate Research and Trouble for Gender Studies: Right-Wing Populism as a Challenge to Science Communication,” in Perspectives on Populism and the Media. Editors B. Krämer, and C. Holtz-Bacha (Baden-Baden: Nomos Verlagsgesellschaft mbH & Co. KG), 253–272. doi:10.5771/9783845297392-253
Landrum, A. R., Eaves, B. S., and Shafto, P. (2015). Learning to Trust and Trusting to Learn: A Theoretical Framework. Trends Cogn. Sci. 19 (3), 109–111. doi:10.1016/j.tics.2014.12.007
Leiserowitz, A., Maibach, E., Roser-Renouf, C., and Hmielowski, J. D. (2012). Global Warming’s Six Americas, March 2012 & Nov. 2011. New Haven, CT: Yale Project on Climate Change Communication.
Lewicki, R. J., and Bunker, B. B. (1995). “Trust in Relationships: A Model of Trust Development and Decline,” in Conflict Cooperation and Justice: Essays Inspired by the Work of Morton Deutsch. Editors B. B. Bunker, and J. Z. Rubin (San Francisco: Jossey-Bass), 133–173.
Liu, H., and Priest, S. (2009). Understanding Public Support for Stem Cell Research: Media Communication, Interpersonal Communication and Trust in Key Actors. Public Underst Sci. 18 (6), 704–718. doi:10.1177/0963662508097625
Maier, M., Rothmund, T., Retzbach, A., Otto, L., and Besley, J. C. (2014). Informal Learning through Science Media Usage. Educ. Psychol. 49 (2), 86–103. doi:10.1080/00461520.2014.916215
Mayer, R. C., Davis, J. H., and Schoorman, F. D. (1995). An Integrative Model of Organizational Trust. Acad. Manage. Rev. 20 (3), 709–734. doi:10.2307/258792
Mede, N. G., and Schäfer, M. S. (2021). Science-Related Populism Declining during the COVID-19 Pandemic: A Panel Survey of the Swiss Population before and after the Coronavirus Outbreak. Public Underst Sci. doi:10.1177/09636625211056871
Mede, N. G., and Schäfer, M. S. (2020). Science-Related Populism: Conceptualizing Populist Demands toward Science. Public Underst Sci. 29 (5), 473–491. doi:10.1177/0963662520924259
Mede, N. G., Schäfer, M. S., and Füchslin, T. (2021). The Scipop Scale for Measuring Science-Related Populist Attitudes in Surveys: Development, Test, and Validation. Int. J. Public Opin. Res. 33 (2), 273–293. doi:10.1093/ijpor/edaa026
Metag, J. (2020). What Drives Science Media Use? Predictors of Media Use for Information about Science and Research in Digital Information Environments. Public Underst Sci. 29 (6), 561–578. doi:10.1177/0963662520935062
Miller, J. M., Saunders, K. L., and Farhart, C. E. (2016). Conspiracy Endorsement as Motivated Reasoning: The Moderating Roles of Political Knowledge and Trust. Am. J. Polit. Sci. 60 (4), 824–844. doi:10.1111/ajps.12234
National Science Board (2018). Science and Engineering Indicators 2018. Alexandria, VA: National Science Foundation. NSB-2018-1.
Nisbet, M. C., Scheufele, D. A., Shanahan, J., Moy, P., Brossard, D., Lewenstein, B. V., et al. (2002). Knowledge, Reservations, or Promise. Commun. Res. 29 (5), 584–608. doi:10.1177/009365002236196
Origgi, G. (2004). Is Trust an Epistemological Notion? Episteme 1 (1), 61–72. doi:10.3366/epi.2004.1.1.61
Pardo, R., and Calvo, F. (2002). Attitudes toward Science Among the European Public: A Methodological Analysis. Public Underst Sci. 11 (2), 155–195. doi:10.1088/0963-6625/11/2/305
Priest, S. H., Bonfadelli, H., and Rusanen, M. (2003). The "Trust Gap" Hypothesis: Predicting Support for Biotechnology across National Cultures as a Function of Trust in Actors. Risk Anal. 23 (4), 751–766. doi:10.1111/1539-6924.00353
Prpić, K. (2011). Science, the Public, and Social Elites: How the General Public, Scientists, Top Politicians and Managers Perceive Science. Public Underst Sci. 20 (6), 733–750. doi:10.1177/0963662510366363
Pruysers, S. (2021). A Psychological Predisposition towards Populism? Evidence from Canada. Contemp. Polit. 27 (1), 105–124. doi:10.1080/13569775.2020.1851930
Reif, A., Kneisel, T., Schäfer, M., and Taddicken, M. (2020). Why Are Scientific Experts Perceived as Trustworthy? Emotional Assessment within TV and YouTube Videos. Media Commun. 8 (1), 191–205. doi:10.17645/mac.v8i1.2536
Reif, A., and Guenther, L. (2019). “What Representative Surveys Tell Us about Public (Dis)Trust in Science: A Re-interpretation and Systematization of Survey Items and Open-Ended Questions,” in Annual Conference of the International Communication Association (ICA) (Washington D.C., USA: International Communication Association).
Reif, A. (2021). “Mehr Raum Für Vertrauen? Potenzielle Veränderungen Des Vertrauens in Wissenschaft Durch Partizipative Online-Umgebungen [More Space for Trust? Potential Changes in Trust in Science through Participatory Online Environments],” in Räume Digitaler Kommunikation. Lokalität – Imagination – Virtualisierung. Editors T. Döbler, C. Pentzold, and C. Katzenbach (Köln: Herbert von Halem), 210–243.
Roberts, M. R., Reid, G., Schroeder, M., and Norris, S. P. (2013). Causal or Spurious? The Relationship of Knowledge and Attitudes to Trust in Science and Technology. Public Underst Sci. 22 (5), 624–641. doi:10.1177/0963662511420511
Rosseel, Y. (2012). lavaan: An R Package for Structural Equation Modeling. J. Stat. Soft. 48 (2), 1–93. doi:10.18637/jss.v048.i02
Rousseau, D. M., Sitkin, S. B., Burt, R. S., and Camerer, C. (1998). Not so Different after All: A Cross-Discipline View of Trust. Acad. Manage. Rev. 23 (3), 393–404. doi:10.5465/amr.1998.926617
Rutjens, B. T., Heine, S. J., Sutton, R. M., and van Harreveld, F. (2018). “Attitudes towards Science,” in Advances in Experimental Social Psychology (Amsterdam; Heidelberg: Elsevier), 57, 125–165. doi:10.1016/bs.aesp.2017.08.001
Saarinen, A., Koivula, A., and Keipi, T. (2020). Political Trust, Political Party Preference and Trust in Knowledge-Based Institutions. Int. J. Sociol. Soc. Policy 40 (1/2), 154–168. doi:10.1108/IJSSP-06-2019-0113
Saurette, P., and Gunster, S. (2011). Ears Wide Shut: Epistemological Populism, Argutainment and Canadian Conservative Talk Radio. Can. J. Pol. Sci. 44 (1), 195–218. doi:10.1017/S0008423910001095
Schäfer, M. S. (2016). Mediated Trust in Science: Concept, Measurement and Perspectives for the ‘Science of Science Communication’. JCOM 15 (5), C02–C07. doi:10.22323/2.15050302
Schulz, A., Müller, P., Schemer, C., Wirz, D. S., Wettstein, M., and Wirth, W. (2018). Measuring Populist Attitudes on Three Dimensions. Int. J. Public Opin. Res. 30 (2), 316–326. doi:10.1093/ijpor/edw037
Sperber, D., Clément, F., Heintz, C., Mascaro, O., Mercier, H., Origgi, G., et al. (2010). Epistemic Vigilance. Mind Lang. 25 (4), 359–393. doi:10.1111/j.1468-0017.2010.01394.x
Su, L. Y.-F., Akin, H., Brossard, D., Scheufele, D. A., and Xenos, M. A. (2015). Science News Consumption Patterns and Their Implications for Public Understanding of Science. Journalism Mass Commun. Q. 92 (3), 597–616. doi:10.1177/1077699015586415
Sztompka, P. (2000). Trust: A Sociological Theory. Cambridge, UK; New York, NY: Cambridge University Press.
Trench, B. (2008). “Towards an Analytical Framework of Science Communication Models,” in Communicating Science in Social Contexts: New Models, New Practices. Editors D. Cheng, M. Claessens, T. Gascoigne, J. Metcalfe, B. Schiele, and S. Shi (Dordrecht: Springer Netherlands), 119–135. doi:10.1007/978-1-4020-8598-7_7
von Roten, F. C. (2009). European Attitudes towards Animal Research. Sci. Techn. Soc. 14 (2), 349–364. doi:10.1177/097172180901400207
Wellcome Trust (2019). Wellcome Global Monitor 2018. https://wellcome.ac.uk/reports/wellcome-global-monitor/2018.
Wissenschaft im Dialog (2018). Science Barometer Germany. https://www.wissenschaft-im-dialog.de/en/our-projects/science-barometer/science-barometer-2018/.
Wuttke, A., Schimpf, C., and Schoen, H. (2020). When the Whole Is Greater Than the Sum of its Parts: On the Conceptualization and Measurement of Populist Attitudes and Other Multidimensional Constructs. Am. Polit. Sci. Rev. 114 (2), 356–374. doi:10.1017/S0003055419000807
Xiao, C. (2013). Public Attitudes toward Science and Technology and Concern for the Environment. Environ. Behav. 45 (1), 113–137. doi:10.1177/0013916511414875
Zmerli, S., Newton, K., and Montero, J. R. (2007). “Trust in People, Confidence in Political Institutions, and Satisfaction with Democracy,” in Citizenship and Involvement in European Democracies: A Comparative Analysis. Editors J. W. van Deth, J. R. Montero, and W. Anders (London, UK: Routledge), 35–65.
Keywords: survey, structural equation model (SEM), populist attitudes, trust in science, attitudes towards science
Citation: Wintterlin F, Hendriks F, Mede NG, Bromme R, Metag J and Schäfer MS (2022) Predicting Public Trust in Science: The Role of Basic Orientations Toward Science, Perceived Trustworthiness of Scientists, and Experiences With Science. Front. Commun. 6:822757. doi: 10.3389/fcomm.2021.822757
Received: 26 November 2021; Accepted: 21 December 2021;
Published: 24 January 2022.
Edited by:
Anabela Carvalho, University of Minho, Portugal
Reviewed by:
Joanna K. Huxster, Eckerd College, United States
Douglas Ashwell, Massey University Business School, New Zealand
Copyright © 2022 Wintterlin, Hendriks, Mede, Bromme, Metag and Schäfer. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Florian Wintterlin, florian.wintterlin@uni-muenster.de
†ORCID: Florian Wintterlin, orcid.org/0000-0002-1837-6079; Friederike Hendriks, orcid.org/0000-0003-1023-8103; Niels G. Mede, orcid.org/0000-0001-5707-7568; Rainer Bromme, orcid.org/0000-0002-6123-7269; Julia Metag, orcid.org/0000-0003-4328-6419; Mike S. Schäfer, orcid.org/0000-0002-0847-7503