- Department of Psychology, University of Muenster, Muenster, Germany
Scientific debates are, in an epistemological sense, argumentative approaches aimed at reaching the most appropriate conclusion. However, as these debates sometimes involve interpersonal rather than content-driven attacks (e.g., an argument between scientific experts might involve personal dislike), the following question arises: How do such communication behaviors affect people's perception of the argument? In an empirical study, we presented prospective teachers (N = 222) with a newspaper article about two scientific experts controversially discussing the pros and cons of a fictional vocabulary training program. In a between-subjects design with two conditions, the article featured either a neutral or an incivil discourse style. The dependent measures assessed how participants perceived the experts' trustworthiness and how they viewed the practical relevance of the scientific topic at hand. Results revealed that participants who read the neutral-style discourse perceived the two experts as having more expertise, higher integrity, and higher benevolence than participants who read the incivil-style discourse. However, the groups did not differ in their ratings of how beneficial the scientific findings might be in the classroom. Overall, this study shows that discourse style indeed influences the perceived trustworthiness of experts, which can be damaged in heated debates. The study therefore suggests that the scientific community's methodological and social conventions should be addressed in higher education, in this case teacher education, as understanding these conventions is important for soundly evaluating heated scientific debates.
Introduction
In the scientific community, what counts as true scientific knowledge is, in conjunction with other practices, determined through discussions and arguments, namely scientific debates. For example, at the beginning of an empirical research project, researchers develop ideas and collect data; after they have submitted their results to a journal in written form, their ideas are critically discussed by other scientists (Douglas, 2015). During a formal review process, other experts reflect on the results and discuss them with the authors, sometimes implicitly via the journal's editor, sometimes directly via rounds of reviews. Further, discussions of research results take place at conferences as well as on social media (Peters, 2013). A piece of new scientific knowledge is deemed to be true by the scientific community if it survives these discussions (Kitcher, 2001); that is, a group of scientists has formed a consensus. In this sense, scientific debates are an inherent epistemic feature of how knowledge is produced.
Nevertheless, as these debates are carried out by social actors, scientific debates can also be considered social interactions. Therefore, features of social interactions, such as interpersonal attacks or rude behavior, may occur in such debates. In consequence, the controversies arising in scientific debates may be twofold: Beyond the topic-inherent scientific controversy, a scientific debate may also be fueled by interpersonal controversies. Usually, people view the scientific knowledge they deal with as being intimately linked with its social source (Jenkins, 1999), so they might overlook the epistemic reasons for scientific debates. Yet, individuals should be aware that scientific debates (regardless of their civility) are required for achieving scientific truths. This awareness should also entail an understanding that scientists' uncertainty neither implies unreliability (Kienhues et al., 2020) nor represents an excuse not to act on the available evidence (Tversky and Shafir, 1992). Such a nuanced understanding of scientific debates is a cornerstone of individuals' scientific literacy, as it encourages people to value scientific evidence as the most rational approach for answering questions in their personal or professional lives.
Improving people's (professional) decision-making by having them consider the best available scientific knowledge is an important goal for higher education, for example in medical or teacher education. It is particularly crucial for teacher education, because evidence-based teaching and teacher education are not straightforward (Murphy, 2015) and often fall on deaf ears (Zlatkin-Troitschanskaia, 2016; Alexander, 2018). One reason for this is that people devalue the scientific quality of educational knowledge. Educational research, as a social science, is often perceived to provide rather weak and uncertain knowledge (in comparison to the natural sciences, e.g., Hofer, 2000; Lonka et al., 2020), and disciplines contributing to educational research, such as psychology, are perceived "as largely nonscientific and as lacking in scientific rigor" (Lilienfeld, 2012, p. 114). Nevertheless, to ensure that students receive the best education possible, teaching should be evidence based (Bromme et al., 2016; National Research Council, 2001; Slavin, 2002), meaning that teachers have "to identify approaches and practices that work to promote learning and performance" (Alexander, 2018, p. 158). Given this tension, it is important to understand how individuals view controversies about evidence from educational research. Prospective teachers in particular, who deal with educational evidence and the accompanying debates in their studies, need to understand what scientific evidence is and how it evolves, which involves understanding both the epistemic and the interpersonal reasons for scientific debates.
In this study, we aim to investigate how prospective teachers understand scientific debates, especially how their epistemic judgments are influenced by controversies that are intertwined with social interaction.
Scientific Controversies and Debates in Everyday and Professional Life
Controversies are vital for scientific progress, as they cause evidence to be revisited and mistakes to be uncovered (Paletz et al., 2016). In an epistemological sense, opposing viewpoints represent argumentative approaches toward finding reliable knowledge (Lakatos and Musgrave, 1970). Importantly, to be scientifically literate and to participate in a democratic society, individuals must be able to navigate such controversies where there is not yet scientific consensus (Kolstø, 2001). This entails understanding why controversies between scientists occur and that they are essential for achieving scientific truth. For scientific controversies that are relevant to the public, disagreements among scientists are often publicly accessible. Recent examples of such public disagreements among scientists (and the evolving knowledge that comes along with them) are the discussions about face masks or ibuprofen in the context of the Covid-19 outbreak (Chan and Yuen, 2020; Sodhi and Etminan, 2020).
However, general science education often does not prepare the public to handle scientific controversies productively: It seldom highlights science as argumentative in nature, but instead portrays science as the mere accumulation of undebated facts (Osborne, 2010), and disregards the productivity of moments of uncertainty for science understanding (Chen et al., 2019). Thus, to people who see scientific findings as undebatable factual information, the idea that scientists use consensus to create scientific knowledge might seem underhanded or manipulative. Consequently, attempts to cast doubt on science might fall on fertile ground; people may be vulnerable to post-truth attempts where partisan actors try to attack and devalue science using the very idea that scientific knowledge is created through scientific consensus (McKee and Diethelm, 2010; Oreskes and Conway, 2010). Thus, if individuals are not able to navigate scientific controversies, they may neither value scientific evidence nor act in accordance with it.
Individuals’ Evaluation of Scientific Debates and Their Protagonists
Reasons for disagreements in science can be multifarious and, thus, may not only refer to epistemic conflicts (e.g., methodological problems in experiments, new and evolving knowledge) but may also involve interpersonal conflicts, especially when scientists are clearly at odds with one another (case in point, the famous disagreement between Leibniz and Newton; Hall, 1980). Such reasons for disagreement may also partly evolve from the communicative goals of scientific debates, which are not always to co-construct consensus but sometimes to convince someone or to win a debate (Leitão, 2000; Fisher et al., 2018).
We have mentioned above that scientific debates can also be considered social interactions, which differ in their civility (Rowe, 2015). Incivility serves as an umbrella term covering rudeness, aggressiveness, and impoliteness. That is, a scientific debate may take on a personal tone, as scientific experts might dislike each other and be rude to one another. In consequence, someone who is confronted with a scientific debate may not only encounter opposing views but may also witness interpersonal conflicts that result in ad hominem attacks (Carlson, 2017). Various studies show that such ad hominem attacks can influence individuals' evaluation of scientific debates and their protagonists. For example, the civility of an interaction influences (among other things) whether a bystander perceives the protagonists as being rational (Popan et al., 2019). Participants watching a video of a scientific debate evaluated a scientist using an aggressive discourse style as less credible, less competent, less sincere, less benevolent, and less likable than a scientist using a neutral discourse style (König and Jucks, 2019). Further, ad hominem arguments (e.g., questioning a researcher's motives) seem to challenge the perceived credibility of the attacked scientist just as much as arguments targeting the empirical basis of their claims (Barnes et al., 2018; Gierth and Bromme, 2020). That is, incivility in scientific debates can have detrimental effects. Further evidence for such effects comes from research in political science: Participants agreed less with verbally aggressive political speakers and perceived them as less credible than nonaggressive speakers (Nau and Stewart, 2013). In contrast to incivil debates, civil discussions have positive effects, e.g., increasing participants' willingness to vaccinate their future children (Jennings and Russell, 2019). In short, watching an incivil or impolite scientific debate influences how individuals evaluate the content of that debate and its protagonists.
Practical Relevance of Science
Typically, laypeople engage with conflicting scientific arguments in order to reach a solid answer to a specific question, e.g., when googling the side effects of a vaccination (Bromme and Goldman, 2014; Brummernhenrich and Jucks, 2019). Individuals want to reach the most reasonable conclusion. At the same time, one's political orientation and analytical thinking are related to one's agreement with scientific conclusions and factual statements (Lobato and Zimmerman, 2019; Medlin et al., 2019). One strategy people use to reject scientific evidence that does not align with their own beliefs on a topic is to question the perceived potency of the scientific methods that were used to investigate that topic (Munro, 2010). Perceived potency refers to the degree to which science is capable of providing reliable knowledge in response to the problem under consideration, that is, the extent to which science can really address the problem at hand. Inspired by Munro's findings, we assume that discourse style influences this perceived potency of science.
Regarding educational practice, prospective teachers (teacher students) may evaluate science not only in terms of its general potency but also regarding its relevance for their teaching practice (Zeuch and Souvignier, 2015; Merk et al., 2017). Teachers today especially need to be able to evaluate empirical evidence, so it is of practical relevance to examine whether specific discourse styles in a scientific debate differentially influence the perceived practical importance of an educational science issue. Specifically, it would be important to know whether prospective teachers overlook the relevance of certain scientific findings for their future professional practice when these findings are discussed in an incivil manner.
Epistemic Trustworthiness
In our society, knowledge is highly specialized and unevenly distributed, and it is almost impossible for laypersons to directly evaluate such specialized knowledge (Kitcher, 1990; Bromme and Goldman, 2014). Therefore, instead of evaluating the scientific evidence itself, individuals often select the best arguments by assessing whether the person providing the information is a reliable and credible source; that is, individuals might evaluate a science communicator's epistemic trustworthiness (Hendriks and Kienhues, 2019). Such judgments focus on three features: an expert's expertise, integrity, and benevolence (Cummings, 2014; Hendriks et al., 2015). Expertise refers to the extent to which someone is truly knowledgeable and trained in their domain, for example regarding methodological competencies; integrity indicates that an expert adheres to the rules and norms of science; and benevolence suggests that an expert does not pursue personal benefit or aims but focuses on the interests of others. Various studies have revealed that individuals are capable of nuanced trustworthiness judgments. For example, Jensen (2008) showed that individuals' judgments are sensitive to scientists' disclosures of uncertainty: Messages were perceived as more trustworthy when scientists reported study limitations than when they did not. Further, Hendriks et al. (2016) showed that trustworthiness judgments differ depending on whether a scientist self-discloses the limitations of his work or another scientist discloses these limitations. Research by König and Jucks (2019) indicated that an aggressive (vs. neutral) language style negatively affects trustworthiness judgments. Trustworthiness judgments are crucial, as they lead to informed trust; that is, individuals will not trust blindly.
Scientists’ Ethos
When laypeople observe scientific discourse, they might partly judge it based on their assumptions of how scientists should or should not behave. Such idealized behaviors have been described in the sociology and philosophy of science by Merton (1942) and Mitroff (1974). Merton's (1942) norms refer to the ethos of science and capture views of idealized scientific practice; they include, for example, that scientists are motivated only by the pursuit of knowledge and not by personal gain, and that they always work objectively. Mitroff's (1974) counter-norms serve as counterpoints to Merton's norms and describe practices that scientists ideally should not engage in, such as competing with others for recognition of achievements. These are obviously ideal norms and not descriptions of the actual motives and behaviors of scientists. Nevertheless, as norms they might have constraining effects on scientists; for example, Kardash and Edwards (2012) showed that endorsement of such norms differs between scientific faculty and undergraduates, with scientific faculty advocating Merton's norms more strongly than undergraduates did. The explication of such norms and counter-norms is also helpful for empirically analyzing how laypersons in general, and university students (in our case, teacher students) in particular, think about how scientists should behave.
Present Study
In the present study, we aimed to investigate the everyday situation in which people need to make sense of science-based information they come across in their personal or professional lives. We specifically aimed to study the reception of different styles of discourse in a scientific debate on an educational topic. Therefore, we investigated how discourse style affects prospective teachers' perception of the debate and of the scientists involved in it, as well as how it affects their view of educational science. In a between-subjects design with two conditions, we presented a newspaper article about two educational experts debating a fictitious computer program for vocabulary training. These experts adopted either a neutral or an incivil discourse style. Note that a neutral discourse style refers to communication without any elements of attack or aggression. We labeled the (control) condition neutral rather than civil to make clear that civil discourse, at least in a scientific context, does not require expressions of mutual personal appreciation or esteem.
Our hypotheses derive from the distinction between the epistemic and the social sides of scientific discourse outlined above: While scientific discourse can be conceived as an epistemic endeavor to constitute knowledge, it can also be conceived as an interpersonal conflict in which scientists are at odds with one another because of personal differences. We are interested in whether this interpersonal conflict might mask the fact that discourse is nevertheless necessary for achieving scientific truth.
In consequence, we hypothesized, first, that an incivil discourse style would influence participants' conflict explanations, strengthening the assumption that the conflict stemmed from personal differences between the debaters rather than from reason-based aspects such as methodological differences (H1).
Furthermore, we expected an incivil discourse style to polarize participants’ opinions about the debate topic (as it might be perceived as rather opinion-based than reason-based), hence leading to more extreme opinion ratings (H2) and higher confidence ratings (H3).
Our hypotheses also take into account how discourse style might affect participants’ views on educational science. Regarding the potency of science, we expected an incivil discourse style to make participants think science is less equipped to answer the question of the debate (H4).
We also assumed that participants who read the incivil discourse style would see less practical benefit of science; specifically, we hypothesized that they would find science less useful for their teaching practice (H5).
Concerning the epistemic trustworthiness of the scientists involved in the debate, we hypothesized that participants who read the incivil discourse style would place less epistemic trust in the debaters (H6).
Further, regarding participants’ assumptions about scientific norms, we expected that an incivil discourse style would lead participants to devalue scientific ethos; that is, we thought participants reading the incivil article would rate scientists’ ethos as being aligned more strongly with counter-norms than with norms (H7).
Methods
Participants
An a priori power analysis in G*Power (Faul et al., 2007) for an independent two-tailed t-test with a significance level of α = 0.05 yielded a minimum sample size of N = 210 to detect a medium effect of d = 0.5 (Cohen, 1988) with a power of 0.95. We recruited N = 245 German-speaking teacher students for an online study, which was advertised via Facebook groups for teacher students across Germany and in lectures for teacher students at the University of Münster. A short demographic questionnaire collected information about participants' gender, age, the type of school they planned to teach in after university, the subjects they were currently studying, the university at which they were studying, and how many semesters they had studied so far (summed number of bachelor and master semesters). We excluded participants who were not currently studying to become teachers, who did not confirm at the end of the study that they had answered all questions honestly and attentively (Aust et al., 2013), who completed the study implausibly fast (i.e., 1 SD faster than the mean time it took five trained readers to complete a test run of the study), or who did not keep the survey page in focus throughout the whole session, leaving N = 222 for the final analysis (see Supplementary Tables SA, SB). After completing the survey, participants had the chance to win one of eleven booksellers' vouchers (1 × 50€, 10 × 15€). The study was approved by the ethics commission of the University of Münster.
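The sample size computation can be cross-checked in R; the following is a minimal sketch using the pwr package (our assumption for illustration, as the original analysis was run in G*Power):

```r
# Minimal sketch: a priori power analysis for an independent two-tailed t-test,
# reproducing the G*Power computation with the pwr package (install.packages("pwr") if needed).
library(pwr)

pwr.t.test(d = 0.5,                    # medium effect size (Cohen, 1988)
           sig.level = 0.05,           # alpha
           power = 0.95,               # desired power
           type = "two.sample",
           alternative = "two.sided")
# Yields n of about 105 per group, i.e., a total minimum sample of N = 210.
```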
Materials
The whole study was conducted online via Unipark (Questback GmbH, 2018), and all materials and questionnaires were presented in German.
Debate Scenario
In both conditions, the newspaper article featured the same brief information about the program PAVLOV and the debate on it. The article then described the arguments of two educational scientists (named Dr. Frank Völkel and Dr. Frederick Mische), each taking turns to present their viewpoint and the evidence for it. In both conditions the content and wording were the same, except that in the incivil condition verbs were exchanged and accompanying adverbs were inserted to express that the debaters had an aggressive stance toward each other. For example, the neutral version stated "Völkel replied," while the incivil version read "Völkel retorted aggressively." The words in question were generated based on synonyms and antonyms provided by a dictionary of the German language. In generating the incivil version of the text, it was important that the words clearly described aggression directed toward the other debater (vs. general, undirected negative emotion). We wanted to make clear that the emotional language a debater expressed resulted from the discussion with the other debater and not from events unrelated to the panel discussion. In the final version, both texts were of comparable length and both debaters had an equal share of the discussion. Participants read at their own pace. At the end of the article, participants were informed that they would not be able to read the article again once they began the following questionnaires. Both versions of the newspaper article are provided in the Supplementary Material.
Task Instructions
Via a random generator implemented in the survey, participants were assigned to one of two conditions: either a neutral debate scenario or an incivil debate scenario. In both conditions, participants were instructed to imagine being a teacher at a school that was deciding whether to use a new vocabulary training program (PAVLOV: Programmed Associative Visualization Learning of Vocabularies). Since a school's choice about media use affects every teacher, the whole staff was involved in the decision. Participants were told that the principal had asked them to carefully read a newspaper article on a panel discussion that took place as part of a congress on the educational sciences. During the congress, two educational scientists debated the evidence for and against PAVLOV, and the principal was awaiting each participant's opinion on whether the program should be used in classes. The scenario descriptions included a short introduction to the congress at which the debate took place. The scenario descriptions were identical across conditions, except that in the incivil condition participants were informed that they were about to read a heated debate (neutral version: debate).
Measures
All measures and how they relate to our hypotheses are listed in Table 1.
Conflict Explanation
After reading the article, participants were asked to provide an explanation for the conflict they had just read about. To induce reasoning about the conflict, we first asked the following open question: In your opinion, what are the reasons for the conflict that emerged in the panel discussion? Participants were instructed to provide their perspective in short sentences. The free responses were not analyzed further. Participants then answered four closed items about their explanation for the conflict (cf. H1). For each item, they indicated their agreement on a 7-point Likert-type scale (1 = do not agree at all, 7 = fully agree). The first item stated that the debaters referred to different research findings; the second item stated that there was a personal conflict between the debaters; the third item stated that the debaters referred to different effects of the program; the fourth item stated that the debaters focused on different goals when evaluating the program.
Opinion About PAVLOV
Participants were asked whether the vocabulary training program should be used at the imaginary school (definitely no - definitely yes, opinion rating, cf. H2). In a second item, participants indicated how much confidence they had in their previously stated opinion (not confident at all - very confident, confidence rating, cf. H3). For both items, they provided their answers using a slider ranging from 1 to 100 (numbers were not shown).
Potency of Science
One item adapted from Munro (2010) was used to capture the perceived potency of science (H4): Participants indicated whether they believed that the question about using PAVLOV could ultimately be answered unambiguously with scientific research on a 7-point Likert-type scale (1 = do not agree at all, 7 = fully agree).
Practical Benefit of Science
To assess participants' perceived benefit of the educational sciences in day-to-day teaching (cf. H5), we asked them to complete the subscale Benefit of Science for Professional Practice from the questionnaire on scientific thinking of pre-service teachers by Zeuch and Souvignier (2015). For nine statements, participants indicated their agreement on a 7-point Likert-type scale (1 = do not agree at all, 7 = fully agree), e.g., In the classroom, it would be best if teachers relied on their experiences instead of findings from the educational sciences (reverse scored; original questionnaire in German). The authors report a Cronbach's α of 0.76, item discriminations ranging from 0.33 to 0.52, and a mean score of 4.45 (SD = 0.76). Because participants were asked to refer to the field of educational science, we adapted the original items from Zeuch and Souvignier (2015) by changing "science" to "educational science".
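As an illustration of the scoring, the minimal R sketch below reverses a negatively keyed 7-point item and averages the items into a subscale score; the object and column names are illustrative assumptions, not the actual item labels:

```r
# Minimal scoring sketch for a 7-point subscale with a reverse-keyed item.
# Column names and the choice of reverse-keyed item are illustrative.
responses <- data.frame(item1 = c(6, 2),
                        item2 = c(5, 7),
                        item3 = c(2, 6))   # stands in for the reverse-scored item

responses$item3 <- 8 - responses$item3     # reverse a response on a 1-7 scale
benefit_score  <- rowMeans(responses)      # subscale mean per participant
benefit_score
```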
Muenster Epistemic Trustworthiness Inventory (METI)
Participants judged the debaters' trustworthiness using the Muenster Epistemic Trustworthiness Inventory (METI; cf. H6; Hendriks et al., 2015). They did so by rating 14 word pairs, presented as semantic differentials (e.g., competent vs. incompetent), on a 7-point Likert-type scale. Mean scores were computed for each of the three sub-dimensions: expertise (six items), integrity (four items), and benevolence (four items). Hendriks et al. (2015) report a Cronbach's α of 0.91 for expertise, 0.82 for integrity, and 0.90 for benevolence. Participants were instructed to rate both debaters simultaneously, as we were interested in their overall impression of the debate as a source of information.
Scientists’ Ethos
Additionally, participants filled in a questionnaire reflecting their perception of scientists' ethos (cf. H7). We translated into German the version used by Kardash and Edwards (2012), which is a slight adaptation of the questionnaire proposed by Anderson and Louis (1994). In eight items, participants indicated how well they thought statements about norms and counter-norms described actual scientific practice on a 5-point Likert-type scale (1 = not at all representative, 5 = fully representative), e.g., Scientists are generally motivated by the desire for knowledge and discovery, and not by the possibility of personal gain (norm of disinterestedness). Each norm proposed by Merton (1942) and each counter-norm proposed by Mitroff (1974) was represented by one item, and participants were reminded to refer to the field of educational science. For the original version of the questionnaire, Anderson and Louis (1994) report a moderate reliability of 0.49 for the norm scale and a reliability of 0.64 for the counter-norm scale. This may be because each scale consists of several (counter-)norms, each represented by a single item, so that different constructs are reflected within the same scale, which can lower internal consistency. Unfortunately, no reliability indices are reported by Kardash and Edwards (2012).
Procedure
Participants gave their informed consent and filled in the demographic questionnaire. They were then introduced to either the neutral or the incivil debate scenario description. Participants read the corresponding newspaper article and then expressed their opinion on PAVLOV. Afterward, they answered the items on conflict explanation, the METI, the questionnaire about the practical benefit of science and the questionnaire on scientists’ ethos. In a final item, participants indicated whether they had honestly and attentively answered the items or whether we should discard their data. Lastly, we thanked participants for taking part in the study and debriefed them. If they wished, participants could then follow a link to a separate survey where they could provide their email addresses for the lottery of booksellers’ vouchers.
Data Analysis
We used R (Version 3.6.0; R Core Team, 2018) for all analyses, which were carried out with α = 0.05 as the significance level. To assess whether METI subscale correlations differed significantly between experimental conditions, we compared Fisher z-transformed correlation coefficients, taking the group sample sizes into account, as implemented in the R package cocor (Version 1.1-3; Diedenhofen and Musch, 2015).
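For illustration, the sketch below shows how such a comparison of two correlations from independent groups can be run with cocor; the correlation values and group sizes are placeholders, not the study's data:

```r
# Sketch: comparing a subscale correlation between the neutral and the incivil group.
# All numeric values below are placeholders.
library(cocor)

r_neutral <- 0.70   # e.g., correlation of integrity and expertise, neutral condition
r_incivil <- 0.45   # the same correlation in the incivil condition
n_neutral <- 111
n_incivil <- 111

# Fisher z-based test for two correlations from independent samples.
cocor.indep.groups(r1.jk = r_neutral, r2.hm = r_incivil,
                   n1 = n_neutral, n2 = n_incivil,
                   alternative = "two.sided")
```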
Results
In the following, the results of our statistical analyses are described according to the order of the hypotheses formulated above.
Conflict Explanation
Discourse style did not affect the more objective aspects of conflict explanation (H1): Between conditions, participants did not differ in the degree to which they thought the debaters referred to different research results, to different effects of PAVLOV, or to different goals when using PAVLOV. However, participants reading the incivil panel discussion assumed more strongly than those reading the neutral panel discussion that the conflict was personal (Table 2).
Opinion About PAVLOV
Overall, participants supported using PAVLOV; that is, their mean rating of the program was greater than 50 (M = 62.60, SD = 20.89), t(221) = 8.99, p < 0.001. With regard to hypothesis 2, participants reading the neutral debate (M = 63.04, SD = 20.14) and participants reading the incivil debate (M = 62.20, SD = 21.65) were equally in favor of using the program, t(220) = 0.30, p = 0.766, d = 0.04. Furthermore, regarding hypothesis 3, participants in the neutral condition (M = 72.58, SD = 21.12) and those in the incivil condition (M = 69.48, SD = 23.51) expressed equal confidence in their opinions, t(219.74) = 1.03, p = 0.302, d = 0.14. Having a strong opinion about PAVLOV was associated with more confidence in it, r(220) = 0.40, p < 0.001.
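A minimal sketch of these analyses is shown below; the data frame is simulated and its column names are illustrative assumptions, so the sketch shows the form of the tests rather than the reported values:

```r
# Simulated stand-in data (N = 222; column names are illustrative).
set.seed(1)
dat <- data.frame(
  condition  = rep(c("neutral", "incivil"), each = 111),
  opinion    = pmin(100, pmax(1, round(rnorm(222, mean = 62, sd = 21)))),
  confidence = pmin(100, pmax(1, round(rnorm(222, mean = 71, sd = 22))))
)

t.test(dat$opinion, mu = 50)              # one-sample test against the scale midpoint
t.test(opinion ~ condition, data = dat)   # Welch two-sample test, neutral vs. incivil
cor.test(dat$opinion, dat$confidence)     # association between opinion and confidence
```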
Potency of Science
With regard to hypothesis 4, there was no difference between the neutral (M = 3.09, SD = 1.80) and the incivil condition (M = 2.77, SD = 1.75) regarding the question of whether science is equipped to resolve the conflict about PAVLOV, t(217.66) = 1.37, p = 0.171, d = 0.18.
Practical Benefit of Science
Regarding hypothesis 5, participants who read the incivil debate (M = 4.52, SD = 0.92) did not differ from those who read the neutral debate (M = 4.43, SD = 1.01) in the degree to which they thought scientific findings are beneficial in the classroom, t(214.26) = −0.72, p = 0.470, d = −0.10. In contrast to Zeuch and Souvignier (2015), participants studying STEM subjects (M = 4.48, SD = 0.95) did not perceive science to be more beneficial than participants studying non-STEM subjects (M = 4.48, SD = 0.98), t(219.83) = −0.03, p = 0.975, d = 0.00. In our sample, the scale reached a Cronbach's α of 0.83.
Epistemic Trustworthiness
Regarding hypothesis 6, participants placed more epistemic trust in the debaters when reading a neutral debate: Compared to participants in the incivil condition (M = 4.79, SD = 0.99), participants in the neutral condition (M = 5.06, SD = 1.00) perceived the debaters as having more expertise, t(218.49) = 1.99, p = 0.047, d = 0.27. Furthermore, participants reading the neutral debate (M = 4.76, SD = 1.02) reported higher ratings of the debaters' integrity than those reading the incivil debate (M = 4.05, SD = 1.15), t(219.41) = 4.87, p < 0.001, d = 0.65. Additionally, ratings of benevolence were higher in the neutral condition (M = 4.77, SD = 0.98) than in the incivil condition (M = 4.05, SD = 0.89), t(214.11) = 5.67, p < 0.001, d = 0.76.
In addition, we explored the correlations between the METI subscales and the four conflict explanation items to determine whether the perception of various aspects of a conflict was associated with different degrees of epistemic trust. Those who explained the conflict by stating that the debaters referred to different research results (item 1) also thought them to have more expertise, r(220) = 0.14, p = 0.039. No relation was found with integrity, r(220) = 0.07, p = 0.321, or benevolence, r(220) = 0.03, p = 0.679. Conflict explanations that assumed personal reasons (item 2) were most strongly related to epistemic trust; in particular, the more participants perceived the conflict to be personal, the less expertise they assigned to the debaters, r(220) = −0.25, p < 0.001. Similarly, the perception of a personal conflict was associated with lower ratings of integrity, r(220) = −0.36, p < 0.001, and benevolence, r(220) = −0.41, p < 0.001. The degree to which participants agreed that the debaters referred to different goals of PAVLOV (item 3) did not correlate with any of the METI subscales (expertise: r(220) = 0.10, p = 0.122; integrity: r(220) = −0.00, p = 0.946; benevolence: r(220) = −0.00, p = 0.994). Further, the degree to which participants agreed that the debaters referred to different effects of PAVLOV (item 4) was not associated with epistemic trust either (expertise: r(220) = 0.01, p = 0.863; integrity: r(220) = −0.06, p = 0.348; benevolence: r(220) = −0.05, p = 0.475). Internal consistency of the METI subscales was somewhat lower than initially found by Hendriks et al. (2015), with a Cronbach's α of 0.87 for expertise, 0.83 for integrity, and 0.76 for benevolence.
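Internal consistencies of this kind can be obtained with the psych package; the following sketch uses simulated stand-in data for a six-item subscale, so the resulting value only illustrates the procedure:

```r
# Sketch: Cronbach's alpha for a six-item subscale (simulated stand-in data).
library(psych)

set.seed(2)
latent <- rnorm(222)   # a common factor so the simulated items correlate
expertise_items <- as.data.frame(
  replicate(6, pmin(7, pmax(1, round(4 + 1.2 * latent + rnorm(222)))))
)

psych::alpha(expertise_items)   # reports raw and standardized alpha, among other statistics
```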
Scientists’ Ethos
With regard to hypothesis 7, participants agreed more strongly with the statements that described scientists' ethos in terms of counter-norms (M = 14.50, SD = 2.36) than with those describing norms (M = 12.73, SD = 2.29), t(221) = 7.41, p < 0.001, dz = 0.76; a significant negative relationship existed between participants' agreement with norms and counter-norms, r(220) = −0.17, p = 0.011. Discourse style, however, left the perception of scientists' ethos largely unaffected. The only difference that emerged was that participants who read the neutral debate, compared to those who read the incivil one, more strongly thought that educational scientists follow the norm of organized skepticism (Table 3). For the sets of norms and counter-norms, we found Cronbach's α values of 0.41 and 0.58, respectively.
Further Exploratory Analysis: Correlation of METI Subscales
Given the effect that our manipulation of discourse style had on the debaters' epistemic trustworthiness, we further investigated the METI and its subscales. Specifically, we were interested in the correlations between the different subscales as a function of discourse style, that is, whether an incivil discourse style increases or decreases the associations between different aspects of epistemic trustworthiness. For this purpose, we compared the z-transformed correlation coefficients of the subscales between the neutral and the incivil condition. Descriptively, the correlations between subscales were weaker in the incivil condition. However, only the correlations involving integrity (i.e., integrity and expertise; integrity and benevolence) differed significantly between the two conditions (Table 4).
Discussion
We examined whether the discourse style of a scientific debate affected participants' perception of the conflict, the assumed potency of science, the perceived practical relevance of science, participants' epistemic trust in the debaters, and the perceived ethos of scientists. In the following, we first briefly summarize our findings.
With regard to hypothesis 1 on conflict explanation, participants reading the incivil debate more strongly assumed that the debaters' personal differences caused the conflict, whereas their agreement with epistemic conflict explanations (e.g., that the debaters referred to different research results) was not affected. Hence, discourse style influenced interpersonal but not epistemic conflict explanations, and the personal nature of the conflict did not distract participants from the underlying methodological arguments. In contrast to hypotheses 2 and 3, an incivil discourse style led neither to more extreme opinion ratings nor to higher confidence in one's opinion. Regarding the perceived potency of science, and in contrast to hypothesis 4, discourse style did not influence the extent to which participants perceived science to be equipped to answer the question under debate (but see our further analyses below). Turning to hypothesis 5, discourse style had no effect on participants' willingness to implement findings from educational science in their teaching practice. All in all, participants in our study assigned a notably high value to evidence-based teaching practices.
With regard to hypothesis 6 on epistemic trustworthiness, an incivil discourse style led participants to place less epistemic trust in the debaters. Thus, participants rated experts who kept their temper as a more reliable source of knowledge. The effect was most pronounced for the subscales benevolence and integrity, while only a small effect was detectable for the scientists' expertise.
Concerning hypothesis 7 on scientists' ethos, we found that discourse style largely did not affect participants' perception of scientists' ethos. One exception was the norm of organized skepticism: Participants who read the incivil debate thought that educational scientists fulfill this norm to a lesser degree.
In sum, our findings indicate that only the perception of the scientists, but not the perception of science as such, was affected by the different discourse styles. For a further discussion of these findings, it is especially interesting that the results for hypothesis 6 are in line with the conflict explanation results (H1): An incivil discourse style mainly affected the more social components of trustworthiness as opposed to the comparably technical component of expertise. Indeed, expertise does not seem to be a prerequisite for high ratings of benevolence and integrity: When experts admit flaws, participants perceive them as holding less expertise but ascribe more integrity and benevolence to them (Hendriks et al., 2016). Thus, benevolence and integrity might be the key factors to consider in science communication that aims to increase epistemic trustworthiness.
Further, the explanations participants assumed for the conflict were associated with the epistemic trust they placed in the debaters. That is, when participants perceived the conflict to be interpersonal, they also ascribed less epistemic trustworthiness to the debaters. However, when participants perceived the conflict to be caused by debaters referring to different research findings, they also thought the debaters held more expertise. This could mean that when individuals are aware that conflicting evidence is being discussed, they tend to value the experts' methodological skills. An alternative explanation might be that participants who ascribe more expertise to the debaters are more likely to notice the conflicting research results behind the debate.
In our exploratory analysis, we found that the correlations between the METI subscale "integrity" and the other subscales ("expertise" and "benevolence") were reduced for participants who read the incivil debate. One interpretation is that participants in the incivil condition might have developed a more nuanced view of the debaters' epistemic trustworthiness, rating the different components independently of each other. Ratings of integrity and benevolence were more strongly affected by the debaters' incivil behavior than those of expertise. This supports the idea that epistemic trustworthiness consists of at least partly independent components.
Revisiting the results for hypothesis 7, where discourse style only affected participants' perception of scientific ethos with regard to the norm of organized skepticism, we wish to elaborate further on why reading the incivil debate created the impression that educational scientists fulfill organized skepticism to a lesser degree. Indeed, an incivil debate can be seen as a deviation from the behavior described in the norm: Scientists should consider all evidence, even if that means questioning themselves. In a personal conflict, however, it might appear that they are questioning the other person rather than carefully checking their own perspective. For all other norms and counter-norms, no differences emerged: Even though incivility affected epistemic trust in the debaters, participants did not generalize it to the perception of scientific ethos overall. It is reassuring that a single debate was not sufficient to change participants' perspectives on a whole research community. However, repeated experience with a certain type of discourse style may well modify how people view scientific ethos. An alternative explanation for why the other norms and counter-norms were not affected is that they were not clearly reflected in the newspaper article. For example, the article offered no information about the personal motivations of the debaters (typically manipulated via information about debaters' affiliations), which otherwise could have affected the norm of disinterestedness.
Limitations and Implications
In the following, limitations and implications are outlined focusing on 1) the study design and 2) the setting of teacher education and higher education.
(1) A minimal intervention that leaves the content of a debate untouched is sufficient to make participants perceive a conflict differently. From an applied perspective, this finding can be worrying, because a third party (e.g., a journalist) might influence the perception of a topic via the descriptions of what is being said, even while quoting the experts' statements verbatim. On the other hand, a neutral description of an incivil debate might, in fact, increase the epistemic trust readers place in the experts depicted.
Heated debates in many informal learning settings might affect readers' evaluations differently than in our experimental setting. Merely changing subtle descriptions in a newspaper article is less multi-faceted than the discourse style of a real-life debate. For example, in another media format such as video recordings of a debate (e.g., König and Jucks, 2019), one could additionally alter the tone or volume of the experts' voices. Furthermore, in a real-life incivil debate, it is not only the discourse style that changes; the substance of the arguments and the way they are exchanged are likely to differ as well. In a conflict, debaters tend to give less consideration to the other's arguments and do not address them in their replies (Fisher et al., 2017). A more realistic manipulation might therefore additionally vary what the debaters say, not just how they say it. In such a design, however, it would not be possible to isolate the effect of discourse style.
Another limitation concerns the topic of the debate: The heated debate was situated in an area that is not generally subject to heated debate. Further studies might transfer our experimental manipulation to issues that are under ongoing heated discourse (such as climate change; Hendriks and Jucks, 2020). In particular, value-based evaluations of the content of information might affect how heated debates and their appropriateness are evaluated (Kienhues et al., 2020).
(2) Teacher students have several roles and tasks when interacting with scientific information and heated debates. First, they are users and readers, that is, participants in and recipients of scientific discourse. They directly engage with scientific information, e.g., when reflecting upon the role of digitization in school. Second, they (prospectively) teach and play a pivotal role in conveying how scientific conflicts should be dealt with. Though there is evidence that teachers ignore empirical evidence (such as that on waiting time in teacher-pupil interactions; Borko et al., 1990), teachers teach how scientific information should be used. In this respect, teacher students form a group with specific interests: They are learners in the setting of higher education and are being trained to be teachers in their future jobs. However, focusing on this specific group in an empirical study entails some limitations: Teacher students are more familiar with the topic of education itself than other students in higher education. Hence, our findings might not generalize to scenarios where laypeople are confronted with scientific information in a less academic setting. Furthermore, future studies might expose teachers to a topic less related to school settings, such as a medical debate, and compare their perception of the conflict with that of other laypeople or of experts in the field. In a similar fashion, the impact of a debate in the educational sciences on people without expertise in teaching could be examined. Since results from Kardash and Edwards (2012) and Zeuch and Souvignier (2015) indicate that the perception of science is altered by professional or educational experience, including experienced teachers in the sample would provide further insights.
Furthermore, the setting of teacher education might reduce direct immersion in the topic. Though the study used a heated debate, the role of emotions might be stronger in a setting where readers have direct and strong emotions regarding the topic (e.g., flat earthers). Hence, the findings might be limited to an educational setting like the one used in the experiment. Again, from the perspective of teacher education, it is important to train teacher students in how to address heated debates and how to support their learners in separating emotional language from scientific correctness.
An incivil discourse style can negatively affect the epistemic trust placed in scientific debaters. Yet, epistemic trust in scientists is needed for people to perceive them as a reliable source of knowledge. This means that we should encourage neutral debates, especially when they take place in public. On the other hand, teaching science as debate as part of the science education curriculum could empower students to see past seemingly personal conflicts. Here, teachers are multipliers of their perspective on science. As such, they need to be able to teach their students how to navigate scientific debates, irrespective of discourse style. Hence, scientific controversies need to be evaluated in light of scientific progress as such, and they should also be a part of teacher education. At this point, higher education sets the stage for what is needed in society and in science education: the knowledge and insight needed to cope with scientific information and debates. However, teachers should be prepared to confront the tension between personalized communication, emotional coloring, and scientific standards, and they are expected to resolve this tension both personally and as part of an educational approach.
Data Availability Statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. The dataset supporting the conclusions of this article is available at PsychArchives by the following DOI: http://dx.doi.org/10.23668/psycharchives.4483.
Ethics Statement
The study was reviewed and approved by the ethics commission of the Department of Psychology, University of Muenster. The participants provided their written informed consent to participate in this study.
Author Contributions
Study conception and design, JT, DK, RJ, and RB. Acquisition of data, JT. Analysis of data, JT. Interpretation of data, JT, DK, and RB. Drafting of manuscript: JT, DK, and RB. Critical revision: JT, DK, RJ, and RB. All authors contributed to the article and approved the submitted version.
Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Supplementary Material
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/feduc.2020.572503/full#supplementary-material.
References
Alexander, P. A. (2018). Past as prologue: educational psychology’s legacy and progeny. J. Educ. Psychol. 110 (2), 147–162. doi:10.1037/edu0000200.
Anderson, M. S., and Louis, K. S. (1994). The graduate student experience and subscription to the norms of science. Res. High. Educ. 35 (3), 273–299. doi:10.1007/BF02496825.
Aust, F., Diedenhofen, B., Ullrich, S., and Musch, J. (2013). Seriousness checks are useful to improve data validity in online research. Behav. Res. Methods 45 (2), 527–535. doi:10.3758/s13428-012-0265-2
Barnes, R. M., Johnston, H. M., MacKenzie, N., Tobin, S. J., and Taglang, C. M. (2018). The effect of ad hominem attacks on the evaluation of claims promoted by scientists. PloS One 13 (1), e0192025. doi:10.1371/journal.pone.0192025
Biesta, G. J. J. (2010). Why ‘what works’ still won’t work: from evidence-based education to value-based education. Stud. Philos. Educ. 29 (5), 491–503. doi:10.1007/s11217-010-9191-x
Borah, P. (2012). Does it matter where you read the news story? Interaction of incivility and news frames in the political blogosphere. Commun. Res. 41 (6), 809–827. doi:10.1177/0093650212449353
Borko, H., Livingston, C., and Shavelson, R. J. (1990). Teachers’ thinking about instruction. Remedial Special Educ. 11 (6), 40–49. doi:10.1177/074193259001100609
Bromme, R., and Goldman, S. R. (2014). The public's bounded understanding of science. Educ. Psychol. 49 (2), 59–69. doi:10.1080/00461520.2014.921572
Bromme, R., Prenzel, M., and Jaeger, M. (2016). Empirische Bildungsforschung und evidenzbasierte Bildungspolitik: Zum Zusammenhang von Wissenschaftskommunikation und Evidenzbasierung in der Bildungsforschung. Z. für Erziehungswiss. (ZfE) 19 (1), 129–146. doi:10.1007/s11618-016-0703-5
Brummernhenrich, B., and Jucks, R. (2019). “Get the shot, now!” Disentangling content-related and social cues in physician–patient communication. Health Psychol. Open 6, 2055102919833057. doi:10.1177/2055102919833057
Carlson, E. A. (2017). Scientific feuds, polemics, and ad hominem arguments in basic and special-interest genetics. Mutat. Res. 771, 128–133. doi:10.1016/j.mrrev.2017.01.003
Chan, K. H., and Yuen, K.-Y. (2020). COVID-19 epidemic: disentangling the re-emerging controversy about medical facemasks from an epidemiological perspective. Int. J. Epidemiol. 49, 1063–1066. doi:10.1093/ije/dyaa044
Chen, Y.-C., Benus, M. J., and Hernandez, J. (2019). Managing uncertainty in scientific argumentation. Sci. Educ. 103 (5), 1235–1276. doi:10.1002/sce.21527
Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Lawrence Erlbaum Associates.
Cummings, L. (2014). The “trust” heuristic: arguments from authority in public health. Health Commun. 29 (1), 1043–1056. doi:10.1080/10410236.2013.831685
Diedenhofen, B., and Musch, J. (2015). cocor: a comprehensive solution for the statistical comparison of correlations. PLoS One 10 (4), e0121945. doi:10.1371/journal.pone.0121945
Douglas, H. (2015). Politics and science: untangling values, ideologies, and reasons. Ann. Am. Soc. Political Soc. Sci. 658 (1), 296–306. doi:10.1177/0002716214557237
Faul, F., Erdfelder, E., Lang, A. G., and Buchner, A. (2007). G*Power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav. Res. Methods 39 (2), 175–191. doi:10.3758/bf03193146
Fisher, M., Knobe, J., Strickland, B., and Keil, F. C. (2017). The influence of social interaction on intuitions of objectivity and subjectivity. Cognit. Sci. 41 (4), 1119–1134. doi:10.1111/cogs.12380
Fisher, M., Knobe, J., Strickland, B., and Keil, F. C. (2018). The tribalism of truth. Sci. Am. 318 (2), 50–53. doi:10.1038/scientificamerican0218-50
Gierth, L., and Bromme, R. (2020). Attacking science on social media: how user comments affect perceived trustworthiness and credibility. Publ. Understand. Sci. 29 (2), 230–247. doi:10.1177/0963662519889275
Hall, A. (1980). Philosophers at war: the quarrel between Newton and Leibniz. Cambridge, United Kingdom: Cambridge University Press. doi:10.1017/CBO9780511524066.
Hendriks, F., Kienhues, D., and Bromme, R. (2015). Measuring laypeople's trust in experts in a digital age: the Muenster epistemic trustworthiness inventory (METI). PloS One 10 (10), e0139309. doi:10.1371/journal.pone.0139309
Hendriks, F., and Jucks, R. (2020). Does uncertainty in news articles affect readers' trust and decision-making?. Media Commun. 8, 2. doi:10.17645/mac.v8i2.2824
Hendriks, F., and Kienhues, D. (2019). “Science understanding between scientific literacy and trust: contributions of psychological and educational research,” in Handbooks of communication science. Editors A. Leßmöllmann, M. Dascal, and T. Gloning (Berlin, Germany: de Gruyter), 17, 29–50.
Hendriks, F., Kienhues, D., and Bromme, R. (2016). Disclose your flaws! Admission positively affects the perceived trustworthiness of an expert science blogger. Stud. Commun. Sci. 16 (2), 124–131. doi:10.1016/j.scoms.2016.10.003
Hofer, B. K. (2000). Dimensionality and disciplinary difference in personal epistemology. Contemp. Educ. Psychol. 25 (4), 378–408. doi:10.1006/ceps.1999.1026
Jenkins, E. W. (1999). School science, citizenship and the public understanding of science. Int. J. Sci. Educ. 21 (7), 703–710. doi:10.1080/095006999290363
Jennings, F. J., and Russell, F. M. (2019). Civility, credibility, and health information: the impact of uncivil comments and source credibility on attitudes about vaccines. Publ. Understand. Sci. 28 (4), 417–432. doi:10.1177/0963662519837901
Jensen, J. D. (2008). Scientific uncertainty in news coverage of cancer research: effects of hedging on scientists’ and journalists’ credibility. Hum. Commun. Res. 34 (3), 347–369. doi:10.1111/j.1468-2958.2008.00324.x
Kardash, C. M., and Edwards, O. V. (2012). Thinking and behaving like scientists: perceptions of undergraduate science interns and their faculty mentors. Instr. Sci. 40 (6), 875–899. doi:10.1007/s11251-011-9195-0
Kienhues, D., Jucks, R., and Bromme, R. (2020). Sealing the gateways for post-truthism: reestablishing the epistemic authority of science. Educ. Psychol. 55 (3), 144–154. doi:10.1080/00461520.2020.1784012
Kolstø, S. D. (2001). Scientific literacy for citizenship: tools for dealing with the science dimension of controversial socioscientific issues. Sci. Educ. 85, 291–310. doi:10.1002/sce.1011
König, L., and Jucks, R. (2019). When do information seekers trust scientific information? Insights from recipients’ evaluations of online video lectures. Int. J. Educ. Technol. Higher Edu. 16 (1), 1. doi:10.1186/s41239-019-0132-7
Lakatos, I., and Musgrave, A. (1970). Criticism and the growth of knowledge. Cambridge, United Kingdom: Cambridge University Press.
Leitão, S. (2000). The potential of argument in knowledge building. Hum. Dev. 43 (6), 332–360. doi:10.1159/000022695
Lilienfeld, S. O. (2012). Public skepticism of psychology: why many people perceive the study of human behavior as unscientific. Am. Psychol. 67 (2), 111–129. doi:10.1037/a0023963
Lobato, E. J. C., and Zimmerman, C. (2019). Examining how people reason about controversial scientific topics. Think. Reas. 25 (2), 231–255. doi:10.1080/13546783.2018.1521870
Lonka, K., Ketonen, E., and Vermunt, J. D. (2020). University students' epistemic profiles, conceptions of learning, and academic performance. High. Educ. doi:10.1007/s10734-020-00575-6
McKee, M., and Diethelm, P. (2010). How the growth of denialism undermines public health. BMJ 341, c6950. doi:10.1136/bmj.c6950
Medlin, M. M., Sacco, D. F., and Brown, M. (2019). Political orientation and belief in science in a U.S. college sample. Psychol. Rep. 123 (5), 1688–1702. doi:10.1177/0033294119889583
Merk, S., Rosman, T., Rueß, J., Syring, M., and Schneider, J. (2017). Pre-service teachers' perceived value of general pedagogical knowledge for practice: relations with epistemic beliefs and source beliefs. PloS One 12 (9), e0184971. doi:10.1371/journal.pone.0184971
Mitroff, I. I. (1974). Norms and counter-norms in a select group of the Apollo moon scientists: a case study of the ambivalence of scientists. Am. Socio. Rev. 39 (4), 579–595. doi:10.2307/2094423
Munro, G. D. (2010). The scientific impotence excuse: discounting belief-threatening scientific abstracts. J. Appl. Soc. Psychol. 40 (3), 579–600. doi:10.1111/j.1559-1816.2010.00588.x
Murphy, P. K. (2015). Marking the way: school-based interventions that “work”. Contemp. Educ. Psychol. 40, 1–4. doi:10.1016/j.cedpsych.2014.10.003
Mutz, D. C., and Reeves, B. (2005). The new videomalaise: effects of televised incivility on political trust. Am. Polit. Sci. Rev. 99 (1), 1–15. doi:10.1017/S0003055405051452
Nau, C., and Stewart, C. O. (2013). Effects of verbal aggression and party identification bias on perceptions of political speakers. J. Lang. Soc. Psychol. 33 (5), 526–536. doi:10.1177/0261927X13512486
Oreskes, N., and Conway, E. M. (2010). Merchants of doubt: how a handful of scientists obscured the truth on issues from tobacco smoke to global warming. London, United Kingdom: Bloomsbury Press.
Osborne, J. (2010). Arguing to learn in science: the role of collaborative, critical discourse. Science 328 (5977), 463–466. doi:10.1126/science.1183944
Paletz, S. B. F., Chan, J., and Schunn, C. D. (2016). Uncovering uncertainty through disagreement. Appl. Cognit. Psychol. 30 (3), 387–400. doi:10.1002/acp.3213
Peters, H.-P. (2013). Gap between science and media revisited: scientists as public communicators. Proc. Natl. Acad. Sci. U.S.A. 110 (3), 14102–14109. doi:10.1073/pnas.1212745110
Popan, J. R., Coursey, L., Acosta, J., and Kenworthy, J. (2019). Testing the effects of incivility during internet political discussion on perceptions of rational argument and evaluations of a political outgroup. Comput. Hum. Behav. 96, 123–132. doi:10.1016/j.chb.2019.02.017
Questback GmbH (2018). EFS survey (fall 2018). Cologne, Germany: Questback GmbH. Available at: https://www.unipark.com/.
R Core Team (2018). R: a language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing. Available at: https://www.R-project.org/.
National Research Council (2001). Scientific research in education. Washington, DC: The National Academies Press.
Rowe, I. (2015). Civility 2.0: a comparative analysis of incivility in online political discussion. Inf. Commun. Soc. 18 (2), 121–138. doi:10.1080/1369118X.2014.940365
Slavin, R. E. (2002). Evidence-based education policies: Transforming educational practice and research. Educ. Res. 31 (7), 15–21. doi:10.3102/0013189X031007015
Sodhi, M., and Etminan, M. (2020). Safety of ibuprofen in patients with COVID-19. Chest 158 (1), 55–56. doi:10.1016/j.chest.2020.03.040
Tversky, A., and Shafir, E. (1992). The disjunction effect in choice under uncertainty. Psychol. Sci. 3 (5), 305–310. doi:10.1111/j.1467-9280.1992.tb00678.x
Zeuch, N., and Souvignier, E. (2015). Measurement of scientific thinking of pre-service teachers—development of a new instrument and identification of latent profiles. Beltz Juventa 43 (3), 245–262.
Zlatkin-Troitschanskaia, O. (2016). Evidence-based actions within the multilevel system of schools – requirements, processes, and effects (EviS). J. Educ. Res. Online 8 (3), 5–13. Available at: http://www.j-e-r-o.com/index.php/jero/article/view/701.
Keywords: scientific debate, understanding controversies, epistemic trust, discourse style, scientists' ethos
Citation: Tkotz J, Kienhues D, Jucks R and Bromme R (2021) Keep Calm in Heated Debates: How People Perceive Different Styles of Discourse in a Scientific Debate. Front. Educ. 5:572503. doi: 10.3389/feduc.2020.572503
Received: 14 June 2020; Accepted: 21 December 2020;
Published: 11 February 2021.
Edited by:
Olga Zlatkin-Troitschanskaia, Johannes Gutenberg University Mainz, Germany
Reviewed by:
Susan R. Goldman, University of Illinois at Chicago, United States
Ying-Chih Chen, Arizona State University, United States
Copyright © 2021 Tkotz, Kienhues, Jucks and Bromme. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Dorothe Kienhues, kienhues@uni-muenster.de