PERSPECTIVE article

Front. Commun., 02 February 2017
Sec. Science and Environmental Communication

(When) Is Science Reporting Ethical? The Case for Recognizing Shared Epistemic Responsibility in Science Journalism

Carrie Figdor*

  • Philosophy and Interdisciplinary Graduate Program in Neuroscience, University of Iowa, Iowa City, IA, USA

Internal mechanisms that uphold the reliability of published scientific results have failed across many sciences, including some that are major sources of science news. Traditional methods for reporting science in the mass media do not effectively compensate for this unreliability. I argue for a new conceptual framework in which science journalists and scientists form a complex knowledge community, with science news as the interdisciplinary product. This approach motivates forms of collaboration and training that can improve the epistemic reliability of science news.

In a panel discussion of the problem of communicating uncertain scientific results to the public, former New York Times science reporter and editor Philip Boffey remarks:

One of the problems in journalism is to try to find what is really going on, what is accurate and which sources to trust. What makes it slightly easier in the science arena than in others is the mechanisms that are designed to both produce consensus and reduce uncertainty in science. A peer-reviewed journal gives reporters more confidence than an unreviewed source because at least someone who knew something about the subject looked at the paper. (Friedman et al., 1999)

Boffey adds that uncertainty is a smaller problem for science reporters than for reporters covering other types of news “because of the scientific tradition of replicating or refuting studies and findings. It is a professional obligation for researchers to discuss the uncertainty of their findings.”

Boffey’s remarks reveal a traditional science journalist’s trust that the mechanisms of scientific research are generally reliable—that, within allowable limits for error, research is being properly conducted, peer-reviewed journals reliably publish only papers that have met high epistemic standards, and consensus opinion is a reliable indicator of which hypotheses are most highly confirmed by the evidence. Unfortunately, this trust is no longer clearly justified in many fields of scientific research. It is estimated that as many as half the articles in peer-reviewed journals across a wide variety of fields, from biomedical research to social psychology, report results that are probably false (Ioannidis, 2005; Simmons et al., 2011; Ioannidis et al., 2014). If so, then journalists possess good reason to believe that scientists in affected fields have not conducted the inquiry necessary to have evidence for the reported result (Hardwig, 1985). In terms of the ethics of inquiry—the epistemic norms governing when belief is warranted—it may be epistemically unvirtuous for a journalist to believe published results in these fields, and in consequence to report them, at least without a disclaimer to the effect that the entire field is epistemically unreliable.1
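Ioannidis’s (2005) argument can be made concrete with a short calculation of the positive predictive value (PPV) of a statistically significant finding: the probability that the finding is true, given the pre-study odds that a tested hypothesis in the field is true and the power of a typical study. The sketch below follows the structure of that argument; the specific odds and power values are hypothetical, chosen only for illustration.

```python
# Positive predictive value (PPV) of a "significant" published finding,
# in the spirit of Ioannidis (2005). All parameter values are hypothetical.
def ppv(prior_odds, power, alpha=0.05):
    true_hits = prior_odds * power  # true relationships detected as significant
    false_hits = alpha              # null relationships crossing p <= alpha by chance
    return true_hits / (true_hits + false_hits)

# With pre-study odds of 1:4 and 20% power, a significant finding is no
# better than a coin flip; QRPs and bias push the real number lower still.
print(ppv(prior_odds=0.25, power=0.20))  # -> 0.5
```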

This raises a critical issue: when a science journalist cannot take the proper conduct of a science for granted, what does this imply for the epistemically responsible conduct of her job? Epistemic responsibility is already a formal norm of the profession: for example, the U.S. Society of Professional Journalists’ Code of Ethics (Society of Professional Journalists, 2014) states that journalists should “take responsibility for the accuracy of their work” and “verify information before releasing it.” The question is how science journalists can satisfy these epistemic norms when belief that scientists are following their own ethics of inquiry is unwarranted. The issue generalizes to any news based on raw or analyzed data produced by third parties—an increasingly important sector of journalism, and arguably the future of the profession (Nguyen and Lugo-Ocando, 2016).2 Even more generally: what should non-experts do when trust in the experts is known to be misplaced yet continued epistemic dependence on them is unavoidable?

In what follows, a science journalist is anyone who reports regularly on science stories, although my main concern is with those writing about research for mass media—i.e., “big-Journalism” (Borenstein, cited in Brainard, 2009).3 Although a few relevant national differences arise (noted below), the epistemic concerns raised here are general, and science journalists, like scientists, are increasingly internationally networked (Russell, 2009). The category includes full-time science journalists (including dedicated freelancers), reporters who are assigned (or take freelance assignments) on regular occasions to report on science, and some citizen journalists (including scientists presenting their own or others’ research results for big-Journalism outlets—e.g., Iacoboni et al., 2007). Science journalism is understood here narrowly, as comprising stories about research, in contrast to a broad sense in which science is linked to a non-science story, such as a natural disaster, by way of explaining an aspect of that event (Summ and Volpers, 2015).

Epistemic Failures in Science

The Big Three types of research misconduct—fabrication, falsification, and plagiarism—are clear failures to meet science’s epistemic responsibility for results published in peer-reviewed journals. They are condemned by virtually all national policies on research misconduct (Altman, 2006; Resnik et al., 2015; U.S. Federal Policy on Research Misconduct) and are easy to comprehend, criticize, and convey to the lay public. However, they are peripheral to the current epistemic problem.

The sources of current failures are subtle, hard for non-scientists to grasp, and morally ambiguous—hence the general label “questionable research practices,” or QRPs (Gardner et al., 2005; Martinson et al., 2005; John et al., 2011; Fang et al., 2012; Ioannidis et al., 2014).4 Although the “replication crisis” in psychology may be most widely known, it is the tip of the QRP iceberg.5 In some cases, non-replicability (or non-reproducibility) can be traced to QRPs (Begley, 2013). But results that are not replicable may not have involved QRPs; there may be no attempted replications of results in which QRPs may or may not have occurred; and studies that are QRP-free may not be submitted for publication at all if the results are negative or mixed.

The general context is one of failure in the mechanisms for producing and publishing research results reliably. Failures in research include various ways of nudging the results of a study in a desired direction. Failures in publication include practices that produce epistemically damaging distortions in a field’s publication record. Some QRPs are common enough to have labels. “p-value fishing” or “p-value hacking” denotes analyzing data in various ways until one obtains a result that falls under the p ≤ 0.05 threshold for statistical significance, the conventional level at which one can reject the null hypothesis. “Adaptive sampling” involves stopping data collection prematurely once a wanted result is obtained, or continuing to collect data if results are nearly, but not quite, statistically significant; sometimes data are excluded after looking at their impact on statistical significance. Sometimes dependent measures that do not reach statistical significance simply go unreported, in order to eliminate “imperfect” results, which are difficult to publish. The “file drawer effect” arises when studies with negative or non-confirmatory results are never submitted for review, given that peer-reviewed journals prefer high-impact results as well as new studies (non-replications); this yields systematic bias in the pool of papers submitted for review and in the publication record (Easterbrook et al., 1991). There is also a lack of thorough peer review of submissions (Begley, 2013; Alberts et al., 2014).
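To see why such practices are epistemically damaging, consider a minimal simulation of adaptive sampling in which a significance test is re-run after every new observation and data collection stops as soon as p ≤ 0.05. The sketch below is illustrative only; the sample sizes and the number of simulated studies are hypothetical choices, not parameters drawn from any study cited here.

```python
# Sketch: how "peeking" at the data (adaptive sampling) inflates the
# false-positive rate even when the true effect is exactly zero.
# All parameters are hypothetical, chosen for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

def false_positive_rate(n_studies=2000, n_start=10, n_max=50, alpha=0.05):
    hits = 0
    for _ in range(n_studies):
        data = list(rng.normal(0.0, 1.0, n_start))  # the null is true: mean = 0
        while True:
            p = stats.ttest_1samp(data, 0.0).pvalue
            if p <= alpha:          # stop and "publish" at the first significant peek
                hits += 1
                break
            if len(data) >= n_max:  # honest stopping point: report a null result
                break
            data.append(rng.normal(0.0, 1.0))  # otherwise collect one more observation
    return hits / n_studies

# With a fixed sample size, the long-run rate stays near alpha (0.05);
# testing after every added observation pushes it several times higher.
print(false_positive_rate())
```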

These and other technical—to outsiders, mysterious—misbehaviors “present greater threats to the scientific enterprise than those caused by high-profile misconduct cases such as fraud” (Martinson et al., 2005). QRPs may or may not be a slippery slope to fraud in individual cases (Crocker, 2011), but they can make a field’s entire publication record unreliable. John et al. (2011) estimated that the actual prevalence of various adaptive sampling techniques in psychology was 100%, based on high rates (62–72%) of self-admitted cases. Kühberger et al. (2014) found evidence of a cumulative effect of QRPs: in their sample of 1,000 published papers across psychology, about three times as many studies just reached the p ≤ 0.05 threshold as just failed to reach it.
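One strand of the Kühberger et al. evidence can be sketched as a simple count: in a literature free of QRPs and selective publication, reported p-values falling just below the 0.05 threshold should be roughly as common as those falling just above it. The window width below is a hypothetical choice for illustration, not the one used in their paper.

```python
# Sketch of a threshold diagnostic: compare how many reported p-values
# fall just below vs. just above alpha = 0.05. A ratio well above 1
# suggests results are being nudged across the significance threshold.
import numpy as np

def threshold_ratio(p_values, alpha=0.05, width=0.01):
    p = np.asarray(p_values, dtype=float)
    just_below = np.sum((p > alpha - width) & (p <= alpha))  # p in (0.04, 0.05]
    just_above = np.sum((p > alpha) & (p <= alpha + width))  # p in (0.05, 0.06]
    return just_below / max(just_above, 1)                   # avoid dividing by zero

# Made-up example: three p-values just below .05 and one just above
# give a ratio of 3.0, the rough imbalance Kühberger et al. report.
print(threshold_ratio([0.041, 0.049, 0.0499, 0.052, 0.20, 0.74]))
```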

Despite the widespread epistemic damage they can cause, QRPs are much harder to eliminate than the Big Three. A field’s publication record cannot be quarantined via retraction the way individual publications by a single researcher can be. QRPs are also morally, legally, and professionally ambiguous. The Committee on Publication Ethics (2011) Code of Conduct for journal editors lists “ensuring the integrity of the academic record” in its formal code; the associated best-practice recommendation states that “errors, inaccurate or misleading statements must be corrected promptly and with due prominence.” QRPs are not clear cases of any of these. At best, they “lie somewhere on a continuum between scientific fraud, bias, and simple carelessness,” and while many scientists admit to “alteration” or “modification” of data, they do not think this amounts to falsification (Fanelli, 2009). To date, QRPs are treated as “inconsequential” rather than “career-ending” sins (Simmons et al., 2011); “there is no real consequence for investigators or journals” for engaging in them (Begley, 2013).

Note that QRPs are not limited to psychology. Martinson et al.’s (2005) misconduct survey sampled researchers who had received funding from the U.S. National Institutes of Health; Ioannidis et al. (2014) report evidence of QRPs in neuroimaging and preclinical studies, including animal studies; and Fanelli (2009) reports that misconduct was admitted more frequently by medical/pharmacological researchers than by others. However, even if the current crisis were limited to medical research and psychology, these areas are major sources of science news.

The concept of a conflict of interest, in both research and media ethics, is usually defined narrowly in terms of potential personal gain or the avoidance of personal loss. In today’s scientific climate, the primary conflict of interest is between the pursuit of publishability and the pursuit of truth (Nosek et al., 2012). There is an analogous conflict of interest in journalism whenever the ethics of inquiry take a back seat to the imperative of public appeal. This conflict is exacerbated when journalists are unable to satisfy the epistemic norms of their profession, as discussed in the next section.

Epistemic Vulnerability in Journalism

Many journalists take, or frequently have no choice but to take, a stance toward science characteristic of a member of a lay community, which lacks the expertise to assess results and is not involved in the research yielding those results (Grasswick, 2010). Laypersons judge what to believe by judging whom to believe (Anderson, 2011:145). In this case, scientists have specialized knowledge, and the lay stance in science journalism is a response to this knowledge deficit. It is a perspective from which science happens in a separate box (Lief, 2015). It induces a sharp asymmetry in perceived epistemic responsibility between scientists and journalists regarding the production of reliable science news.

The lay stance is institutionalized and reinforced by the practices of assigning reporters without scientific knowledge to science stories or assigning those with a background in one science to report on another. In the U.S., if not in all countries, budget and staff cuts in mass media (Brainard, 2009; Murcott, 2009; Nature, 2009; Russell, 2009) entail that big-Journalism science reporting from the lay stance is the rule, not the exception. Institutions that do not have the public interest at heart can step into the breach (e.g., Göpfert, 2007, in Germany). Although general reporting experience is more highly correlated with reliable science stories than training in a science (Wilson, 2000), it is unlikely that only experienced reporters are being assigned science stories.

There is ample pragmatic justification for these assignment practices. As one British science news editor put it:

If a guy’s got a paper in Nature that’s been subject to peer review I have absolutely no qualms about quoting everything he says in full and being unquestioning. That sounds awful in a way but we’re a high speed operation, you know. (Hansen, 1994)

But the lay stance is not essentially a pragmatic response to time pressure. It is rooted in the historical fact that science reporting originated (in the U.S.) as a cheerleading enterprise that involved translating science for lay audiences in order to win public appreciation for the benefits science provides to society (Lewenstein, 1992; Brainard, 2008; Rensberger, 2009). However, while science’s high socioepistemic status has been tarnished by greater recognition and reporting of the bad consequences of science, journalists remain largely unable to recognize bad science. While science has become more quantitative—adding mathematical and computational modeling to statistics and other traditional mathematical tools and reasoning methods (e.g., Castellano et al., 2009; Baronchelli et al., 2013)—many journalists still cannot do the math (see below). As long as the lay stance persists, it is misleading to characterize this journalistic shift as one from “science lapdog to public watchdog” (Rensberger, quoted in Brainard, 2008).

The epistemic helplessness of the lay stance becomes obviously problematic when we have good reason to be skeptical about general research and publication practices in a field. Engber (2016) notes: “We’d like to think that a published study has more than even odds of being true.” From the lay stance, the reporter is unable to improve upon these odds; the actual reliability of a particular result is not affected by quoting other scientists. The public still may as well toss a coin to determine its credibility. When this unreliability is compounded by the journalistic bias for novelty, it is more likely than ever that the public will be exposed to unreliable science news (Dunwoody, 2008/2014; American Association for the Advancement of Science and Center for Public Engagement with Science and Technology, 2015). Even in a field with a reliable publication record, an individual study flawed by QRPs is a problem for science news.

The obvious response is that since the scientists possess the knowledge, they are responsible for the epistemic integrity of science news. This position is unjustified. Suppose the science journalist sees her epistemic responsibility as limited to not inserting noise into the communication channel between scientists and the public. Without someone to do epistemic triage in an affected field—a function that properly conducted science makes moot—even a noiseless transmission channel will spew reliable and unreliable research alike into the public sphere. Epistemic norms would counsel not covering the affected fields until researchers get their act together. Suppose the journalist sees her professional role as more than being a pipeline between scientists and the public. She must still avoid epistemically misleading framing choices and other mediations. The lay stance makes it a matter of luck or intuition whether her choices are epistemically virtuous ones.

Current norms of objective journalism or the “journalism of verification” (Kovach and Rosenstiel, 2001) sidestep journalistic epistemic responsibility. The objective journalist strives for verifiable facts, accurate reporting of events, impartial reporting and writing, and a detached, impersonal point of view. Practices associated with this norm—modeled on the scientific method (Nelkin, 1987; Schudson, 2001)—include using neutral language (detachment), fairly representing positions and actors (impartiality), and getting both sides of the story (balance) (Schudson, 1978, 2001; Mindich, 1998; Schudson and Anderson, 2009; Vos, 2011). But these practices guide descriptions of people and their words and actions. They presuppose the ability to verify facts and ensure accuracy, which the science journalist reporting from the lay stance does not have. Trust is essential to journalism, but not blind trust. In this case, “trust but verify” becomes “trust, and trust some more.”

When divorced from the ability to verify, these practices can wreak havoc on the epistemic status of science news. When reporters cannot ascertain the actual level of credibility of research, they may be more inclined to frame a science story in ways that raise or lower its appearance of credibility instead (Dixon and Clarke, 2013). They may be more likely to use epistemically damaging metaphors, such as by drawing unjustified analogies to human stereotypes (e.g., Hoffman, 2016). They may use balance (or competing voices) without considering that this practice transforms any degree of uncertainty about the facts into a difference of opinion about the facts. This is epistemically catastrophic on the science beat. We expect scientific opinions to be based on the best available evidence, but uncertainty framed in terms of differences of opinion suggests that all scientific opinions are equally valid and that evidence is secondary or irrelevant. The practice also lowers the credibility of science journalists (Jensen, 2008, for cancer news) and empowers industry to manipulate public opinion and sow doubt (Antilla, 2005, and Wilson, 2000, for climate science).

Problems in climate science communication illustrate how standard journalistic practices can transform reliable science into unreliable science news. Nearly unanimous scientific consensus on anthropogenic causes of climate change (if not on specific consequences) enabled environmental reporters to do their jobs much as Boffey did decades ago. But journalistic norms of balance have contributed to public confusion (Boykoff and Boykoff, 2004; Antilla, 2005), while social psychological research has shown the inadequacy of the “deficit model” of science communication, according to which science illiteracy is the reason behind public opposition to scientific findings, new technology, and environmental action (Kahan et al., 2011 in the U.S., Besley and Nisbet, 2011 for a more general view in the U.S. and U.K.). If climate science turned out to be riddled with QRPs, we would be back to the start of this paper.

As with QRPs in science, these practices also have pragmatic justifications. The usual pressures of time and audience appeal have been ramped up into a hypercompetitive reporting atmosphere that incentivizes science “infotainment” (Rehmann, 2013). The quantity of science available on the Internet promotes reliance on press releases from science journals or research institutions and homogeneity in coverage [a loss of “infodiversity” (Granado, 2011: 810)]. Nevertheless, as the editors of Nature (2009) remark, “society needs to see science scrutinized as well as regurgitated if it is to give science its trust, and journalists are an important part of that process.” Scientists are asking for effective watchdogs. But short of becoming scientists, how can journalists shoulder their epistemic responsibility?

What is to be Done?

In complex epistemic communities, no one individual has the expertise to verify a result, and epistemic virtue depends on the efforts of the group. Interdisciplinary scientific collaborations are a primary example of such communities. Science journalists and scientists are another, with science news the interdisciplinary product. Each group is highly motivated by practical pressures: the scientific equivalent of “What does this mean for the everyday person?” is “What does this mean for a follow-up study and another grant?” Each group individually can undermine the reliability of science news by engaging in profession-specific QRPs. They need to work together to ensure an epistemically virtuous product. Specific steps might include the following.

“Consensus Conferences” for Guiding Science News

In consensus conferences in biomedical research, stakeholders—including experts and members of the public—meet over several days in a structured setting to review available evidence, identify research gaps, and help guide public policy on specific health problems.6 Consumers of big-Journalism science news deserve the same care, also on specific topics (e.g., climate science, vaccines). Scientist and journalist professional ethics organizations (e.g., Committee on Publication Ethics, Society of Professional Journalists), representatives of niche and mass media (and, in the U.S., from across the political spectrum), science communication professionals (e.g., J-school faculty, Society of Environmental Journalists, National Association of Science Writers), and other stakeholders can similarly develop reviews of research and media coverage to guide reliable science news. They can also create best-practice guidelines on general issues such as control of narrative structure, the advisability of prepublication story review, responsible use of metaphor, framing, headlines, and other rhetorical features, and ways to deal with distinct sources of uncertainty (e.g., lack of clear consensus, a biased publication record in a field, prevalence of QRPs in a field, suspected QRPs in a study).

In this context, scientists and journalists are both experts and must integrate their expertise for better public outcomes. Scientists are open to such collaborative efforts (Illes et al., 2010; American Association for the Advancement of Science and Center for Public Engagement with Science and Technology, 2015; Irion, 2015). One model of this sort of interdisciplinary collaboration might be “negotiation among experts” (Rossini and Porter, 1979; Andersen and Wagenknecht, 2013) or Thagard’s (1997, 2006) “peer-different” collaborations. In these models, expertise remains divided among group members and the results of subtasks are integrated across boundaries by negotiation. Notably, this process requires “interactional expertise”: enough expertise regarding basic categories and concepts to communicate across disciplines, but not the “contributory expertise” that requires the knowledge and skills to perform experiments or develop theory (Petrie, 1976; Collins, 2007; Ribeiro, 2007; Andersen and Wagenknecht, 2013; Collins and Evans, 2015).

It would also require professional cultural shifts (Reed, 2001). Scientists would have to shift from distrust of journalists and ignorance of what journalists do to an appreciation of journalistic expertise (Utts, 2010; Besley and Nisbet, 2011; McConway, 2016). Journalists would have to overcome fears that collaboration with sources entails a loss of control and power over the news (Davis, 2009) and that understanding science makes a reporter less capable of reporting it (Nelkin, 1987; Wilson, 2000; Dunwoody, 2008/2014; Nguyen and Lugo-Ocando, 2016).

Putting the “Translation” into “Translational Implications”

In the U.S., public science funding agencies often require applicants to explain the public impact of the proposed research, but they do not require collaboration with professional science journalists to communicate these implications to the public. This requirement could become an effective tool for improved science news if publicly funded grants had to include a component in which independent professional journalists join the teams that turn research results into science news. This would be interdisciplinary interaction at the team rather than the profession level.

Innumeracy Should Be the New Illiteracy

Effective collaboration at the profession and team levels requires journalists to be functionally bilingual between “Sciencese” and “Publicese,” that portion of a natural language in which such terms as “ice floe” and “symmetry” are too technical to be included (Schneider, 2008). In Schneider’s (2008) illuminating ethnographic study of a Society of Environmental Journalists’ “science immersion” workshop in climate science, the most highly praised exercise involved reading peer-reviewed science papers together. But the groups read the introduction and discussion sections. In practice, scientists often read just the methods and results sections, because that is where the data are.

Calls for greater numeracy are not new (e.g., Curtin and Maier, 2001), but they have not been widely implemented in education or treated consistently or urgently (Wilson, 2000; Berret and Phillips, 2016; Griffin and Dunwoody, 2016; Nguyen and Lugo-Ocando, 2016; for basic coding, see Doherty, 2012). There are multiple reasons for the slow change. For example, if humanities pipelines into journalism majors and J-schools are filled with math-averse students, leading to fears of low enrollments (Hewett, 2016), the solution may be recruiting students from science, technology, engineering, and mathematics fields as double majors or graduate students.

Perhaps the epistemic vulnerability revealed by the problems in science will be the tipping point for greater numeracy. Prior motivations include avoiding manipulation by interest groups, promoting journalistic autonomy, giving professionals an edge over citizen journalists, and empowering them to think more creatively about science news. Numeracy would enable journalists to shift from the blind trust characteristic of the lay stance to a position of justified credence (Hardwig, 1991; Goldman, 2001).

Author Contributions

The author confirms being the sole contributor of this work and approved it for publication.

Conflict of Interest Statement

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Footnotes

  1. ^The ethics of belief concerns the conditions under which we should believe something—for example, only when we have sufficient evidence, or else when evidence is not sufficient but it is prudential to believe (Chignell, 2016 provides an overview).
  2. ^While I focus on numerically based research, the problem generalizes to subdisciplines that focus on sociocultural concerns and contexts, insofar as this research too can be done questionably but without clearly violating established standards of research misconduct (discussed below). See Bender et al. (2010) and Rose and Abi-Rached (2013) for recent attempts to encourage collaboration between different research traditions (in anthropology and in the mind sciences, respectively).
  3. ^Most Americans get science news from mainstream print and broadcast media (Dunwoody, 2008/2014; American Association for the Advancement of Science, 2015). Many science journalists in other countries also work for mass media outlets (Russell, 2009), and it is likely their audiences mainly get their science news from these outlets too.
  4. ^Neuroskeptic (2010) put falsification in the ninth circle of “scientific hell”; the other eight circles included mainly QRPs. One commenter responded: “I miss a circle for shoddy/sensationalist science journalism. Or do they get a hell all for themselves?”
  5. ^Online discussion includes PsychFileDrawer (Available at: http://www.psychfiledrawer.org/TheFiledrawerProblem.php); PubPeer.com; Retractionwatch.com; Open Science Collaboration (2015); F1000Research.com; Cortex Registered Reports; the Many Labs Replication Project and Reproducibility Project of the Center for Open Science; FlexibleMeasures.com; Badscience.net. Other discussions in print include Fanelli (2010); Lehrer (2010); Schooler (2011, 2014); Nosek et al. (2012); Bohannon (2015); Etz and Vandekerckhove (2016); Flier (2016). See Cumming (2012, 2015) on reforming statistical practices.
  6. ^Consensus conferences are just one of various integration methods (McDonald et al., 2009). They were initiated in Denmark and adopted by various groups internationally (e.g., the U.S. National Institutes of Health, https://consensus.nih.gov; the Americas Health Foundation, http://www.the-ahf.org/consensus-conferences).

References

Alberts, B., Kirschner, M., Tilghman, S., and Varmus, H. (2014). Rescuing US biomedical research from its systemic flaws. Proc. Natl. Acad. Sci. U.S.A. 111, 5773–5777. doi: 10.1073/pnas.1404402111

Altman, L. (2006). For Science’s Gatekeepers, A Credibility Gap. The New York Times Health Section. Available at: http://www.nytimes.com/2006/05/02/health/02docs.html

American Association for the Advancement of Science, Center for Public Engagement with Science and Technology. (2015). Why Is Working with the Media Useful? Available at: http://www.aaas.org/pes/why-working-media-useful

Andersen, H., and Wagenknecht, S. (2013). Epistemic dependence in interdisciplinary groups. Synthese 190, 1881–1898. doi:10.1007/s11229-012-0172-1

Anderson, E. (2011). Democracy, public policy, and lay assessments of scientific testimony. Episteme 8, 144–164. doi:10.3366/epi.2011.0013

Antilla, L. (2005). Climate of Skepticism: US newspaper coverage of the science of climate change. Global Environ. Change 15, 338–352. doi:10.1016/j.gloenvcha.2005.08.003

Baronchelli, A., Ferrer-i-Cancho, R., Pastor-Satorras, R., Chater, N., and Christiansen, M. (2013). Networks in cognitive science. Trends Cogn. Sci. 17, 348–360. doi:10.1016/j.tics.2013.04.010

Begley, C. G. (2013). Reproducibility: six red flags for suspect work. Nature 497, 433–435. doi:10.1038/497433a

Bender, A., Hutchins, E., and Medin, D. (2010). Anthropology in cognitive science. Top. Cogn. Sci. 2, 374–385. doi:10.1111/j.1756-8765.2010.01082.x

Berret, C., and Phillips, C. (2016). A crucial skill that most J-schools aren’t teaching. Columbia J. Rev. Available at: http://www.cjr.org/analysis/data.php

Besley, J., and Nisbet, M. (2011). How scientists view the public, the media and the political process. Public Underst. Sci. 22, 644–659.

Bohannon, J. (2015). Many psychology papers fail replication test. Science 349, 910–911. doi:10.1126/science.349.6251.910

Boykoff, M., and Boykoff, J. (2004). Balance as bias: global warming in the US prestige press. Global Environ. Change 14, 125–136. doi:10.1016/j.gloenvcha.2003.10.001

Brainard, C. (2008). Science journalism: past, present, and futuristic. Columbia J. Rev. Available at: http://www.cjr.org/the_observatory/science_journalism.php

Brainard, C. (2009). Science journalism’s hope and despair: ‘niche’ pubs growing as MSM circles the drain. Columbia J. Rev. Available at: http://www.cjr.org/the_observatory/science_journalisms_hope_and_d.php

Castellano, C., Fortunato, S., and Loreto, V. (2009). Statistical physics of social dynamics. Rev. Mod. Phys. 81, 591–646.

Chignell, A. (2016). “The ethics of belief,” in Stanford Encyclopedia of Philosophy, 2016 Edn, ed. E. Zalta. Available at: http://plato.stanford.edu/entries/ethics-belief/

Collins, H. (2007). A new programme of research? Stud. Hist. Philos. Sci. A 38, 615–620. doi:10.1016/j.shpsa.2007.09.004

Collins, H., and Evans, R. (2015). Expertise revisited, part I – interactional expertise. Stud. Hist. Philos. Sci. A 54, 113–123. doi:10.1016/j.shpsa.2015.07.004

Committee on Publication Ethics. (2011). Code of Conduct and Best Practice Guidelines for Journal Editors. Available at: http://publicationethics.org/files/Code%20of%20Conduct_2.pdf

Crocker, J. (2011). The road to fraud starts with a single step. Nature 479, 151. doi:10.1038/479151a

Cumming, G. (2012). Understanding the New Statistics: Effect Sizes, Confidence Intervals, and Meta-analysis. New York: Routledge.

Cumming, G. (2015). The new statistics: estimation and research integrity. Video workshop posted at the Association for Psychological Science website. Available at: http://www.psychologicalscience.org/index.php/members/new-statistics

Curtin, P., and Maier, S. (2001). Numbers in the newsroom: a qualitative examination of a quantitative challenge. J. Mass Commun. Q. 78, 720–738.

Davis, A. (2009). Journalist-source relations, mediated reflexivity and the politics of politics. Journalism Stud. 10, 204–219. doi:10.1080/14616700802580540

Dixon, G., and Clarke, C. (2013). Heightening uncertainty around certain science: media coverage, false balance, and the autism-vaccine controversy. Sci. Commun. 35, 358–382. doi:10.1177/1075547012458290

Doherty, S. (2012). Will the geeks inherit the newsroom? Reflections on why journalists should learn computer science. Int. J. Technol. Knowl. Soc. 8, 111–121. doi:10.18848/1832-3669/CGP/v08i02/56259

Dunwoody, S. (2008/2014). “Science journalism: prospects in the digital age,” in Routledge Handbook of Public Communication of Science and Technology, eds M. Bucchi and B. Trench (London: Routledge), 27–39.

Easterbrook, P., Berlin, J., Gopalan, R., and Matthews, D. (1991). Publication bias in clinical research. Lancet 337, 867–872. doi:10.1016/0140-6736(91)90201-Y

Etz, A., and Vandekerckhove, J. (2016). A Bayesian perspective on the reproducibility project in psychology. PLoS ONE 11:e0149794. doi:10.1371/journal.pone.0149794

Fanelli, D. (2009). How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS ONE 4:e5738. doi:10.1371/journal.pone.0005738

Fanelli, D. (2010). “Positive” results increase down the hierarchy of the sciences. PLoS ONE 5:e10068. doi:10.1371/journal.pone.0010068

Fang, F., Steen, R. G., and Casadevall, A. (2012). Misconduct accounts for the majority of retracted scientific publications. Proc. Natl. Acad. Sci. U.S.A. 109, 17028–17033. doi:10.1073/pnas.1212247109

Flier, J. (2016). How to keep bad science from getting into print. Wall St. J. A17. (accessed March 1, 2016).

Friedman, S., Dunwoody, S., and Rogers, C. (eds) (1999). Communicating Uncertainty: Media Coverage of New and Controversial Science. Mahwah, NJ: Lawrence Erlbaum Associates.

Gardner, W., Lidz, C., and Hartwig, K. (2005). Authors’ reports about research integrity problems in clinical trials. Contemp. Clin. Trials 26, 244–251. doi:10.1016/j.cct.2004.11.013

Goldman, A. (2001). Experts: which ones should you trust? Philos. Phenomenol. Res. 63, 85–110. doi:10.1111/j.1933-1592.2001.tb00093.x

Göpfert, W. (2007). “The strength of PR and the weakness of journalism,” in Journalism, Science, and Society, eds M. Bauer and M. Bucchi (New York: Routledge), 215–226.

Granado, A. (2011). Slaves to journals, serfs to the web: the use of the internet in newsgathering among European science journalists. Journalism 12, 794–813. doi:10.1177/1464884911412702

Grasswick, H. (2010). Scientific and Lay Communities: earning epistemic trust through knowledge sharing. Synthese 177, 387–409. doi:10.1007/s11229-010-9789-0

Griffin, R., and Dunwoody, S. (2016). Chair support, faculty entrepreneurship, and the teaching of statistical reasoning to journalism undergraduates in the United States. Journalism 17, 97–118. doi:10.1177/1464884915593247

Hansen, A. (1994). Journalistic practices and science reporting in the British press. Public Underst. Sci. 3, 111–134. doi:10.1088/0963-6625/3/2/001

Hardwig, J. (1985). Epistemic dependence. J. Philos. 82, 335–349. doi:10.2307/2026523

Hardwig, J. (1991). The role of trust in knowledge. J. Philos. 88, 693–708. doi:10.2307/2027007

Hewett, J. (2016). Learning to teach data journalism: innovation, influence, and constraints. Journalism 17, 119–137. doi:10.1177/1464884915612681

Hoffman, J. (2016). Beetle Moms Send a Chemical Signal: ‘Not Tonight, Honey’. The New York Times Science Times. Available at: http://www.nytimes.com/2016/03/23/science/burying-beetle-sex-pheromones.html

Iacoboni, M., Freedman, J., and Kaplan, J. (2007). This is Your Brain on Politics. The New York Times. Opinion Pages, Nov. 11. Available at: http://www.nytimes.com/2007/11/11/opinion/11freedman.html

Illes, J., Moser, M. A., McCormick, J. B., Racine, E., Blakeslee, S., Caplan, A., et al. (2010). Neurotalk: improving the communication of neuroscience research. Nat. Rev. Neurosci. 11, 61–69. doi:10.1038/nrn2773

Ioannidis, J. (2005). Why most published research findings are false. PLoS Med. 2:e124. doi:10.1371/journal.pmed.0020124

Ioannidis, J., Munafo, M., Fusar-Poli, P., Nosek, B., and David, S. (2014). Publication and other reporting biases in cognitive sciences: detection, prevalence, and prevention. Trends Cogn. Sci. 18, 235–241. doi:10.1016/j.tics.2014.02.010

Irion, R. (2015). Science communication: a career where PhDs can make a difference. Mol. Biol. Cell 26, 591–593. doi:10.1091/mbc.E14-03-0813

Jensen, J. (2008). Scientific uncertainty in news coverage of cancer research: effects of hedging on scientists’ and journalists’ credibility. Hum. Commun. Res. 34, 347–369. doi:10.1111/j.1468-2958.2008.00324.x

John, L., Loewenstein, G., and Prelec, D. (2011). Measuring the prevalence of questionable research practices with incentives for truth-telling. Psychol. Sci. 23, 524–532. doi:10.1177/0956797611430953

Kahan, D., Jenkins-Smith, H., and Braman, D. (2011). Cultural cognition of scientific consensus. J. Risk Res. 14, 147–174. doi:10.1080/13669877.2010.511246

Kovach, B., and Rosenstiel, T. (2001). The Elements of Journalism. New York: Three Rivers Press.

Kühberger, A., Fritz, A., and Scherndl, T. (2014). Publication bias in psychology: a diagnosis based on the correlation between effect size and sample size. PLoS ONE 9:e105825. doi:10.1371/journal.pone.0105825

Lehrer, J. (2010). The Truth Wears Off. The New Yorker. Available at: http://www.newyorker.com/magazine/2010/12/13/the-truth-wears-off

Lewenstein, B. (1992). The meaning of “public understanding of science” after World War II. Public Underst. Sci. 1, 45–68. doi:10.1088/0963-6625/1/1/009

Lief, L. (2015). Science, Meet Journalism. You Two Should Talk. Wilson Quarterly. Available at: http://wilsonquarterly.com/quarterly/spring-2015-american-fissures/science-and-innovation-in-changing-newsroom/

Martinson, B., Anderson, M., and de Vries, R. (2005). Scientists behaving badly. Nature 435, 737–738. doi:10.1038/435737a

McConway, K. (2016). Statistics and the media: a statistician’s view. Journalism 17, 49–65. doi:10.1177/1464884915593243

McDonald, D., Bammer, G., and Deane, P. (2009). Research Integration Using Dialogue Methods. Canberra: Australian National University E Press. Available at: http://press-files.anu.edu.au/downloads/press/p60381/pdf/book.pdf?referer=393

Mindich, D. (1998). Just the Facts: How “Objectivity” Came to Define American Journalism. New York, London: New York University Press.

Murcott, T. (2009). Science journalism: toppling the priesthood. Nature 459, 1054–1055. doi:10.1038/4591054a

Nature. (2009). Cheerleader or watchdog? Nature 459, 1033. doi:10.1038/4591033a

Nelkin, D. (1987). The culture of science journalism. Society 24, 17–25. doi:10.1007/BF02695570

Neuroskeptic. (2010). The 9 Circles of Scientific Hell. Available at: http://blogs.discovermagazine.com/neuroskeptic/2010/11/24/the-9-circles-of-scientific-hell/

Nguyen, A., and Lugo-Ocando, J. (2016). The state of data and statistics in journalism and journalism education: issues and debates. Journalism 17, 3–17. doi:10.1177/1464884915593234

Nosek, B., Spies, J., and Motyl, M. (2012). Scientific utopia II: restructuring incentives and practices to promote truth over publishability. Perspect. Psychol. Sci. 7, 615–631. doi:10.1177/1745691612459058

Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science 349, aac4716-1–aac4716-8. doi:10.1126/science.aac4716

Petrie, H. (1976). Do you see what I see? The epistemology of interdisciplinary inquiry. Educ. Res. 5, 9–15. doi:10.3102/0013189X005002009

Reed, R. (2001). (Un-) professional discourse? Journalists and scientists’ stories about science in the media. Journalism 2, 279–298. doi:10.1177/146488490100200310

Rehmann, J. (2013). The need for critical science journalism. Guardian. Available at: http://www.theguardian.com/science/blog/2013/may/16/need-for-critical-science-journalism

Rensberger, R. (2009). Science journalism: too close for comfort. Nature 459, 1055–1056. doi:10.1038/4591055a

Resnik, D., Rasmussen, L., and Kissling, G. (2015). An International Study of research misconduct policies. Account. Res. 22, 249–266. doi:10.1080/08989621.2014.958218

Ribeiro, R. (2007). The role of interactional expertise in interpreting: the case of technology transfer in the steel industry. Stud. Hist. Philos. Sci. A 38, 713–721. doi:10.1016/j.shpsa.2007.09.006

Rose, N., and Abi-Rached, J. (2013). Neuro: The New Brain Sciences and the Management of the Mind. Princeton, NJ: Princeton University Press.

Rossini, F., and Porter, A. (1979). Frameworks for integrating interdisciplinary research. Res. Policy 8, 70–79. doi:10.1016/0048-7333(79)90030-1

Russell, C. (2009). Some optimism for the future of science journalism: and especially for international collaboration. Columbia J. Rev. Available at: http://www.cjr.org/the_observatory/some_optimism_for_the_future_o.php

Schneider, J. (2008). Making space for the “Nuances of Truth”: communication and uncertainty at an environmental journalists’ workshop. Sci. Commun. 32, 171–201. doi:10.1177/1075547009340344

Schooler, J. (2011). Unpublished results hide the decline effect. Nature 470, 437. doi:10.1038/470437a

Schooler, J. (2014). Metascience could rescue the ‘replication crisis’. Nature 515, 9. doi:10.1038/515009a

Schudson, M. (1978). Discovering the News: A Social History of American Newspapers. New York: Basic Books.

Schudson, M. (2001). The objectivity norm in American journalism. Journalism 2, 149–170. doi:10.1177/146488490100200201

Schudson, M., and Anderson, C. (2009). “Objectivity, professionalism, and truth-seeking in journalism,” in The Handbook of Journalism Studies, eds K. Wahl-Jorgensen and T. Hanitzsch (New York: Routledge), 88–101.

Simmons, J., Nelson, L., and Simonsohn, U. (2011). False positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol. Sci. 22, 1359–1366. doi:10.1177/0956797611417632

Society of Professional Journalists. (2014). Code of Ethics. Available at: https://www.spj.org/pdf/ethicscode.pdf; http://www.spj.org/rrr.asp?ref=77&t=ethics

Summ, A., and Volpers, A.-M. (2015). What’s science? Where’s science? Science journalism in German print media. Public Underst. Sci. 25, 775–790. doi:10.1177/0963662515583419

Thagard, P. (1997). Collaborative knowledge. Noûs 31, 242–261. doi:10.1111/0029-4624.00044

Thagard, P. (2006). How to collaborate: procedural knowledge in the cooperative development of science. South. J. Philos. 44, 177–196. doi:10.1111/j.2041-6962.2006.tb00038.x

Utts, J. (2010). “Unintentional Lies in the media: Don’t blame journalists for what we don’t teach,” in Data and Context in Statistics Education: Towards an Evidence-Based Society. Proceedings of the Eighth International Conference on Teaching Statistics (ICOTS8, July, 2010), Ljubljana, Slovenia, ed. C. Reading (Voorburg: International Statistical Institute).

Vos, T. (2011). ‘Homo journalisticus’: journalism education’s role in articulating the objectivity norm. Journalism 13, 435–449. doi:10.1177/1464884911431374

Wilson, K. (2000). Drought, debate, and uncertainty: measuring reporters’ knowledge and ignorance about climate change. Public Underst. Sci. 9, 1–13. doi:10.1088/0963-6625/9/1/301

Keywords: replication crisis, science journalism, reporting uncertainty, mass media ethics, interdisciplinary collaboration, collaborative knowledge, science communication

Citation: Figdor C (2017) (When) Is Science Reporting Ethical? The Case for Recognizing Shared Epistemic Responsibility in Science Journalism. Front. Commun. 2:3. doi: 10.3389/fcomm.2017.00003

Received: 18 October 2016; Accepted: 16 January 2017;
Published: 02 February 2017

Edited by:

Chris Russill, Carleton University, Canada

Reviewed by:

Bridie McGreavy, University of Maine, USA
Bruno Takahashi, Michigan State University, USA
Ashley Rose Kelly, University of Waterloo, Canada

Copyright: © 2017 Figdor. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Carrie Figdor, carrie-figdor@uiowa.edu
