
Conceptual Analysis article

Front. Commun., 17 September 2021
Sec. Science and Environmental Communication

The Epistemic Virtues of a Closed Mind: Effective Science Reporting in the Golden Age of the Con

  • 1Department of Philosophy, Florida State University, Tallahassee, FL, United States
  • 2Department of Humanities, Illinois Institute of Technology, Chicago, IL, United States

A financial confidence game (or “con”) aims to separate you from your money. An epistemic con aims to influence social policy by recruiting you to spread doubt and falsehood about well-established claims. You can’t be conned if you close your wallet to financial cons and your mind to epistemic cons. Easier said than done. The epistemic con has two elements. First are magic bullet arguments, which purport to identify the crucial fact that proves some well-established hypothesis is false. Second are appeals to epistemic virtue: You should be fair, consider the evidence, think for yourself. The appeal to epistemic virtue opens your mind to the con; countless magic bullet arguments keep it open. As in most cons, you (the mark or victim) don’t understand the game. You think it’s to find the truth. But really, it’s to see how long the con artist can string you along as his unwitting shill (an accomplice who entices victims to the con). Strategic Reliabilism says that reasoning is rational to the extent it’s accurate, easy to use, and practical (it applies to significant problems). It recommends that we give close-minded deference to settled science, and thus avoid a large class of epistemic cons. Settled science consists of the general consensus of scientific experts. These experts are defined not by their personal characteristics but by their roles within the institutions of science. Close-minded deference is not blind faith or certainty. It is belief that does not waver in the face of objections from other (less reliable) sources. When the epistemic con is on, the journalist faces a dilemma. Report on magic bullet arguments and thereby open people’s minds to the con. Or don’t, and feed the con artist’s narrative that evidence is being suppressed. As always, the journalist’s best response is sunshine: Report on the story of the epistemic con. Show people how it works. The story of the epistemic con has, at its heart, a wicked reveal: Your reaction to the story is itself part of the story, and it tells you whether the true villain of the story lurks within you.

Introduction

We live in the Golden Age of the Con. A con, or confidence game, employs techniques designed to deceive you. But not all cons want your money, at least not right away. Some want your mind. These are “epistemic” cons and they may be more dangerous and insidious than any financial con.

We’re familiar with financial cons. You play 3-card monte, and you might lose $20. You invest in a pyramid scheme, and you might lose your life savings. The goal of an epistemic con is to influence social policy. And it does that by recruiting you to be an unwitting shill (an accomplice who entices other victims to the con). Here’s how it works: Facts emerge that threaten the policy goals of a powerful group. They disseminate plausible but dishonest arguments. Convincing you to doubt or reject the facts is part of the con. But your opinion, by itself, won’t affect social policy. The con artist wins when you try to convince others, in person or via your favorite online medium. As soon as you retweet, share a Facebook post, or try to convince your family at Thanksgiving, you’ve become the con artist’s ideal shill: You have no dishonest intentions, you believe in what you’re peddling, even if what you’re peddling is doubt, and you don’t have to be paid.

This paper is for anyone who rebels at the idea of being a con artist’s sucker. There’s a myth that you can’t be cheated if you’re honest. It’s the title of a 1939 W.C. Fields movie, You Can’t Cheat an Honest Man. And in the 1967 film The Flim Flam Man, Mordecai Jones tells his protégé, “Only cheat the cheaters, boy. You can’t cheat an honest man.” Not only is this folk wisdom false, but it seems designed to benefit con artists. It gives them a false sense of righteousness and you a false sense of security. The only way to avoid falling for the con is to never trust anyone. And that’s no way to live.1 That’s why the first rule of avoiding the con is to admit you’re vulnerable: Know you can be conned. It doesn’t matter how smart or savvy you are. In fact, if you think that people who get scammed are dumber or more gullible than you are, you might stop and ask yourself why it’s called a confidence game. The con artist is looking for marks (victims) who are sure they’re too smart and too shrewd to be played for a fool. So admit that you can be conned because you have to trust people. The first rule of the con does not advocate a hopeless fatalism. Instead, it asks you to think of cons like car accidents. A sense of invulnerability makes you more vulnerable. And some knowledge can improve your chances. If you know that impairment, distraction, and impatience increase the likelihood you’ll wreck your car, this can help you to recognize when caution is in order. It’s the same with confidence games.

Learn to recognize when you’re in danger, and you can make it harder for the con artist to beat you.

To avoid being conned, don’t open your wallet to financial cons and don’t open your mind to epistemic cons. Sounds easy. It’s not. We’re going to recommend con-resistant heuristics, reliable (though not foolproof) rules for closing your mind to the con artist. The heuristics we’ll recommend derive from a basic rule that is easy to accept but deceptively hard to follow: Trust sources with more reliable track records, and don’t trust sources with less reliable track records. We might call this the Facebook Rule–because you’re a sucker if you’re still getting your news on Facebook. That’s not because your Facebook news feed carries stories that are mostly false. It’s because it’s not reliable enough to deserve your trust.

We open the paper by exploring why we live in the Golden Age of the Epistemic Con (The Golden Age of the Epistemic Con). We turn to how confidence games work and how to avoid them, paying particular attention to epistemic cons about settled science (Understanding Confidence Games and Strategic Reliabilism and the Virtues of a Closed Mind). We then address worries about our advice (Worries About the Close-Minded Deference Rule), and we explore why people fall for epistemic cons (Why Do We Fall for Epistemic Cons?). The general lesson is that people who fall for cons are just like you. In fact, they might be you. (If you find this hard to believe, we suggest you re-read the first rule for avoiding the con.) We conclude with some thoughts for the journalist who is trying to report on science in this Golden Age of the Epistemic Con (Effective Reporting in the Golden Age of the Epistemic Con).

The Golden Age of the Epistemic Con

Con artists have always been with us. But three facts about our modern world make this an ideal era for epistemic con artists.

1. Rampant Replication: The epistemic con is powerful because it turns the con artist’s marks into unwitting shills. When we lose money in a financial con, we don’t usually turn around and run the same scam on our friends and family. But when we fall for an epistemic con, we do–but without the con artist’s deceptive intent. A con artist convinces you that the theory of evolution is false, for example, and then you try to convince your friends and family of the same. The epistemic con replicates itself in its victims in a way that financial cons usually don’t.2 By breaking down barriers to the rapid spread of information, technology has made replicating cons easy. Fifty years ago, if you wanted news of the world, you had the first section of the local paper, the 6 o’clock news, and maybe a news weekly. But the con artist no longer has to bring their medicine show to your town or find a way to get a pamphlet into your hands. With wifi and a laptop, you have at your fingertips more news of the world than Walter Cronkite saw in a lifetime. When our sources of information proliferate, so do epistemic cons.

2. The Inevitability of Trust: To be scammed, you have to trust yourself (your ability to tell true from false) and the con artist (to be straight with you). We can’t be everywhere and know everything, and so to learn about the world, we have to trust other people (Hardwig, 1991). This is why epistemic con artists have always plied their trade in journalism. If you want to see con artists hard at work, just look at how American newspapers covered the election of 1800 or the run up to the Spanish-American War. Today epistemic con artists reach millions via radio and cable television. Some even get Presidential Medals of Freedom. When it comes to science, though, there’s an extra layer of trust that presents more opportunities for the con artist. Modern science is massive in scope and intricate in detail. It’s impossible to know it all. Even scientists are forced to trust the work of other scientists in allied subfields. No one comes to their opinions about settled science on their own, by thoroughly exploring and weighing the evidence and then reasoning rationally to a conclusion (Hardwig, 1985). Unless you’re an expert on a particular area of science, you’re not “thinking for yourself” on that topic. You’re trusting the judgments of other people–people who are deeply ensconced in the institutions of science and who employ methods whose reliability you don’t have the expertise to assess on your own.3 Given the extra layer of trust implicit in our judgments about science, it’s not surprising that con artists have always gravitated to peddling scientific bunkum.

3. Changes in how journalism is produced and consumed: The old business model for “slow” news called for hiring troops of reporters on a steady paycheck who could launch elaborate investigative stories. And it’s dying. Today, newsrooms are more dependent on free-lance journalists and seldom have the resources to produce deep dives into important subjects. As a result, journalism is increasingly vulnerable to the lure of “fast” news–the quick read, the flashy anecdote, the salacious scandal. Even White House coverage has adapted to the pressures of the ever-accelerating news cycle, with reporters tweeting out passing impressions. It’s tough to beat the con artist in an environment that prizes “fast” news that’s shallow, flashy, and persuasive. The explosion of news sources has also changed the way we consume news. We choose news sources we think are reliable or we let algorithms attuned to our biases choose them for us. And so we get stuck in “echo chambers.” Being stuck in a highly reliable echo chamber isn’t a bad thing. The problem is, everybody is absolutely positive that their echo chambers are full of honest and reliable sources while the echo chambers of folks who disagree with them are full of dishonest con artists.

The ideal environment for the epistemic con is an unregulated carnival full of overconfident marks who have no choice but to trust somebody. And that describes the news world we face when we turn on the TV or open our computers.

Understanding Confidence Games

In a short con, the goal is to take the money you carry. Lots of short cons are variations on old classics. In the Pigeon Drop, the con artist convinces you to add your money to a larger sum that’s stashed in an envelope. The con artist gives you a different envelope and then drops you, leaving you (the pigeon) with an envelope full of worthless paper. (The opening scene of the 1973 movie The Sting is a pigeon drop.) Another short con is 3-card monte. A dealer shows you a winning card (often a Queen) and two losing cards. The dealer then tosses the cards on a table, one at a time, face down, in a row. Your task is to keep track of the winning card as the dealer moves the three cards around. Picking the Queen seems easy. In fact, it is easy–until there’s money at stake and the dealer cleverly manipulates the cards so you choose a losing card.

To dodge the short con, there’s nothing like having some local knowledge. In London or New York City, avoid 3-card monte. In Beijing, avoid the people who want to practice English with you. Knowing how these and other cons work can help you spot and avoid them. But new cons (usually variants of old standbys) are being created all the time. Even students of the con can’t spot them all. Con artists are just too good. To repeat our car accident analogy: The ability to recognize danger is no guarantee you’ll avoid it, but it helps. If you want a guarantee, we know a Nigerian prince who can get you one, but you have to send him your banking information first.

In a long con, the goal is to take a large amount of money, more than you usually carry on your person. Long cons usually involve a team of con artists and a bogus investment (phony financial investments, sham gambling establishments, fake “inside” information). In the Spanish Prisoner, the mark pays an advance fee in order to receive a much larger payoff. This can be played as a long or short con. (Internet versions of this scam are popular. They include the Nigerian prince scam and lonely hearts scams.) The movie The Sting portrayed a long con known as the wire scam. Bernie Madoff ran a Ponzi Scheme (where early investors are paid directly with money from later investors) that lasted more than 15 years. And in 1925, Victor Lustig “sold” the Eiffel Tower. Twice.

To avoid long cons, do your due diligence before investing. And remember that long cons often depend on the mark’s greed. The people who were taken in by Bernie Madoff and Enron were more than eager to get other-worldly returns on their investments. The next time a long con makes the news, we’ll be reminded yet again that if an investment sounds too good to be true, it probably is.

The epistemic con recruits you to be an unwitting shill, with the ultimate goal of achieving some political or policy objective. The epistemic con artist recruits you using just two techniques. One opens your mind to the con, and the second keeps it open. These techniques are tried and true and they work hand-in-hand, each strengthening the other. To avoid falling for an epistemic con, you have to learn to close your mind to it. So we’ll offer examples of the techniques to help you recognize them. While we’re going to focus on epistemic cons about settled science, our points generalize to epistemic cons about any well-sourced news story.

Technique #1: Magic Bullet Arguments

In 1972, Woodward and Bernstein published a now-famous story in the Washington Post saying that the attorney general controlled a secret slush fund used to investigate political opponents. Their editors demanded confirmation from three independent sources before they’d publish the story (Woodward and Bernstein, 1974). This is a good practice because it’s always possible that a single source might be wrong. Given the controversial nature of the story and that no other news outlet had confirmed it, the likelihood that a single source was wrong was sufficiently high that it would have been irresponsible to run the story. Confirm the story with a second independent source (a source that isn’t relying on the first source for their information), and the chances that both sources are wrong in the same way are much lower. With a third source, independent of the first two, it’s far more likely that the story is true than that all three sources would be confirming the same (false) story.
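To make the arithmetic behind multiple sourcing concrete, here is a minimal sketch (our illustration, not the authors’; the per-source error rates and function name are hypothetical) of how independence drives down the chance that every source is wrong in the same way.

```python
# Minimal sketch: why independent confirmation helps.
# Assumes each source has some standalone chance of being wrong in the same way,
# and that the sources are genuinely independent, so those chances multiply.
# The error rates below are hypothetical, chosen only for illustration.

def chance_all_wrong_together(error_rates):
    """Probability that every independent source is wrong in the same way."""
    p = 1.0
    for rate in error_rates:
        p *= rate
    return p

print(chance_all_wrong_together([0.2]))             # one source:    0.2
print(chance_all_wrong_together([0.2, 0.2]))        # two sources:   0.04
print(chance_all_wrong_together([0.2, 0.2, 0.2]))   # three sources: 0.008
```

The same multiplication is what gives the next point its force: independent lines of confirmation are powerful precisely because their chances of failing together are small.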

The epistemology of settled science works just like the epistemology of solidly sourced news stories. Some scientific ideas get confirmed by a wide array of different and credible scientific subdisciplines. When they do, these scientific ideas deserve our trust. For example, evolutionary hypotheses can be confirmed by the fossil record, biogeography, particle physics, mineralogy, stratigraphy, genetics, anatomy, continental drift, and more. The epistemic power of settled science does not come from one scientist coming to believe a scientific idea. Individual scientists are human, after all. They make mistakes. The power of settled science comes from the fact that it is confirmed by multiple and independent scientific subdisciplines (Trout, 1992; Trout, 1998). Widespread consensus among scientists who are experts in the field–they publish in peer-reviewed journals and are awarded federally funded grants–is a good sign that a theory is multiply and independently confirmed.

What makes science “settled” is not mere agreement. Consensus, after all, can be the product of decades of de facto hegemony, such as a government’s imposition of a (truth-distorting) institutional view. Consider 1940s Lysenkoism in Russia–the corrupt state-sponsored agricultural theory. Following Lamarckian biology–at the time, already widely rejected–Lysenkoism explained changes in plants and crops by a kind of phenotypic struggle modelled after class struggle. Stalinist Russia’s trenchant imposition of this view on scientists and the lay public was distinctly unlike the consensus characteristic of the core commitments of modern biology, chemistry, and physics. And yet, in Stalinist Russia, there was a kind of consensus around Lysenkoism. This consensus took hold in Russia and Eastern Europe, as well as China, from a combination of state-imposed ideology, experts’ fear of demonstrated punishment for adherence to theoretical alternatives, and Russian researchers’ isolation from Western research on genetics, which was denounced in Russia as “liberal pseudoscience.” It enjoyed a consensus, but its utter failure is an example of how an unconceptualized reality–in this case, facts of plant genetics–works against the truth-distorting forces of an errant ideological consensus.4 A con run by the state is still a con.

Truth is a value, of course, but so is representative participation in the fruits of settled science. And there is no necessary conflict here. The products of many settled sciences have not been distributed fairly, and various demographic swaths of the public have been mistreated or ignored in the administration of science and its media coverage (see Pezzullo, 2003).

No con artist is able to challenge in an intellectually honest way the powerful lines of overlapping but independent evidence that support settled science. The attempt to undermine those powerful lines of overlapping and independent evidence would be a massive and ultimately fruitless undertaking. The same evidential virtues of settled science hold for exceptionally well-sourced news stories. The con artist may try to wow you with a magic bullet argument–an argument that undermines the one crucial fact on which the entire edifice of settled theory rests–but you don’t tug on one main thread of modern chemistry without also unravelling much of the other science you will need to embrace if you are going to benefit honestly from chemotherapy, statins, microchips, or your electric car’s battery technology. Put aside the fact that most magic bullet arguments are dishonest, deeply confused, or both. Even when magic bullet arguments make a good point, they can’t succeed because there is no single piece of evidence on which settled science rests that can be “taken out” by a magic bullet. Like a good con, magic bullet arguments tend to be variations on the same themes. Here are some of them.

The Obvious Counterexample: Settled science has an absurd implication. Examples: Global warming implies that we won’t have cold snowy days in winter. Evolution implies that the second law of thermodynamics is false, or that there should be half-human, half-ape fossils.

Cherry Picking: A line of evidence supporting settled science is disputed or proved wrong, or a line of evidence opposing settled science is (or seems) plausible. Examples: 50 years ago, scientists said the Earth was cooling. Antarctica’s sea ice is expanding. Scientists disagree about the pace of evolution or the relative importance of natural selection as a mechanism driving evolution.

The Triviality (or Pooh-Poohing) Move: The settled science “merely” says something trivial. Global warming merely says that the Earth’s temperature changes over time. Evolution merely says that animals change via natural selection in response to their environment.

Biased Scientists: Settled science is widely accepted because scientists are blinkered by a particular ideology. Examples: Scientists believe evolution because they’re atheists, or they believe global warming because they’re socialists.

The Wild Analogy: Settled science is analogous to an absurd view. Examples: Evolution is like a 747 arising out of a scrap yard. Sea levels won’t rise with global warming because when ice melts in a glass, the water level in the glass doesn’t rise.

Once con artists have opened your mind, they keep it open with magic bullet arguments. Lots and lots of magic bullet arguments. If one doesn’t pique your curiosity, maybe another one will. The proliferation of magic bullet arguments against settled science is a common feature of epistemic cons. One website identifies 198 “climate myths” put forward by global warming skeptics (Cook, 2021). And magic bullet arguments aren’t always easy to debunk. They can flummox experts, at least for a while. In the 1970s and ’80s, creationists like Duane Gish debated proponents of evolution, and the scientists sometimes lost those debates. The fact is, it’s not easy to disarm on the fly a perpetual stream of magic bullet arguments. After one debate that received national attention, the biochemist Russell Doolittle is reported to have said, “I’m devastated… This was so important. How am I going to face my wife after making such a fool of myself” (Hilts, 1981).

Technique #2: Appeals to Epistemic Virtue

A steady stream of magic bullet arguments can keep you busy for a long time. But how do con artists open your mind to these arguments? They appeal to your epistemic virtue: You’re fair. You’re open-minded. You’re smart. Think for yourself. Make up your own mind. There’s a difference between epistemic virtue and epistemic vanity. And it’s not always easy to tell the difference. It’s not easy to assess when it’s a virtue to think for yourself and when it’s a sign of overconfidence. And make no mistake: con artists are relying on our inability to tell the difference. They’re relying on our well-established tendency to be overconfident in our abilities (Moore and Healy, 2008).

The appeal to epistemic vanity opens your mind, and the steady stream of magic bullet arguments keeps your mind open. What’s powerful about the con artists’ appeal is that they seem to be asking for something modest, a fair shake: Keep an open mind until you’ve considered all the evidence. But “fairness” in this situation isn’t a virtue. As in any good con, you, the mark, think you understand the game, but you don’t. You think the game is to see if you can honestly figure out the truth. But the real game is to see how long the con artist can string you along. Because while you’re debunking one magic bullet argument, the con artist has lots of new ones to put on your “to do” list. Debunking magic bullet arguments becomes a game of whack-a-mole. You’re losing the real game as long as you doubt, as long as you stoke the controversy, as long as you’re willing to be the con artist’s unwitting stooge. Here are a couple of examples of the epistemic con at work.

• Evolution is false. The con artist’s goal is to influence policy: that evolution should not be taught as settled science in the public schools. The con artist can’t win merely by convincing you that evolution is false or that the status of evolution is up for debate. The con artist wins by turning you into an unwitting shill–by getting you to use their magic bullet arguments on others. When enough people believe either that evolution is false or that its truth is up for debate, it’s difficult for evolution to be taught as settled science in the public schools.

• Vaccines cause autism. The goal of the con artist is to put policies in place that make it easier for children not to be vaccinated (e.g., policies that permit unvaccinated children to attend public schools). The con artist doesn’t win by convincing you that vaccines might be dangerous. The con artist wins if you spread alarm about the relative safety of vaccines. You can spread alarm by arguing that vaccines are unsafe or simply by casting doubt on their safety. Either way, the con artist wins.

In an epistemic con, the mark usually feels no sting. That’s why epistemic cons can, and sometimes do, last a lifetime. The main goal of epistemic cons is to manipulate socially coordinated action. A con artist who supports a policy or politician has two ways to get ahead: increase the number of supporters or decrease the number of opponents. The con artist can reduce opposition by sowing doubt (Oreskes and Conway, 2010). This is a big difference between the mechanics of financial cons and epistemic cons. The financial con artist needs you to believe his claptrap. If you don’t, you don’t reach for your wallet. If you have doubts about whether the 3-card monte game is fair, you probably don’t play. If you have doubts about the legitimacy of Bernie Madoff’s investments, you don’t sign away your life savings. But epistemic con artists win if they manage to sow enough doubt for you not to believe–or act on–the truth. Of course, epistemic con artists prefer your belief to your doubt. But that’s small potatoes. Their big payoff is recruiting you to shill for them, to spread their magic bullet arguments to others, who spread them to others, and so on. By spreading lines of polluted information, the con artist increases the number of their supporters (gets people to believe the hokum), decreases the number of their opponents (gets people to doubt the truth), or both (gets people to switch sides). And that’s how the epistemic con manipulates socially coordinated action.

So how do you avoid falling for a painless epistemic con? There are no guarantees. But your best bet is to give close-minded deference to settled science.

Strategic Reliabilism and the Virtues of a Closed Mind

You’re vulnerable to the epistemic con when you’re not an expert, and the marketplace is full of bad ideas. And when it comes to big controversies (evolution and global warming, for example) everybody agrees that the con is on. The disagreement is about who’s getting conned. What’s a wise layperson to do? We need a good rule we can apply to these situations. And any time you need a good reasoning rule, we think a good place to start is with Strategic Reliabilism.

Strategic Reliabilism is a view about epistemically good and bad reasoning. Explaining the view in detail could take a whole book. In fact, it did (Bishop and Trout, 2005). In a nutshell, Strategic Reliabilism says that a reasoning strategy is rational to the degree it’s reliable (it delivers an accurate representation of the world), easy-to-use, and practical (it applies to significant problems). For example, suppose you have to bet on each of Becky Hammon’s free throws over the course of her WNBA career. (This “forced bet” makes this problem significant. Otherwise, no offense to Becky Hammon, it might be wiser for you to use your limited cognitive resources elsewhere.) Hammon is a 90% free throw shooter. Here are two possible reasoning strategies.

Calibration Rule: Make the frequency of your hit and miss predictions equal to the frequency of Hammon’s hits and misses. So you predict “hit” 90% of the time and “miss” 10% of the time.

Simple Rule: Always predict Hammon will make the next free throw.

Strategic Reliabilism recommends the Simple Rule. The rules are tied on significance (they apply to the same problems and so they apply to equally significant problems). But the Simple Rule beats the Calibration Rule on ease of use and reliability. The Simple Rule gets the right answer at the same rate Hammon hits her free throws, 90%. The Calibration Rule gets the right answer 82% of the time. (Take 100 representative free throws: you’ll correctly predict 90% of the 90 hits, which is 81, and 10% of the 10 misses, which is 1, for 82 correct predictions.) It’s true that the Simple Rule is guaranteed to make mistakes whereas the Calibration Rule gives us a chance not to make any if we’re lucky. Of course, the Calibration Rule also gives us a chance for less than 82% accuracy if we’re unlucky! But going for the possibility of perfection is an illusion, a sucker’s game. Strategic Reliabilism recognizes that our willingness to accept the practical certainty of error is often the price of minimizing overall error (Einhorn, 1986). Giving close-minded deference to settled science will guarantee error. But it’s better than the alternatives.
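For readers who want to check the numbers, here is a minimal simulation sketch (ours, not part of the article; the function names and trial counts are arbitrary) of the two rules applied to a 90% free throw shooter.

```python
import random

def simple_rule_accuracy(p_hit, trials=100_000):
    """Simple Rule: always predict a hit; correct exactly when the shot goes in."""
    hits = sum(random.random() < p_hit for _ in range(trials))
    return hits / trials

def calibration_rule_accuracy(p_hit, trials=100_000):
    """Calibration Rule: predict 'hit' with probability p_hit, independently of the shot."""
    correct = 0
    for _ in range(trials):
        shot = random.random() < p_hit        # did the free throw go in?
        prediction = random.random() < p_hit  # did we predict a hit?
        correct += (shot == prediction)
    return correct / trials

p = 0.90
print(f"Simple Rule:      ~{simple_rule_accuracy(p):.2f}")       # ~0.90
print(f"Calibration Rule: ~{calibration_rule_accuracy(p):.2f}")  # ~0.82 = 0.9*0.9 + 0.1*0.1
```

The expected accuracy of the Calibration Rule is p² + (1 − p)², which, for any shooter better than 50%, is maximized by abandoning calibration and always predicting the more likely outcome.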

The case for giving settled science our close-minded deference comes in three steps.

Step 1: Scientific Judgments as Social Products. Scientific judgments that deserve our trust are not produced by individuals working in isolation. They are produced by individuals working within social institutions that are governed by norms and practices. The individuals come and go. The social institutions in which they work can last a long time.

Step 2: The Track Record of Social Institutions. We can evaluate the institutions that deliver judgments about complex matters in terms of 1) their track record and 2) the reliability of their norms and practices. For well over a century, when the institutions of science have coalesced around a consensus view, those views have been well-supported by the available evidence, and they have offered a reasonably accurate representation of the world. Individuals and institutions that produce ideologically-driven magic bullet arguments against settled science also have a long history. It is a long and unbroken litany of fallacy and confusion.

Step 3: Close-Minded Deference. When there is a controversy about settled science that pits the consensus judgments of science against the judgments of others, the correct epistemic attitude is a close-minded deference to science. Close-minded deference is not blind faith. When you give close-minded deference to (say) the theory of evolution, objections from less reliable sources don’t cause you to doubt it. You accept evolution even if you have no idea how to reply to those objections. Close-minded deference doesn’t mean absolute certainty. At one time, people rightly gave close-minded deference to the view that the continents are fixed. But we don’t any longer. Not because of “magic bullet” arguments that came from outside science or from the fringes of science. But because of powerful evidence-based arguments that arose within the (highly reliable) institutional context of science itself.

The Close-Minded Deference Rule allows no room for doubt about settled science–unless doubt arises within science itself. Does a magic bullet argument strike you as overwhelmingly plausible? Can you see no possible way for the defender of settled science to respond to it? It doesn’t matter. Keep a closed mind and defer. When you have a closed mind about some issue, you are permitted to ignore objections to it, but that doesn’t mean you’re required to ignore objections to it. After all, we’re not ignoring magic bullet arguments. The point is that if you’re going to consider magic bullet arguments, you should consider them with a closed mind. You might think of them as interesting puzzles to solve (like Sudoku) or as material for exploring bad reasoning about science. But they don’t raise doubts about settled science.

Worries About the Close-Minded Deference Rule

Strategic Reliabilism says that the Close-Minded Deference Rule is rational insofar as it applies to significant problems, is reliable, and is easy to use. Given that citizens in a democracy need to make good judgments about scientific questions, the significance condition is met. But a critic might have reservations about the other two conditions.

• The reliability worry: The rule may not be especially reliable given that it leads to false beliefs whenever settled science is false.

• The application worry: The rule may be difficult to apply insofar as it requires that we be able to reliably identify settled science.

Let’s take a look at why, despite these worries, settled science deserves your close-minded deference.

In discussing the scientific method, Paul Feyerabend once famously opined, “anything goes.” As he explains in his Preface to Against Method, “‘anything goes’ is not a ‘principle’ I hold… but the terrified exclamation of a rationalist who takes a closer look at history” (1993, vii). We’ve taken a closer look at history, and we’re not terrified. And it’s not because we’re in denial. Feyerabend is right that if you dig into the nitty-gritty of science, you’ll discover plenty of disagreement, controversy, scandalous methods, and dreadful reasoning. (Of course, those of us who study science from afar are immune from such transgressions. Also, we can give you a very good price on the Eiffel Tower.) These sordid details will give you the vapors only if you’ve embraced a naïve view about how science works. Adopt a more realistic view, and you’ll see that the unsightly grit Feyerabend has uncovered is no call for terrified exclamations. It’s a reason to be confident about science.

The history of human belief is largely a story of myth and fantasy, of intuitive connections and magical relations. The dominant pre-scientific biology posited the existence of vital spirits as the distinctive force that animates living things. Medieval alchemy was an assemblage of mystical symbolism and psychological animism. And ancient medicine was a patchwork of home remedies and supernatural spells. It was carried out by people whose theoretical assumptions shifted with political favor and cultish practice, and who dispensed treatments that often led to suffering and death. Scattered among these superstitious conjectures were inventions that paved the way for more systematic investigations of nature. These included the calculus of Leibniz and Newton, the experimental method and design developed by Boyle in his study of gases and air pumps, and the statistical analyses of chance launched by Bernoulli. These tools, however, are powerless to uncover truths by themselves. They cannot extract insights when conjoined with grotesquely inaccurate views of the world. Computer scientists have an acronym for this idea–GIGO. Garbage In, Garbage Out.

At various points in ancient history, forms of atomism arose. At its simplest, it held that air consists of hard and round particles in constant motion, that physical and chemical interactions should be understood in terms of those particles, and that those particles, in living organisms, make up entirely physical, regenerative cells with properties organized by function. Early atomistic views were seriously mistaken. But when combined with the powerful tools of Leibniz, Newton, Boyle, Bernoulli, and others, those views were close enough to the truth that they managed to serve as a solid foundation for scientific progress (Boyd, 1983; Trout, 2016). This was the very beginning of settled science.

Today science is well launched. When operating efficiently, the institutions of science do two things. They generate promising new ideas, and they select the good ones and discard the (many) less-good ones. It would be difficult to generate diverse ideas if science were to operate solely by the application of pure reason to evidence generated by impeccable methods (Kitcher, 1990). And so there’s grit baked into the system. The institutions of science foster diversity by offering scientists outsized rewards for new and successful ideas. These rewards include grant money, prestige, and sometimes fame. Of course, scientists are driven by more high-minded motivations. But essential to generating new ideas are the less noble incentives, the external trappings, and the selfish motivations (Kitcher, 1990; Strevens, 2003). The sordid details of scientific practice that Feyerabend reckoned would destroy our tender illusions are, in fact, the inevitable price of diversity.

The drive to diversity is not enough. Also critical to science are its brutal winnowing norms and practices. Unfounded enthusiasm gets moderated in the peer review process; unproductive research programs desiccate without the lubrication of grant funding. Coincidental, lucky, or otherwise spurious correlations disappear under diverse tests, revealing their lack of robustness. The good ideas tend to survive, while the less-good ones get smothered by the direct effects of expert scientific review or by the cancelling effects of random error. Consensus views are sometimes legitimately criticized. But these criticisms are never magic bullet arguments. They don’t come from the fringes of science, and they don’t purport to show that settled science has made some whopper that’s evident to the non-expert. They’re criticisms that come from scientists, and they get rigorously evaluated using the norms, standards, and practices of science.

Illegitimate Worries

A common trick that opens your mind to the con is the appeal to mavericks: Today’s mavericks who criticize settled science and past mavericks who successfully toppled it. “People used to think that the Earth was at the center of the universe!” or “They laughed at Galileo, too!” Sometimes the appeals to present-day mavericks are deceptive. They aren’t really experts in the relevant field, they are experts but they abandoned the objection long ago, or they’re objecting to a feature of the consensus opinion rather than the consensus opinion as a whole. (For example, biologists disagree about the pace and mechanisms that drive evolution. Critics sometimes misinterpret these debates as calling into question the fact of evolution.) But sometimes mavericks really do dispute settled science. The point we non-experts need to keep in mind is that science (at least sometimes) gives these mavericks a fair hearing. Otherwise, we wouldn’t be able to point to all the mavericks who succeeded at overturning settled science: Copernicus, Galileo, Newton, Darwin, Einstein, and so on. The fact that mavericks get a fair hearing is part of the reason settled science deserves your close-minded deference. Also important is the fact that for every maverick we remember who turned out to be right, there are many, many more mavericks who turned out to be wrong and whose names we’ve never heard of. Hydroxychloroquine, anyone?5

The Close-Minded Deference Rule relies on science being a social process that consists of norms and practices that have a long and stable history of efficiently selecting good ideas out of stacks of promising but ultimately less-good ideas. This process requires dissent, which means that settled science is seldom marked by universal consensus. The process also requires changes of opinion, which means that some of today’s settled science will be obsolete tomorrow. As we explore worries about giving close-minded deference to settled science, it will be useful to keep in mind that dissent and change are crucial to the effective operation of the scientific enterprise. Only by embracing incoherent myths about science can we suppose that it improves without changing, or that it doesn’t evaluate new ideas but still manages to generate new ideas that are good. Incoherent myths benefit the con artist.

The Reliability Worry

“Settled science” is properly applied to the core commitments of our most successful, diversely tested, and unified theories–the broad commitments and practices found in widely-used textbooks. It is not a term used to apply to cutting-edge research based on new theories that employ intellectually risky techniques. That means that, while there are the inevitable boundary disputes, not all assertions can be transformed into boundary questions. In settled science, there is usually a consensus among experts in the field about the reliability of those core commitments associated with settled science. Controversy about the accuracy of a theory’s basic commitments is a bad sign. At the same time, science itself is much more than settled science. The expanse of science covers the very pioneering fringes of research, often rife with speculative claims. So we distinguish “settled science” from those pioneering fringes.

There is no doubt that communities sometimes force the perception that a science is settled–as Freudian psychodynamic psychoanalysis once did, and as governmental mechanisms did for Lysenkoism in Stalinist Russia. It is equally clear that communities sometimes prematurely treated as settled issues that were at the pioneering fringes of an otherwise reliable theory. That specific portion of belief is less reliable, and so it is not especially surprising that it makes riskier predictions and grounds shakier policies. This still leaves the core commitments–the “settled” part of settled science–intact. So, uncertainty about an issue related to settled science–whether manufactured by ideological groups or not–does not feed doubts about the reliability of its core commitments. Even if there is uncertainty about where COVID-19 emerged, there is no significant scientific controversy about what COVID-19 is and whether vaccines protect against it.

For this reason, Strategic Reliabilism does not recommend a giddy enthusiasm for all things science. It does not entail that you believe everything a scientist says, or license everything a scientist does. Defenders of settled science can reject the idiosyncratic beliefs of particular scientists (especially those beliefs outside of the scientists’ domain of expertise) just as they can consistently reject the policies that scientists and others sometimes falsely believe that settled science implies. After all, many people, scientists included, irresponsibly apply preliminary findings from the uncertain fringes of research to broad policy proposals. Consider an example. The settled science at the foundations of modern medicine, along with the procedures of medical practice, unquestionably increases both the length and quality of human life. The health benefits of antibiotics, vaccines, statins, and other routine medical interventions like appendectomy, cancer treatments, and pharmaceutical cocktails for HIV, are beyond dispute. This nearly monolithic consensus–together with the still merely aspirational hope for equal access to health care typical in democracies–reflects the idea that we all could benefit from modern medical care, and that not benefitting from it, whether by compulsion or choice, harms people. That is why most everyone now holds two beliefs at once: 1) the treatment of men of color in the Tuskegee Syphilis Study was morally deplorable, and 2) all marginalized publics should have equal access to basic health care treatment. The co-existence of these two beliefs shows the general public implicitly distinguishes between the moral failings in the history of medical policy and the overwhelming reliability of medical science. The task for policy professionals is to acknowledge the source of concern in marginalized communities, while reinforcing the proper status of the institutions of science.

This process of reinforcement goes beyond the well-tested foundations of settled science, but policy research over the last 30 years is promising. Many people who are wary about health and environmental policies are not wary of the underlying science itself, but mistrustful that it will be used well or responsibly. Research on policy formation and communication has demonstrated the positive effects of increasing democratic participation for policy, in particular when alternatives to harmful actions can be widely and openly discussed. Mini-publics can be formed, for example, to discuss the nature and desirability of such policy interventions. These “citizen panels” can be assembled from a stratified sample of citizens of every possible and relevant demographic group of stakeholders, thus ensuring that all threatened or otherwise marginalized groups are recognized (for just two examples, see Ackerman and Fishkin, 2004; Goodin and Dryzek, 2006).

Granted, ideology takes hold and the boundary between mistrust of applied policy and mistrust of settled science sometimes leaks. We see this in vaccine skepticism and in sweeping, contested policy proposals that react to global warming. In situations of uncertainty, the precautionary principle is usually recommended: When an activity threatens human health or the environment, even if some cause-and-effect relationships are not fully established scientifically, caution should be exercised (Cox, 2006, p. 326). When these potential harms are at stake, this principle has the burden-shifting effect that it is the proponent of the activity that has to prove the safety of their actions. For example, given what is known about the spread of contagious diseases, the burden falls on anti-vaxxers to prove that choosing to not get an available and effective vaccine will not harm others. As a tool to free the public from the grip of commercial science and its products, emancipatory activism can direct attention to the mirage of scientific infallibility (Craig, 2016; von Essen, 2017). But on other issues, such as vaccination, the history is more mixed, and the global health benefits of a standardly-tested vaccine far outweigh the damage done by risking a paralyzing doubt in a portion of the public who will get sick and infect others. Liberty is always assessed together with costs of that liberty to others. When those costs are hard to determine, the precautionary principle is invoked. Procedures of consensus formation that pander to ignorance are not emancipatory.

Some of today’s settled science will turn out to be false, and so close-minded deference to settled science guarantees error. But Becky Hammon and prediction models teach us that sometimes the best way to minimize error is to guarantee some error. This point, however, doesn’t put the reliability worry to rest. Philosophers disagree about whether science is roughly true (realism) or not (antirealism). One might think that the rational status of the Close-Minded Deference Rule depends on how this contentious debate between realists and antirealists turns out. If settled science is mostly false, then the rule is going to be unreliable. It’s going to make our Facebook feed look incredibly accurate by comparison. So in order to make the case for close-minded deference, don’t we also have to make the case for scientific realism? No, we don’t.

To see why close-minded deference doesn’t require scientific realism, let’s compare Strategic Reliabilism to standard reliabilism about justification. The latter view holds that a belief is justified just in case it’s produced by a belief-forming mechanism that meets some threshold of reliability. “More than 50% truths” is the typical threshold (Goldman, 1979). Strategic Reliabilism commits us to none of this. For one thing, it says nothing about beliefs. Strategic Reliabilism evaluates reasoning strategies. And for another, Strategic Reliabilism rejects the idea of reliability thresholds. A rule that delivers 52% accuracy at predicting Becky Hammon’s free throws would be a terrible rule. But a Sweet 16 Rule for predicting who’ll win an annual 16-team tournament that’s 52% accurate might be a great rule. If the beliefs produced by those two rules count as “epistemically justified” because they produce more than 50% truths, so much the worse for epistemic justification. Strategic Reliabilism isn’t interested in divvying up beliefs into the justified and the unjustified. It’s interested in telling you how you should think about the world: Use the rule that delivers 52% accuracy on the hard (Sweet 16) problem but not on the easy (free throw) problem.6
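To see why Strategic Reliabilism cares about baselines rather than a fixed threshold, here is a minimal sketch (ours; the numbers and function names are illustrative assumptions, not from the article) comparing 52% accuracy against the cheapest competing strategy for each problem.

```python
# Minimal sketch: the same accuracy can be terrible or excellent depending on the
# problem. Compare a rule's accuracy to the best cheap alternative, not to a
# fixed 50% threshold. All numbers are illustrative assumptions.

def free_throw_baseline(p_hit=0.90):
    """Accuracy of the Simple Rule (always predict a hit) for a 90% shooter."""
    return p_hit

def tournament_baseline(n_teams=16):
    """Accuracy of picking a champion uniformly at random from 16 teams."""
    return 1 / n_teams

rule_accuracy = 0.52
print(f"Free throws: {rule_accuracy} vs. baseline {free_throw_baseline():.2f}  -> far worse")
print(f"Sweet 16:    {rule_accuracy} vs. baseline {tournament_baseline():.4f} -> far better")
```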

A scientific realist and antirealist will disagree about how reliable the Close-Minded Deference Rule is. But as long as we avoid thresholds, these reliability judgments don’t fix the rational status of the rule. Strategic Reliabilism recommends close-minded deference to settled science because it’s the best rule we have. That doesn’t mean it’s highly reliable. It just means it’s more reliable than alternatives to it. And this is why sophisticated realists and antirealists can, should, and usually do give settled science their close-minded deference. Settled science delivers views that best fit the available evidence and that give you the best chance to accurately represent the world. Whether the best chance we have is good (realism) or rather slim (antirealism) doesn’t matter.

Application Worry #1: Value Judgments

The Close-Minded Deference Rule is a rule for coming to scientific judgments about the way the world is. It’s not a rule for coming to value judgments about the way the world ought to be. The rule can tell us what medicines do, but it can’t tell us how (or whether) those medicines ought to be distributed. It can tell us the effects of alcohol, tobacco, and drugs on individuals and on society, but it can’t tell us whether the government ought to restrict their use. It can tell us about the effects on people of industrial chemicals such as BPA (bisphenol A, used in plastic bottles), but it can’t tell us whether the government ought to restrict their use. It can tell us that we’re causing the Earth to heat up and what the effects of various mitigation policies might be, but it can’t tell us how we ought to respond to global warming (Intemann, 2017; de Melo-Martín and Intemann, 2018).

Scientific claims can, do, and should play a role in reasoning to value and policy judgments. But there is no easy argumentative route from claims about settled science to policy claims. For one thing, close-minded deference is not certainty. From “settled science says BPA is safe at low levels” it doesn’t follow that “it’s certain that BPA is safe at low levels.” How confident should we be about any particular instance of settled science? This is a complex issue, one that the Close-Minded Deference Rule does not address. If we had a simple rule for making subtle probability judgments on the basis of complicated evidential situations, we’d have told you about it by now. But even if we had such a rule, it still wouldn’t deliver policy prescriptions. The inferential gap between “there’s a high chance that BPA at low levels does not harm individuals” and “millions of water bottles in the environment, each with low levels of BPA, do not harm individuals” is massive. And the inferential gap between that general empirical claim and the normative claim “we should not ban water bottles with low levels of BPA in them” is at least as massive. To suppose that the Close-Minded Deference Rule gives us license to leap these massive inferential gaps is to misunderstand the rule.

Application Worry #2: Identifying Settled Science

Identifying settled science isn’t trivial. Reflect on the propositions of science, and the settled ones don’t conveniently glow with the light of reason or the inspiration of the gods. It would be a mistake, however, to suppose that we’re innocent greenhorns when it comes to identifying consensus judgments. Suppose you want to know what the weather’s going to be like tomorrow, how to set up an S corporation, what car seat is best for your toddler, or what time the Super Bowl starts. What do you do? You go to a weather app, an experienced lawyer, a consumer protection agency, or the sports page. In other words:

1. You identify judgments produced by individuals (often unknown to you),

2. who work within social institutions that are governed by certain judgment-producing norms and practices,

3. where those norms and practices have a long track record of reliability.

Your investigation can go wrong in all sorts of ways. A weather app might be unreliable. Competent lawyers might not have reached a consensus about the best way to set up an S corporation. A website you think is run by a consumer protection agency is actually a deceptive ad. The sports page had the right time, but you misread it. And so on. But we usually navigate these challenges with few problems. So is there some reason to think that identifying consensus judgments in science is more difficult than identifying such judgments about the weather, the law, car seats, or the start time for the big game? One possible difference is that issues in science are seldom fully settled.

Disagreement is part of the background hum of science. There are contrary and vexatious characters in science who keep controversies alive. As we’ve noted, mavericks who aggressively press minority opinions within the established conventions of science are essential to the healthy operation of science. Even so, the Close-Minded Deference Rule recommends that we nonexperts ignore them. Not because mavericks are always wrong. They’re not. We ignore them for the same reason we ignore the fact that Becky Hammon sometimes misses a free throw. It’s our best bet. As soon as we think we can identify the unlikely exception–Hammon missing a free throw or a maverick being right and settled science being wrong–three bad things happen. We degrade our ability to accurately represent the world. We increase our cognitive workload. And we open our minds to the con.

Another reason to think that it’s difficult to identify settled science is that scientific questions are far more complex and abstruse than “When does the game start?” or “Is it going to rain tomorrow?” And so it’s bound to be more difficult to identify consensus in science. This worry rests on a confusion. The scientific questions that are important for citizens in a democracy to answer are usually pretty straightforward: Is the Earth warming? If so, is it mostly due to human activity? Do species, including humans, evolve? Does smoking cause cancer? Do vaccines cause autism? Do masks prevent the spread of COVID-19? These questions are not hard to understand. Of course, it’s often fiercely hard to understand the scientific evidence and how it supports or undercuts various answers to those questions. But that’s the whole reason for giving close-minded deference to settled science! If the evidence for global warming were easy to sift, we wouldn’t need a deference rule. We could figure out the right answers on our own.

There are some points to keep in mind if we’re going to apply the Close-Minded Deference Rule wisely. It’s helpful to have realistic views about how science works, so that we don’t get caught off guard by the role of dissent and change in science. It’s also important to use good judgment in identifying experts. Experts in science are always experts in a particular scientific field. More importantly, the rule does not ask you to defer to an individual with a doctorate and a fancy academic position. It doesn’t even ask us to defer to a group of individuals with doctorates and fancy academic positions. It asks us to defer to judgments produced by individuals who are embedded in a particular social institution (Bishop, 2005). Experts are always entrenched in the social institutions of science: they publish in established, peer-reviewed journals, they’re awarded federally funded grants, and they serve as editors, referees, and officers of established scientific organizations. (For an extended defense of the idea that a person with wifi and a high school education can reliably identify settled science, see Anderson, 2011).

The biggest barrier to giving settled science our close-minded deference is not that science is hard, perfect consensus is rare, or settled science changes. It’s that we lack epistemic discipline. We see that scientific consensus is sometimes wrong or that Becky Hammon sometimes misses a free throw, and we reckon we have the insight to spot the exceptions. Or we see that a maverick rejects convention, and we reckon we have the acumen to pass judgment on the maverick’s arguments. Armed with this groundless faith, soon enough we’re exploring the molecular structure of cilia to settle the evolution question. Or we’re studying the electrical conductivity of ice cores to establish the bona fides of global warming. It’s fine, of course, to learn about cilia and ice cores. But it’s a mistake for non-experts to take ourselves to be in an epistemic position to resolve such questions. The danger of this sort of epistemic conceit goes beyond the possibility of being played. Open your mind in cases of settled science, and you’re contributing to an epistemic environment that breeds confidence games. No one wants to live in a country full of marks who’ve fallen for the epistemic equivalent of buying the Eiffel Tower, because the predictable result is social policy that is both unjust and injurious.7

So applying the rule isn’t always easy. But as long as you understand the rule, it’s not that hard, either. Frankly, for most of us, figuring out what kind of car best fits our needs is a tougher epistemic challenge than figuring out what most scientific experts think about whether species evolve or whether the Earth is warming (Anderson, 2011).

Why Do We Fall for Epistemic Cons?

We haven’t been shy about identifying people who’ve fallen for epistemic cons. If you have doubts about evolution or global warming or the relative safety of vaccines, you’ve been conned. And when you try to persuade others of your views, you’re perpetuating the con. That’s not to say that you’re a con artist or that you’re being dishonest. In fact, just the opposite. Your honorable intentions are part of the con you’re innocently but unfortunately perpetuating. We have to admit, however, that our conclusions have not been ideologically balanced. In most (but not all) cases, the people we’ve identified as marks tend to be politically conservative. If this makes you feel angry and insulted or if it makes you feel superior and vindicated, you’re still misunderstanding epistemic cons. Falling for an epistemic con is not a sign of ignorance or irrationality.

Think about the people you’re sure have been conned on some scientific issue. It doesn’t matter what side you take on the issue. Suppose you could plot the reasoning skills of the people you disagree with and the people you agree with. The reasoning skills of both groups would run the gamut, of course. But the two profiles would look pretty much the same. The people who agree with you are not better critical thinkers than the people who disagree with you. Being a good reasoner doesn’t predict what your views will be when the con is on. It seems instead to predict that your views will tend to be extreme in one direction or the other (Kahan, 2013). The same point holds for scientific literacy. When the con is on, science literacy very weakly predicts laypeople’s views on settled science (Allum et al., 2008). And so the people who agree with you are about as knowledgeable about science as the people who disagree with you.

When the con is on, what predicts people’s views is not rationality or knowledge. It’s political ideology. But we don’t think the best explanation for this involves some people being more blinded by political ideology than others. We suspect that the connection between ideology and being conned is indirect. To see this, let’s briefly recap the story of the con: We find ourselves at the end of complex information streams, and these streams are transmitted through social institutions. These institutions are governed by their own methods, norms, and practices. Some of these social institutions are reliable, and others are less (and sometimes much less) reliable. We want to suggest that ideology explains why you’re introduced to some epistemic cons and not others. And since the only cons we fall for are the ones we’re introduced to, our ideologies predict which cons we’ll tend to fall for.

The story of how ideology directs some cons your way is a familiar one. A group or institution disseminates magic bullet arguments. Those arguments are likely to be broadcast by media organizations that share the values that underwrite those arguments. And so ideology explains why some streams of information get polluted. Now follow those streams and ask: Why do some people get hooked up to polluted streams in the first place? We choose sources of information we think deliver true and useful facts. And what we take to be true and useful is, in part, a function of our values. The connection between our values and our sources of information is real but imperfect. Nobody subscribes to Scientific American because they dig the magazine’s politics. But media organizations end up with established audiences that tend to reflect their values. And that’s why you might get anti-vaxx cons in your newsfeed while someone else gets anti-global warming cons in theirs.

None of what we’ve said should suggest that ignorance, ideology, and irrationality play no role in epistemic cons. They do. The point is that they don’t play a special outsize role in why people fall for a con.

• Ignorance: No one can be an expert about all science. That’s why we have to trust others, and that leaves us open to being conned. Ignorance about science is like breathing: It’s part of the background condition of being human that makes us all vulnerable to the con. It doesn’t follow that getting conned means that you’re more ignorant about science (or more deprived of oxygen) than the average person.

• Ideology: Ideology explains why cons get distributed the way they do. It explains why you’re introduced to some epistemic cons and not others. It doesn’t follow that falling for an anti-vaxx con means that you’re more blinded by ideology than the average person.

• Irrationality: Falling for an epistemic con involves accepting bad (magic bullet) arguments. But if occasionally falling for bad arguments makes a person irrational, we’re all irrational (Kahneman, 2013). As we’ve seen, the evidence suggests that people who fall hardest for the con are actually better-than-average reasoners. It’s possible that once you’ve opened your mind to the con, it takes a good deal of intellectual dexterity to reason yourself to an extreme position.

Falling for the con is not a matter of being ignorant of science, being blinded by ideology, or being a bad reasoner. If you think it is, you don’t really believe you’re vulnerable to epistemic cons. You don’t really believe the first rule of the con. To suppose that people fall for epistemic cons because they’re ignorant or irrational is not only false, it’s insulting and counterproductive. It’s hard enough to convince someone they’re wrong. But good luck trying to convince them they’re wrong because their ignorance and irrationality make them easy suckers. Leveling this unjustified insult is guaranteed to both anger your interlocutor and undermine your credibility.

So why do people fall for epistemic cons? We fall for epistemic cons for the same reason we fall for financial cons: We trust the wrong people. But there’s more to it. There is a personal failure involved in falling for an epistemic con about settled science. It’s a failure of intellectual humility. It’s a deep overconfidence in our ability to suss out the truth.8 And this failure permeates our current intellectual environment. It makes us reject the Rule of Close-Minded Deference on the grounds that we have the insight to resolve scientific disagreements despite our lack of expertise. Our preening epistemic confidence causes us to give lip service to the first rule of the con, but to believe, deep down, that we’re too smart and savvy to get played for suckers. Our epistemic arrogance presents itself in sharp relief when we give insulting explanations for why others are conned, and when we respond to the allegation that we have been conned with deep offense.

Effective Reporting in the Golden Age of the Epistemic Con

The journalist with the power to reach a huge and trusting audience is a particularly juicy mark for the epistemic con artist. Even if journalists won’t open their minds to the con, perhaps they will report facts in a way that opens their audience’s minds to the con. If the journalist writes stories that convince readers that there’s a genuine issue, the journalist becomes the con artist’s unwitting accomplice. It might seem easy for the journalist to avoid this trap. But it’s not. The journalist faces a dilemma. Either report on the con artist’s fake controversy or don’t. Both options seem to play into the con artist’s hands.

Report on the “controversy”: When we communicate a fact, our audience automatically draws conclusions about what we’re saying. And their conclusions sometimes go beyond what we intend to report. Consider, for example, the Gricean principle of relevance: People assume that if a person says something, they do so to advance the goals of the discussion. Suppose you ask, “Is the cat in George’s room?” and I reply, pointing to George, who is looking under the sofa, “George doesn’t seem to think so!” You wouldn’t interpret me as changing the subject from where the cat is to where George thinks it is. You’d interpret me as advancing the goals of our discussion; that is, you’d interpret me as answering your question as to the whereabouts of the cat (i.e., probably not in George’s room). So if a journalist documents that (say) global warming is settled science and then reports on the views of deniers, the audience will interpret the story as reporting on a genuine debate. Otherwise, why would the reporter mention it? And the con artist gets what they want: the journalist inadvertently opening the audience’s minds to the con.

Don’t report on the “controversy”: The wiser option for the journalist is to ignore magic bullet arguments and report clearly on the facts as represented by our best science. But this option has two drawbacks. The first is the possibility of “boomerang” effects. Parents of adolescents will appreciate that certain messages can produce contrary behavior. Studies suggest that certain antismoking and anti-litter messages can lead people to smoke and to litter more (Reich and Robertson, 1979; Wolberg, 2006). There is evidence that giving scientific facts to people who doubt them can produce more polarized opposition to those facts (Nyhan and Reifler, 2010; Hart and Nisbet, 2012; but see Skitka et al., 2000). The second is that ignoring the “controversy” stoked by epistemic con artists might backfire by feeding the narrative that powerful institutions (such as the “mainstream” media) are suppressing the “truth.” Even with these drawbacks, not reporting on the controversy seems like the lesser of two evils.

This dilemma uncovers an underappreciated feature of epistemic cons. Once a controversy is widely known, the epistemic con becomes self-perpetuating. When it comes to topics like (say) global warming, most people in the Western world know there’s some sort of controversy, even if they don’t know the details. Journalists, it seems, can’t win. They feed the narrative that there’s a controversy whether they engage with it or not.9

Is there a way out of this dilemma? Can the responsible journalist respond to the manufactured controversy in a way that doesn’t play into the con artist’s hands? As far as we know, there’s only one easy-to-use technique for helping people to avoid confidence games: sunshine. Show people how they work. You’re less likely to fall for a con if you know how it works—whether it’s 3-card monte in New York, English students in Beijing, a skeevy “news” story on your Facebook feed, or an anti-vaxxer argument from the Children’s Health Defense group. And so our advice is to tell the story of the epistemic con. Clearly explain its basic framework, how it’s worked in the past, and how it works today. Most everyone accepts the story of the epistemic con—as long as it’s applied to others. The challenge comes when you apply the story to your readers. The journalist can predict the blowback. Expect magic bullet arguments, accusations of bias (or worse), and a knee-jerk “turning the tables” reply: “You’re the dupe. You’re the one who’s been conned.” But it’s right here that you can see the power and the beauty in the story of the con. The loud and angry protestations aren’t objections to the story. They’re an essential part of the story.

The story of the epistemic con has been told many times. But it’s usually told in a way that allows the true villain of the story to escape unnoticed. The villain is not the magic bullet arguments or the people who promote them. It’s our epistemic conceit, our lack of epistemic humility. Tell the true story of the con, and you’ll see the villain clearly in our reactions to the story. The villain is in our harsh judgments of those who’ve fallen for the con, and it’s in our offended dismissals of anyone who has the temerity to suggest that we’ve been conned. The true story of the epistemic con has at its heart a wicked reveal: It gives each of us an opportunity to see what our actual role in the story has been. More importantly, it gives each of us an opportunity to choose for ourselves the role we want in the story going forward.

Author Contributions

All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Footnotes

1The great Ricky Jay put this point as follows: “You wouldn’t want to live in a world where you couldn’t be conned… Because it would mean you’re living in a world where you never trusted anyone or anything… The element of the con is trust. You’re giving trust… That’s what you provide. To live without it is to be suspicious of every single thing that goes on” (Haberman, 2013).

2Usually, but not always. Epistemic cons are like pyramid schemes. The fact that we unwittingly run the con on our family and friends is itself part of the con.

3Also focusing on trust, reliability and settled science, Oreskes (2019) arrives at similar conclusions by routes through the history of science and science studies.

4The epistemic status of consensus has received much attention in the philosophy of science literature—particularly in the voluminous “robustness analysis” research. Although much of this literature presses the point that the possibility of error in an otherwise reliable methodology is compatible with its actual reliability (Wimsatt, 1981; Trout, 1998; Bishop and Trout, 2005; Wimsatt, 2007; Stegenga and Menon, 2017), this literature takes very seriously that identifiable intellectual standards have survived the many ways that even expert consensus has gone wrong.

5This reference will be familiar to many readers living through the ongoing coronavirus pandemic of 2020–21. This footnote is necessary because future readers will have heard of Galileo but not Didier Raoult. Hydroxychloroquine is an anti-malarial drug that President Donald Trump repeatedly recommended in 2020 as an effective treatment for covid-19. His recommendations were based on highly questionable studies conducted by the maverick physician and microbiologist Didier Raoult. As of this writing, the overwhelming scientific consensus is that Raoult’s hypothesis is false.

6Here’s an underappreciated difference between the theories. Reliabilism about justification has a problem providing a principled way to determine which “mechanism” counts as the one whose reliability is relevant to the justificatory status of a belief. This is the generality problem (Goldman, 1979; Feldman, 1985; Bishop, 2010). But Strategic Reliabilism doesn’t suffer from the generality problem because it’s not in the business of evaluating beliefs. It eludes the generality problem in the simplest, most obvious way possible: If you want to know what mechanism is the right one for evaluating the epistemic status of a belief, build a theory that tells you what the epistemically good and bad belief-forming mechanisms are. That’s what Strategic Reliabilism does. If you reason rationally (irrationally), the resulting belief is the product of good (bad) reasoning. If good reasoning can lead to unjustified beliefs or bad reasoning can lead to justified beliefs, so much the worse for justification.

7As we’ve noted, good reasoning about science (by, for example, using the Close-Minded Deference Rule) does not guarantee that we’ll come to good policy conclusions. But bad reasoning about science makes coming to good policy conclusions far more difficult.

8Even if it should turn out that people who fall for epistemic cons are on average more irrational or ignorant than those who don’t, this would not much alter our conclusions. And that’s because the epistemic con doesn’t depend for its success on ignorance or irrationality. It depends on epistemic conceit. Give close-minded deference to settled science and you don’t need great knowledge or brainpower to elude epistemic cons.

9The journalist’s dilemma is perfectly captured by Derek Thompson, a staff reporter for The Atlantic, in his recent coverage of the Covid vaccine falsehoods spread by social media vector Alex Berenson. Thompson describes his predicament this way: “To be honest, I initially had serious doubts about publishing this piece. The trap of exposing conspiracy theories is obvious: To demonstrate why a theory is wrong, you have to explain it and, in doing so, incur the risk that some people will be convinced by the very theory you’re trying to debunk. But that horse has left the barn. More than half of Republicans under the age of 50 say they simply won’t get a vaccine. Their hesitancy is being fanned by right-wing hacks, Fox News showboats, and vaccine skeptics like Alex Berenson. The case for the vaccines is built upon a firm foundation of scientific discovery, clinical-trial data, and real-world evidence. The case against the vaccines wobbles because it is built upon a steaming pile of bullshit.” (The Atlantic, April 1, 2021).

References

Ackerman, B., and Fishkin, J. (2004). Deliberation Day. New Haven, CT: Yale University Press.

Allum, N., Sturgis, P., Tabourazi, D., and Brunton-Smith, I. (2008). Science Knowledge and Attitudes Across Cultures: a Meta-Analysis. Public Underst Sci. 17 (1), 35–54. doi:10.1177/0963662506070159

Anderson, E. (2011). Democracy, Public Policy, and Lay Assessments of Scientific Testimony. Episteme. 8 (2), 144–164. doi:10.3366/epi.2011.0013

Bishop, M., and Trout, J. D. (2005). Epistemology and the Psychology of Human Judgment. New York, NY: Oxford University Press.

Bishop, M. A. (2005). The Autonomy of Social Epistemology. Episteme. 2 (1), 65–78. doi:10.3366/epi.2005.2.1.65

Bishop, M. A. (2010). Why the Generality Problem Is Everybody's Problem. Philos. Stud. 151 (2), 285–298. doi:10.1007/s11098-009-9445-z

Boyd, R. N. (1983). On the Current Status of the Issue of Scientific Realism. Erkenntnis. 19 (1-3), 45–90. doi:10.1007/978-94-015-7676-5_3

Cook, J. (2021). Global Warming & Climate Change Myths. Available at: https://skepticalscience.com/argument.php.

Cox, R. (2006). Environmental Communication and the Public Sphere. Thousand Oaks, CA: Sage.

Craig, G. (2016). Political Participation and Pleasure in Green Lifestyle Journalism. Environ. Commun. 10 (1), 122–141. doi:10.1080/17524032.2014.991412

de Melo-Martín, I., and Intemann, K. (2018). The Fight against Doubt: How to Bridge the Gap between Scientists and the Public. New York: Oxford University Press.

Einhorn, H. J. (1986). Accepting Error to Make Less Error. J. Personal. Assess. 50 (3), 387–395. doi:10.1207/s15327752jpa5003_8

Feldman, R. (1985). Reliability and Justification. The Monist. 68, 159–174. doi:10.5840/monist198568226

Goldman, A. (1979). “What Is Justified Belief?,” in Justification and Knowledge. Editor G. Pappas (Dordrecht: Reidel).

Goodin, R. E., and Dryzek, J. S. (2006). Deliberative Impacts: The Macro-Political Uptake of Mini-Publics. Polit. Soc. 34 (2), 219–244. doi:10.1177/0032329206288152

Haberman, C. (2013). Straight Talk from a Professor of Flimflam (But Don’t Ask His Age). New York, NY: The New York Times. Available at: https://www.nytimes.com/2013/04/22/nyregion/straight-talkfrom-a-professor-of-flimflam-but-dont-ask-his-age.html.

Hardwig, J. (1985). Epistemic Dependence. J. Philos. 82, 335–349. doi:10.2307/2026523

Hardwig, J. (1991). The Role of Trust in Knowledge. J. Philos. 88, 693–708. doi:10.2307/2027007

Hart, P. S., and Nisbet, E. C. (2012). Boomerang Effects in Science Communication. Commun. Res. 39 (6), 701–723. doi:10.1177/0093650211416646

Hilts, P. J. (1981). Science Loses One to Creationism. Washington, DC: The Washington Post. Available at: https://www.washingtonpost.com/archive/politics/1981/10/15/science-loses-one-tocreationism/9961cfce-fecf-43f9-8e7c-5f875bb448f4/.

Intemann, K. (2017). Who Needs a Consensus Anyway? Addressing Manufactured Doubt and Increasing Public Trust in Climate Science. Public Aff. Q. 31 (3), 189–208.

Kahan, D. M. (2013). Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment Decis. Making. 8 (4), 407–424.

Kahneman, D. (2013). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.

Kitcher, P. (1990). The Division of Cognitive Labor. J. Philos. 87 (1), 5–22. doi:10.2307/2026796

Moore, D. A., and Healy, P. J. (2008). The Trouble With Overconfidence. Psychol. Rev. 115 (2), 502–517. doi:10.1037/0033-295x.115.2.502

Nyhan, B., and Reifler, J. (2010). When Corrections Fail: The Persistence of Political Misperceptions. Polit. Behav. 32, 303–330. doi:10.1007/s11109-010-9112-2

Oreskes, N., and Conway, E. (2010). Merchants of Doubt. New York: Bloomsbury.

Oreskes, N. (2019). Why Trust Science? Princeton, NJ: Princeton University Press.

Pezzullo, P. C. (2003). Resisting “National Breast Cancer Awareness Month”: the Rhetoric of Counterpublics and Their Cultural Performances. Q. J. Speech. 89 (4), 345–365. doi:10.1080/0033563032000160981

Reich, J. W., and Robertson, J. L. (1979). Reactance and Norm Appeal in Anti-Littering Messages. J. Appl. Soc. Pyschol. 9, 91–101. doi:10.1111/j.1559-1816.1979.tb00796.x

Skitka, L. J., Mosier, K., and Burdick, M. D. (2000). Accountability and Automation Bias. Int. J. Human-Computer Stud. 52 (2000), 701–717. doi:10.1006/ijhc.1999.0349

Stegenga, J., and Menon, T. (2017). Robustness and Independent Evidence. Philos. Sci. 84 (3), 414–435. doi:10.1086/692141

Strevens, M. (2003). The Role of the Priority Rule in Science. J. Philos. 100 (2), 55–79. doi:10.5840/jphil2003100224

Trout, J. D. (1998). Measuring the Intentional World: Realism, Naturalism, and Quantitative Methods in the Behavioral Sciences. New York: Oxford University Press.

Trout, J. D. (1992). Theory-Conjunction and Mercenary Reliance. Philos. Sci. 59, 231–245. doi:10.1086/289664

Trout, J. D. (2016). Wondrous Truths. New York: Oxford University Press.

von Essen, E. (2017). Whose Discourse Is it Anyway? Understanding Resistance Through the Rise of “Barstool Biology” in Nature Conservation. Environ. Commun. 11 (4), 470–489. doi:10.1080/17524032.2015.1042986

Wimsatt, W. (2007). Re-engineering Philosophy for Limited Beings: Piecemeal Approximations to Reality. Cambridge, MA: Harvard University Press.

Wimsatt, W. (1981). “Robustness, Reliability, and Overdetermination,” in Scientific Inquiry and the Social Sciences. Editors M. B. Brewer, and B. E. Collins (San Francisco: Jossey-Bass), 124–163.

Wolberg, J. M. (2006). College Students’ Responses to Antismoking Messages: Denial, Defiance, and Other Boomerang Effects. J. Consumer Aff. 40, 294–323.

Woodward, R., and Bernstein, C. (1974). All the President’s Men. New York: Simon & Schuster.

Keywords: epistemology, science reporting, global warming, social epistemology, confidence game, strategic reliabilism, close mindedness

Citation: Bishop MA and Trout JD (2021) The Epistemic Virtues of a Closed Mind: Effective Science Reporting in the Golden Age of the Con. Front. Commun. 6:545429. doi: 10.3389/fcomm.2021.545429

Received: 30 March 2020; Accepted: 02 September 2021;
Published: 17 September 2021.

Edited by:

Tarla Rai Peterson, The University of Texas at El Paso, United States

Reviewed by:

Cristi Choat Horton, Tarleton State University, United States
Deborah Cox Callister, University of San Francisco, United States

Copyright © 2021 Bishop and Trout. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: J. D. Trout, jtrout@iit.edu
