- Department of Philosophy, University of Liverpool, Liverpool, United Kingdom
In this paper I argue that rumors pose a challenge to effective science communication. I also argue that it is sometimes reasonable for ordinary laypeople to trust rumors over the experts. The argument goes like this. There are strong fact-value entanglements in the sciences. Further, my friends and neighbors may be more likely than the experts to make value judgments that line up with my own. As such, it can make sense for me to pay close attention to their testimony. It may even make sense for me to trust testimony within my peer network—or “rumors”—more than the experts, particularly if the experts' values are especially opaque or suspicious to me. I ground this discussion in the recent West African Ebola outbreak, where rumors posed a substantial challenge to containing the epidemic.
Introduction
In this paper I argue that rumors pose a challenge to effective science communication. I also argue that it is sometimes reasonable for ordinary laypeople to trust rumors over the experts. The argument goes like this. There are strong fact-value entanglements in the sciences. Further, my friends and neighbors may be more likely than the experts to make value judgments that line up with my own. As such, it can make sense for me to pay close attention to their testimony. It may even make sense for me to trust testimony within my peer network—or “rumors”—more than the experts, particularly if the experts' values are especially opaque or suspicious to me.
Throughout this paper, I use “rumor” to mean information passed through informal networks of communication, usually through very many sources, and which we typically receive from members of our social circles. That is, from friends, neighbors, relatives, fellow pub-goers, etc. Rumors are essentially “unofficial” (Coady, 2006a, p. 48); that is, there is nothing to underwrite their credibility (Goldman, 2011, p. 99). This is in stark contrast to the sciences, where checks like peer review underwrite the credibility of scientific testimony (Longino, 1990). Rumors are a form of peer testimony, as opposed to expert testimony.
When I use the term “expert” I mean credentialed members of mainstream scientific communities, excluding members of fringe groups within the sciences. I also use “expert” to describe envoys of the scientific community, such as medics, who are not “scientists” per se, but who still enjoy credentialed epistemic authority relative to laypeople, and from whom we receive much of the scientific testimony that we use in our lives (Epstein, 1996, p. 6). This initially seems to be a strictly social definition of expertise, but underpinning it are more epistemic and scientific considerations, such as Goldman's (2011) view that an expert ought to possess “a substantial body of truths in the target domain” (p. 91), and Longino's (1990) position that the community of mainstream science provides checks that make it likely that its members possess substantial bodies of truths about their targets. Of course, there will be outlier cases, where members of fringe scientific communities are actually right about a particular topic (think Galileo), and cases in which those who are right aren't scientists at all (think of early AIDS activists in the 1980s; Epstein, 1996). But this is a paper written from the perspective of ordinary epistemic agents who are not members of the scientific community, but who still need to make use of science in their day-to-day lives. They receive information from many different sources, and for the purposes of discussion we need to demarcate those sources. Some of the material people get comes from official sources of science communication—the experts—while other information comes from their peers. Real life is messier and won't quite map onto this picture, but it is useful for the sake of discussion to think about people receiving information in at least these two ways.
A final piece of clarification before proceeding concerns what is meant by “reasonableness.” By this term, I don't mean that it is strictly rational for people to pay more attention to peer testimony than to expert testimony. Rather, I mean that we can understand their reasons for doing so, and that those reasons make sense. It is important to remember that in the cases we are interested in here, agents are not engaged in strictly epistemic exercises. They need to make judgments about what to do for themselves and those they love, often under extremely difficult circumstances: do I take my sick sister to the Ebola treatment center? Do I get an HIV test? Do I vaccinate my child? These situations involve all-things-considered judgments, and so it is unsurprising that the appropriateness of those decisions extends beyond epistemic considerations.
This discussion will take place in the context of the recent West African Ebola outbreak (roughly 2013–2016). This example clearly shows the disruptive force of rumors on the effectiveness of science communication, and in a situation where the stakes are high: trying to contain a deadly infectious disease. The Ebola case is also complicated and multi-faceted. There were obviously substantial challenges to containment that had nothing to do with rumors or science communication, such as the difficulty of dealing with an epidemic in a health context where resources are already over-stretched (Walsh and Johnson, 2018). I am not arguing that rumors were the only source of containment problems in the Ebola epidemic. Rather, I use the example of Ebola to ground the philosophical discussion of rumors, trust, and expertise.
The paper is structured as follows. It starts with a brief introduction of the Ebola case, focusing on the role of rumors. I then discuss reliance on experts and fact/value entanglements in the sciences, paying close attention to the substantial ways in which these are intertwined. I argue that if you have reason to be suspicious of the values used in the production and/or communication of science, it might be reasonable to trust your peers over the experts. I conclude by responding to two challenges to my position: that my peers don't know anything about science, and that rumors are inherently untrustworthy.
The Ebola Case
The West African Ebola epidemic ran roughly from 2013 to 2016, covering Guinea, Liberia, and Sierra Leone (the so-called “Ebola Triangle”), with 28,646 confirmed cases and 11,308 confirmed deaths (World Health Organization, 2016; Hofman and Au, 2017). There had been Ebola outbreaks in Africa before, but they were smaller, typically occurring in geographically isolated locations with small populations, and the disease's quick progression and high mortality rate limited its spread (Hofman and Au, 2017, p. 15–16; Richards, 2016, p. 15). This time it was much bigger, resulting in a large-scale international intervention, including from the WHO (World Health Organization), MSF (Médecins Sans Frontières), various national militaries, and non-profits. These interventions were met with resistance of varying degrees in all three countries. The most extreme resistance was in Guinea, where there were frequent violent attacks on clinics and, ultimately, in 2014, the murder of eight members of a medical team to prevent them from gaining access to the community (Fairhead, 2016, p. 9). But there was also resistance in Sierra Leone and Liberia, including failing to report to clinics when ill and hiding the sick from medical teams, both of which hampered containment efforts. It is widely accepted that the epidemic would have ended more quickly had it not been for this resistance (Wilkinson and Fairhead, 2017, p. 14).
In all three countries there was substantial science communication, including biomedical information about the disease, its symptoms, and what to do if you or your loved ones presented with symptoms (Chandler et al., 2015). However, there were also rumors in all three locations, and these rumors drove resistance. Rumors included accusations that medics were spreading the disease rather than treating it, that the disease was a plot by governments to take out troublesome marginal groups, that it was a genetically engineered disease from the West designed to kill Africans, and that medical teams were stealing the bodies of the deceased to sell their organs on the international organ market (Fairhead, 2016; Abramowitz et al., 2017; Desclaux et al., 2017; Wilkinson and Fairhead, 2017).
The role of “science journalism” in all of this is complicated. There was both international and local coverage of the epidemic (see Zhang and Matingwina, 2016 for discussion of international coverage). In terms of local science journalism, the picture is murky: some outlets were clearly aligned with official sources of science communication, while others closely resembled rumors. Newspapers remain an important source of information in West Africa. Many newspapers are independent and small and rely on sensational stories to keep sales up; the stories they published during the epidemic were similar to the rumors discussed above (Halsey, 2016). It makes sense to think of these small independent newspapers as close to peer testimony, given their proximity to communities and their publication of content that allied with what communities were already saying. There are also more established media outlets in the region, which saw their role as accurately conveying information about the disease and partnered with national and international health authorities to do so (Halsey, 2016; Legault, 2017). These latter outlets are thus closely aligned with official sources of science communication (and may themselves be official sources of science communication). So we see the demarcation between rumors and official sources of science communication replicated in the media outlets.
That rumors were able to gain traction over official science communication in many instances (although not all) suggests that the rumors were often trusted more than the official sources of science communication. This is a place where social epistemology and philosophy of science can help the discussion, not in diagnosing the phenomenon—that is the realm of the social sciences—but in understanding whether such behavior can be reasonable.
Epistemic Divisions of Labor, Experts, and Rumors
We rely on divisions of epistemic labor in our normal social lives. No one of us can individually know everything we need to know to get by, and so we divide the epistemic tasks between us (Kitcher, 1990; Goldberg, 2010). For this useful practice to succeed, we need to be able to rely on each other to access this information as we need it. This is arguably the rationale for the creation of experts, and it is typically accepted that we (in our capacity as laypeople) should defer to the experts on the topics on which they are expert (Kitcher, 2011). Philosophical debate gets heated when there is expert disagreement or when it isn't clear who the experts are (Goldman, 2011; Coady, 2006b; Jones, 2002), but within this literature it is often presumed that you should defer to the experts, and the challenge is identifying who the experts really are.
By contrast, peer testimony is often discredited. Rumors are taken to be especially poor sources of information—Coady (2006) describes rumors as a form of “pathological testimony.” Goldman (2011) takes rumors to be so obviously bad that he uses this as a premise in an argument against finding information endorsed by many more credible than information endorsed by only a few (p. 99). Gelfert (2013), in a rare defense of rumors, argues that it can make epistemic sense to pay close attention to rumors when there are no official sources of information available (perhaps you live under a dictatorial regime in which trustworthy information is scarce), or when the rumor provides a first encounter with a new piece of information you wouldn't otherwise have accessed.
Rumors in the Ebola case are nothing like the instances of acceptable rumors that Gelfert suggests—that is, when there is no good information available, or when rumors offer a first point of contact for a new piece of information. Science communication meant there was plenty of good official information about Ebola available, and while rumors may have been a first contact point about Ebola, they need not have been.
One philosophical response to the Ebola case might be that those who paid attention to rumors about the disease were obviously irrational and thus not philosophically interesting. However, the phenomenon of trusting rumors over the experts is much broader than just Ebola. Steinberg (2016) details similar rumors in South Africa in the mid-2000s, when HIV testing was first being rolled out so that HIV-positive people could access treatment. Steinberg recounts the experience of one MSF doctor to illustrate the challenge that rumors posed to the effective implementation of the treatment scheme in South Africa:
The white MSF doctor who spearheaded the programme introduced himself to the local population by traveling from village to village personally conducting voluntary HIV tests. Within weeks, rumors circulated that the doctor was spreading HIV in his needle. It was said that he was an agent of the old apartheid regime; that a conspiracy was in progress to infect so many black people with AIDS that whites would become an electoral majority. In one village, a hostile crowd confronted the doctor with this accusation. The tension only abated when he stood up on a table, drew his own blood in front of the assembled crowd and explained as best he could the science behind the tests that would be done on it (Steinberg, 2016, p. 66–67).
Further, trust in rumors over the experts is broader than cases of infectious diseases in African countries. Consider the role of rumors in anti-vaccination movements in North America and Europe. Again, this is a subject on which there has been considerable science communication, and yet rumors about a causal connection between the MMR vaccine and autism persist (Goldenberg, 2016, p. 553; Kitta and Goldberg, 2017, p. 509). Paumgarten (2019), in his coverage of recent measles cases in a Jewish Orthodox community in New York, notes that information passed through informal community networks was instrumental to the vaccine opposition that ultimately resulted in this particular outbreak. In Paumgarten's words:
People often talk about the anti-vaccination movement as a social-media phenomenon, but in the ultra-Orthodox community, where women are discouraged from using computers and smartphones, it has apparently spread mostly among mothers by word of mouth, through phone trees, leaflets, and gatherings: still viral, but analog. “It's more about social networks than social media,” Gellin, of the Sabin Vaccine Institute, said (Paumgarten, 2019).
Given how widespread the issue of trusting rumors over science communication is, it would be too quick to dismiss it as obviously irrational. In the next section, I recount the familiar philosophical view that science is imbued with values. I do this to provide a values-based justification for trusting rumors over the experts.
Fact/Value Entanglements
It is now widely accepted that the sciences are thoroughly value-laden. Values are used to select which scientific projects to pursue, which methods to use, how to interpret data, and how to communicate findings to various users of science, amongst other decision points at which values are invoked (Longino, 1990; Douglas, 2009; Kitcher, 2011). It is also accepted that many of these values are social and political, rather than strictly epistemic, cognitive, or scientific. However, as Matthew Brown points out, what exactly these values are and how we get the values right is currently under-theorized in the philosophy of science literature (Brown, 2018, p. 6). It is well beyond the scope of this paper to tackle this question. However, there has been substantial work showing that there are values in science, and that is all I need for this argument to proceed.
One example, from very many, that places values at the very heart of scientific work is the role that values played in the selection of theories of AIDS's etiology in the early 1980s. At the point when AIDS deaths were first reported, there were many theories about what was causing AIDS, but the two strongest contenders were the “microbial account”—that AIDS was caused by a yet-to-be-discovered microbe—and the “immune overload theory”—that AIDS was a lifestyle disease caused by factors associated with the 1980s urban American gay scene, specifically drug use and repeated STD exposure. Initially, the immune overload theory was seen as the more plausible, because of how exotic the “gay lifestyle” appeared to researchers (Epstein, 1996, p. 56). This was a scientific judgment of the relative plausibility of theories based on social/political values, even if the researchers were unaware that they were making it.
Much philosophical effort has gone into figuring out what to do about the value-ladenness of the sciences. Suggestions have been made that the values should be made more transparent to the public, that opportunities for public engagement with the values should be available (Kitcher, 2011; de Melo-Martin and Intemann, 2018), or that efforts be made to remove values from the sciences entirely (Bright, 2018, articulating Du Bois's position). But it is often unclear how these suggestions would be implemented in practice.
One problem is that sometimes the facts and values are so entangled that it is impossible to remove the values without unraveling the whole scientific enterprise. Elliott (2017) uses the language of a “tapestry of values” to illustrate how deeply entwined the facts and the values can be. Sometimes, strategies for increased value-transparency, or efforts to include the public in value choices in the sciences, won't be able to succeed. This is because the decision points at which value judgments are invoked are often too technical and too numerous to be practically explicable to people without relevant expertise. Reiss's (2017) work on fact-value entanglements in economics makes this especially clear. One example he uses to make this point is the development of consumer price indices—that is, measures of how much a fixed basket of goods costs over time. Just one value-based decision point in this metric is how to weight the purchasing activities of various households—do we weight the standard purchases of the poor more heavily than those of the rich, or do we weight every household the same? (The sketch below illustrates the choice.) These decisions have practical implications for the uses of the metric, and various individuals and groups will benefit (or not) depending on these choices. However, being able to see the value judgments that are made when choosing between household weightings requires economic expertise, making it difficult, if not impossible, to explain to the public. And there are many other such value-laden technical decision points in the development of just this one metric. Cases like this indicate that calls from philosophers to make the values more transparent or to allow more value-based public participation are not always feasible. Of course, sometimes value transparency will be easy—perhaps I am a social scientist and a card-carrying member of the Labour Party (in the UK); I could just declare my political affiliation, and that would go some way to making the values transparent.
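To make the weighting choice concrete, here is a minimal sketch in standard index-number notation (my own illustration, not drawn from Reiss). Let $I_h$ be the price index faced by household $h$, let $e_h$ be that household's total expenditure, and let $H$ be the number of households. Two common aggregation rules are

$$I_{\text{plutocratic}} = \sum_{h=1}^{H} \frac{e_h}{\sum_{h'} e_{h'}}\, I_h, \qquad I_{\text{democratic}} = \frac{1}{H} \sum_{h=1}^{H} I_h.$$

The “plutocratic” index weights each household by its share of total spending, so the consumption patterns of the rich dominate; the “democratic” index counts every household equally, so the purchases of the poor matter as much as anyone else's. The mathematics is silent on which rule to use: that is a value judgment, and one that is invisible to anyone who doesn't know where to look.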
When I (as a lay member of the public) am presented with the messy mass of facts and values, I may not be able to unravel the values from the facts. Further, as indicated above, I have good reason to believe that the whole process of producing the sciences is imbued with these values. This is fine if I believe the values involved are neutral, or that they are close to my own, or if I suspect that they are values that I would endorse, but what about situations in which I am suspicious of the values? In situations like this, value judgments made by my peers may be more trustworthy to me than those made by the scientists, and as such, I may trust their testimony more.
Kitcher (2011) hints at something like this in Science in a Democratic Society, when he argues that one of the key ways that members of the public become disillusioned with the sciences is when there are “opaque value judgments” at work in the production of science (p. 155). He goes on to explain that he means cases in which the value judgments aren't transparent to the public (i.e., they are “opaque”), and in which the public suspects that the values run counter to those that would have been chosen by a democratic process. His solution is to have more democratic involvement in science. I think Kitcher's assessment of opaque value judgments is correct, but I diverge from him in two ways. First, I think that opaque value judgments are a problem because individuals suspect the values run counter to their own values, not necessarily to those that would be selected via a democratic process. Second, I think that democratic solutions to the problem of opaque value judgments are often unavailable, because the value-based decision points are technical and numerous (as discussed in Reiss's example from economics). Instead, I think that the problem of opaque value judgments means that it can make sense for me to trust my neighbors more than the experts, because my neighbors' values are clear to me and are likely to align with my own.
An obvious initial objection to my position is that this is all rather complicated, and that ordinary laypeople are not aware of the intricate intertwining of facts and values that Reiss and Kitcher describe. However, although ordinary members of the public might not be able to articulate this problem the way that philosophers do, they are aware that something like it is afoot. When someone who is suspicious of a mainstream climate scientist makes a comment like “of course he would say that, he is a Birkenstock-wearing, granola-eating hippy,” they are articulating a suspicion about values in order to question a scientific claim.
Further, recent psychological work on Cultural Cognition argues that individuals do in fact trust experts whose values more closely align with their own. A classic example is that one's political affiliation typically aligns closely with one's views on climate science (Kahan, 2016). Social epistemologists have paid attention to this development in psychological research, but have argued that it is a problem in need of remedy. McKenna (2019) argues that we should recognize it as a potential source of bias in our reasoning processes and make appropriate efforts to mitigate it. Ballantyne (2019) argues that this aspect of our psychology is likely to make us poor judges of whom to trust in cases of expert disagreement (p. 237). However, once we recognize how thoroughly value-laden the sciences are, trusting those whose values we share seems like a reasonable thing to do.
At this point we should pay closer attention to the type of testimony we receive from our peer network vs. the type of testimony we receive from experts, because this will help to clarify who we are trusting on what kinds of issues.
Peer Testimony vs. Expert Testimony
Testimony From Experts
The debate about whether scientists can deliver value-free testimony is well-established, going back at least to Rudner's (1953) introduction of the problem of inductive risk. The issue has since been developed to take account of scientists acting as policy advisors (Steele, 2012). But it is further complicated once we acknowledge how baked into the scientific process values can be—even if one can find a value-neutral way of articulating the products of science, the values will be there in the way that the science has been produced. The picture is even more complicated when we look at the way that ordinary laypersons, as opposed to policy makers, science journalists, or other kinds of elites, receive testimony about science.
Often the information that is conveyed to ordinary members of the public reflects not just the values that we typically find in the sciences, but also various value-based policy decisions that have already been made and are then conveyed as all-things-considered judgments about what one ought to do. One example of this is the medical advice that individuals receive when they test HIV positive: that they should immediately start anti-retroviral therapy (ARVs). This reflects a broader policy commitment to “treatment as prevention,” and is a very recent development in HIV/AIDS treatment protocols. Previously, there were extensive medical debates about how low one's CD4 count^1 should be before beginning treatment. Importantly, when someone receives the piece of information that they are HIV positive and thus ought to go on treatment immediately, what they are receiving is heavily value-laden (both from the sciences and from health policy makers) and may run counter to the individual's own value judgments (Seckinelgin, 2020). Perhaps the individual would rather delay starting treatment—being on ARVs can be unpleasant, it can be expensive depending on where one lives, and it may result in needing to go on more severe regimens sooner than if initial treatment had been delayed. This problem is not unique to HIV/AIDS, but occurs in many spheres in which policy decisions underpin science communication, such as in clinical guidelines.
The point of this sub-section is that often scientific information is given to us (in our capacity as laypeople) in the form of all-things-considered judgments about what to do, even if the philosophical ideal may be that we receive the “facts” and make our own value-laden all-things-considered judgments about what to do with that information.
Testimony From Peers
Earlier in this paper, I discussed epistemic divisions of labor and how they give rise to experts. However, it is also the case that we have something like epistemic divisions of labor within our peer networks. My neighbors and friends may have already encountered situations that I am only just encountering for the first time. They may have done their homework on the issue and successfully navigated their way through a difficult course of action. Importantly, they may have done this from a social position that is very much like my own, with values that are like my own and that I know to be like my own.
One example of this is pregnancy and place of birth. There is significant debate over what the evidence shows about the safety of giving birth outside of an obstetric unit. McClimans (2017) and de Melo-Martin and Intemann (2012) argue that interpretations of the evidence are value-dependent. Issues such as acceptable levels of risk, and risk to the mother versus risk to the fetus, are value-laden. In this context, if I were pregnant and making decisions about where I plan to give birth, I might talk to my friends with children about where they gave birth, why they made those decisions, and what those experiences were like. This is a kind of epistemic division of labor, but when I consult my friends for advice on place of birth, I am knowingly receiving all-things-considered judgments about what to do in my situation.
Thinking more carefully about the testimony one gets from experts and the testimony one gets from peers should further strengthen the core argument of this paper: trusting testimony I receive from my friends and peers over that of experts can sometimes be a sensible strategy. It is often the case that both sets of testimony are value-laden. However, at least I know that my friends' values reflect my own. In the following sections I will consider two objections to my view: (1) that rumors are inherently unreliable, and (2) that my neighbors know nothing about science.
The Inherent Unreliability of Rumors
A concern about trusting rumors is their epistemic status—how likely are they to be true? This is a point of some controversy. There is some experimental evidence that the sheer number of individuals involved in rumor networks diminishes their accuracy: each new agent in the network introduces a point at which errors can be made and incorrect information passed on (Allport and Postman, 1947, cited in Coady, 2006a, p. 50). Think of the children's game “Broken Telephone,” in which children transmit a message down a row and see whether the message at the end of the line is the same as the one at the start. It almost never is. On this view, you shouldn't ever trust rumors, because they are inherently extremely poor sources of information.
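A toy calculation makes the compounding effect vivid (this is my own illustration, not a result from Allport and Postman). Suppose each link in a chain of transmission passes the message on intact with probability $q$, independently of the others. After $n$ links, the probability that the original message survives unaltered is

$$q^n, \qquad \text{e.g.,} \quad 0.95^{20} \approx 0.36.$$

So even if every individual teller is 95% reliable, a twenty-link chain delivers a garbled message more often than not.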
There are also arguments to the contrary. Notably, some argue that real-world rumors (as opposed to their experimental counterparts mentioned above) are likely to be accurate, because “plausibility” is a selection pressure on whether they are transmitted (Coady, 2006a, p. 45–47). Think of a “Marketplace of Ideas,” but for rumors—only the plausible rumors get passed on. Further, in real-world conditions, individuals may have opportunities to judge the credibility of the sources from whom they receive information (Coady, 2006a, p. 50).
However, debating whether or not rumors are good sources of literally true information is beside the point—how much of the testimony in our social lives is about conveying literally true information to our peers? This links to broader issues in epistemology about the status of testimony, in which “[d]iscussion [in epistemology] is restricted to cases in which the speaker's utterance is meant literally, rather than rhetorically, playfully, figuratively, fictionally, or ironically” (Adler, 2017). As Adler (2017) points out, this is a move made for “brevity,” rather than a reflection of how testimony actually works. Most of our actual testimony is looser than the accurate conveyance of true information, but it does other kinds of work.
Consider Davis's (2017) recent popular book Post-Truth: Why We Have Reached Peak Bullshit and What We Can Do About It. In it, he argues that there are large swathes of testimony that are not literally true, and that we know are not literally true, but which are useful for conveying other sorts of information. To borrow an example from Davis (2017), when you go to a dinner party and the host burns the tart for dessert, you would never agree with the host that it is burnt. Instead you say that it is “delicious” and “perfect.” Everyone knows that this isn't true. But you are conveying to everyone at the party that you are polite and know how to behave at dinner parties (Davis, 2017, p. 99). Something similar happens with untrue testimony at the political level. In another example from Davis (2017), when Donald Trump makes speeches in which he states that unemployment is at 35–40%, the number is so wildly out of step with the actual figures that it shouldn't be taken as a piece of testimony attempting to convey literal truth. Instead it is meant to convey sympathy for the working classes, and the exaggeration helps to do that (Davis, 2017, p. 32).
Rumors have this same quality. Even if they aren't literally true, story-telling and exaggeration help to more clearly express the central messages: “stay away from the medics” or “don't vaccinate your children.” Perhaps the rumor that medics in Guinea were snatching bodies to sell their organs on the black market should not be understood as literally true, but rather as a shocking story used to make sure that the message of avoiding the medics sticks, and avoiding the medics can be a sensible thing to do given the infectiousness of the disease.
To summarize this section: rumors may or may not be good at conveying literally true information about the world—there are arguments on both sides of this debate. However, rumors, like much of our ordinary testimony, can do other work, such as emphasizing the all-things-considered judgment about what you ought to do.
But My Neighbors Know Nothing About Science
At this point, you might accept both that facts and values are often deeply entangled in the sciences, and that your neighbors' value judgments might be more likely to align with your own than those of scientists (or, at least, that scientists' values may be opaque). However, you might resist the conclusion that this can give you reason to trust rumors over more official sources of science communication. After all, there are two parts to the fact/value entanglement, and while my neighbors might be better on the values side, they know nothing about the facts. How troubling this is will depend on who you are as an agent, and on what your aims and priorities are.
Presumably, it is not news to you that your neighbor is not a scientific expert. When you decide to trust their testimony over that of scientists, part of that decision may involve being willing to take an epistemic hit in order to make sure that your values are preserved. Consider the Jehovah's Witness who is fully aware of the scientific facts around blood transfusions, but on value grounds refuses to allow their child to receive one. Similarly, if I am in the midst of an Ebola outbreak, and I know that the information I am receiving from official sources of science communication is a messy tangle of facts and values, I may prioritize the values (if they are sufficiently important to me). There are anthropological reports from the Ebola case, and from West Africa more generally, that suggest the values were very important in this case.
To see the importance of values in the Ebola case, consider just one issue that was both the subject of rumor and a site of severe resistance: the treatment of bodies. Rumors had it that families were unable to receive the bodies of their loved ones because the bodies were being stolen for the international organ markets (Fairhead, 2016, p. 10). Not being able to properly bury loved ones was a serious problem on value grounds, linked to beliefs about the fate of the deceased if they did not have a proper burial, and also to beliefs about what would happen to the community if proper burials were not performed—failed crops would be one such consequence (Wiredu, 2010; Fairhead, 2016, p. 13). Note that there were good fact-based grounds for not returning the bodies: bodies remain extremely infectious for 3 days after death, and the large funerals typical in the region were sites of infection for many (Fairhead, 2016, p. 8). Burials were a substantial point of resistance. Even Sierra Leone, which is taken to have been relatively amenable to Ebola interventions, had cases of bodies being buried in secret without the official authorities being notified of an Ebola death, or of bodies being washed before the official burial teams arrived (Wilkinson and Fairhead, 2017, p. 23). In more extreme cases, there were reports of ambulances and medical teams being stoned in their efforts to collect bodies for burial (Wilkinson and Fairhead, 2017, p. 21). In one early case in Guinea, a family refused to give contact information to authorities, which was essential to establishing a chain of transmission, until appropriate burial arrangements had been made (Fairhead, 2016, p. 15). Given how important the values around burials are in West Africa, it is plausible that value-considerations would have trumped fact-considerations regardless of the source of testimony.
However, that one may take values very seriously in an all-things-considered judgment about what to do, perhaps even prioritizing them, doesn't mean that individuals discard the facts entirely, at least not always. The example of the Jehovah's Witness is meant only to show that sometimes people do prioritize values. And perhaps some in the Ebola crisis prioritized values too—perhaps some individuals received true official information about the risks of burial, accepted it, and decided that the value of a traditional funeral service was more important to them. In other cases, however, we see individuals grappling with the difficulty of doing both simultaneously.
One example of this is a recent study of decisions about whether to vaccinate amongst parents who are vaccine-hesitant (Peretti-Watel et al., 2019). This study shows both the reliance mothers (mothers were found to be the key decision makers on vaccination) have on their peer networks for information, and their concerns about getting the answer “right.” They weren't throwing facts to the wind in order to prioritize values. On the reliance on peers, the study reports:
But mothers did not decide alone: they all took advice from other women, including female relatives (mothers, sisters), female friends and neighbors, as well as mothers of their children's schoolmates. Other key information sources included physicians and the Internet (p. 1198).
Despite being reliant on peer networks for information, mothers were still worried about getting the answer “right.” This quote from a mother who did vaccinate her child, but was still concerned about the decision after the fact, makes this concern clear: “I think I made the right decision…I hope I'm not wrong” (p. 1197). She is worried about having made the right factual call, not the right value call.
Importantly, there are likely to be a range of actors with different fact/value priorities and different views about how to balance these. Some will accept official sources of science communication and act accordingly. Some will receive official science communication, accept its factual claims, but choose to act differently on value grounds. But there will also be others who are engaged in a more complicated balancing act between the facts and values—such as the mother who consults her neighbors about whether to vaccinate her child, but still worries about whether she has got it “right” even after the fact.
A final note for those who are squeamish about the thought of trusting your neighbor over the experts: the account I offer here is not intended to be prescriptive. Rather, it is intended as a reasonable reconstruction of events. In “Expert Testimony and Epistemological Free-Riding: The MMR Controversy,” John (2011) argues that, in certain cases, parents failing to vaccinate their children should be considered a form of epistemic free-riding. That is, if a parent is in a situation where they can't tell whether vaccines are harmful or not, it would be individually rational for them not to vaccinate their children. The idea is that they would protect their child from any risk of harm, while still benefitting from the herd immunity of everyone else who has taken on the risk and vaccinated their children. John is not arguing that it is a good thing to avoid vaccination; he simply provides an account of events that takes the agents seriously and doesn't fall back on dismissing them as obviously irrational. Similarly, I am not arguing that it is a good idea to secretly bury your family members after an Ebola death; I am simply providing an account that takes agents seriously as epistemic actors.
Conclusion
In this paper I have argued that rumors pose a challenge to effective science communication. This is especially clear in the case of the recent West African Ebola outbreak, where rumors drove resistance against interventions and stalled containment. There is also evidence that this is a more widespread phenomenon, as can be seen in cases of rumors about AIDS in South Africa and about vaccines in ultra-Orthodox Jewish communities in New York. Further, I have argued that sometimes trusting rumors over expert testimony makes sense. The production and communication of science is heavily value-laden, and the values cannot easily be disentangled from the facts to check that they are acceptable. This is particularly troublesome when there are reasons to be suspicious of the values. My neighbors' values may be more transparent and trustworthy to me than the experts', making trust in rumors reasonable, especially in situations where the values really matter, as they did in the Ebola outbreak. Rumors might not be excellent sources of literally true information about the world, but much of our ordinary testimony is similarly not literally true, and should be understood as doing other forms of communicative work.
Data Availability Statement
All datasets analyzed for this study are cited in the article/supplementary material.
Author Contributions
The author confirms being the sole contributor of this work and has approved it for publication.
Conflict of Interest
The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Footnotes
1. ^That is, a measure of a specific type of white blood cell that fights infection. When an HIV-positive person's CD4 count is lower than 200, they are diagnosed as having AIDS.
References
Abramowitz, S., McKune, S., Fallah, M., Monger, J., Tehoungue, K., and Omiduan, P. (2017). The opposite of denial: social learning at the onset of the Ebola emergency in Liberia. J. Health Commun. 22, 59–65. doi: 10.1080/10810730.2016.1209599
Adler, J. (2017). “Epistemological problems of testimony,” in The Stanford Encyclopedia of Philosophy, ed E. Zalta.
Ballantyne, N. (2019). Knowing our Limits. Oxford: Oxford University Press. doi: 10.1093/oso/9780190847289.001.0001
Bright, L. (2018). Du Bois' democratic defence of the value free ideal. Synthese 195, 2227–2245. doi: 10.1007/s11229-017-1333-z
Brown, M. (2018). Weaving value judgment into the tapestry of science. Philos. Theor. Pract. Biol. 10, 1–8. doi: 10.3998/ptpbio.16039257.0010.010
Chandler, C., Fairhead, J., Kelly, A., Leach, M., Martineau, F., Mokuwa, E., et al. (2015). Ebola: limitations of correcting misinformation. Lancet 385, 1275–1277. doi: 10.1016/S0140-6736(14)62382-5
Coady, C. A. J. (2006). “Pathologies of testimony,” in The Epistemology of Testimony, eds J. Lackey and E. Sosa (Oxford: Oxford University Press), 253–271. doi: 10.1093/acprof:oso/9780199276011.003.0012
Davis, E. (2017). Post-Truth: Why We have Reached Peak Bullshit and What We Can Do About It. London: Little Brown.
de Melo-Martin, I., and Intemann, K. (2012). Interpreting evidence: why values can matter as much as science. Perspect. Biol. Med. 55, 59–70. doi: 10.1353/pbm.2012.0007
de Melo-Martin, I., and Intemann, K. (2018). The Fight Against Doubt: How to Bridge the Gap Between Scientists and the Public. Oxford: Oxford University Press. doi: 10.1093/oso/9780190869229.001.0001
Desclaux, A., Diop, M., and Doyon, S. (2017). “Fear and containment: contact follow-up perceptions and social effects in Senegal and Guinea,” in The Politics of Fear: Médecins Sans Frontières and the West African Ebola Epidemic, eds M. Hofman and S. Au (New York, NY: Oxford University Press), 209–234. doi: 10.1093/acprof:oso/9780190624477.003.0012
Douglas, H. (2009). Science, Policy, and the Value-Free Ideal. Pittsburgh, PA: University of Pittsburgh Press. doi: 10.2307/j.ctt6wrc78
Elliott, K. (2017). A Tapestry of Values: An Introduction to Values in Science. Oxford: Oxford University Press. doi: 10.1093/acprof:oso/9780190260804.001.0001
Epstein, S. (1996). Impure Science: AIDS, Activism, and the Politics of Knowledge. Berkeley; Los Angeles, CA: University of California Press.
Fairhead, J. (2016). Understanding social resistance to the Ebola response in the forest region of the Republic of Guinea: an anthropological perspective. Afr. Stud. Rev. 59, 7–31. doi: 10.1017/asr.2016.87
Gelfert, A. (2013). Coverage-reliability, epistemic dependence, and the problem of rumour-based belief. Philosophia 41, 763–786. doi: 10.1007/s11406-012-9408-z
Goldberg, S. (2010). Relying on Others: An Essay in Epistemology. Oxford: Oxford University Press. doi: 10.1093/acprof:oso/9780199593248.001.0001
Goldenberg, M. (2016). Public misunderstanding of science: reframing the problem of vaccine hesitancy. Perspect. Sci. 24, 552–581. doi: 10.1162/POSC_a_00223
Goldman, A. (2011). Experts: which ones should you trust? Philos. Phenomenol. Res. 63, 85–110. doi: 10.1111/j.1933-1592.2001.tb00093.x
Halsey, E. (2016). An outbreak of fearsome photos and headlines: Ebola and local newspapers in West Africa. Am. J. Trop. Med. Hyg. 95, 988–992. doi: 10.4269/ajtmh.16-0245
Hofman, M., and Au, S. (2017). “Introduction,” in The Politics of Fear: Médecins Sans Frontières and the West African Ebola Epidemic, eds M. Hofman and S. Au (New York, NY: Oxford University Press), 15–16. doi: 10.1093/acprof:oso/9780190624477.001.0001
John, S. (2011). Expert testimony and epistemological free-riding: the MMR controversy. Philos. Q. 61, 496–517. doi: 10.1111/j.1467-9213.2010.687.x
Jones, W. (2002). Dissident versus loyalist: which scientists should we trust? J. Val. Inquiry 36, 511–520. doi: 10.1023/A:1021945707032
Kahan, D. (2016). “The politically motivated reasoning paradigm, part 1: what politically motivated reasoning is and how to measure it,” in Emerging Trends in the Social and Behavioural Sciences, eds R. Scott and S. Kosslyn (Wiley), 1–16. doi: 10.1002/9781118900772.etrds0417
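Kitcher, P. (1990). The division of cognitive labor. J. Philos. 87, 5–22. doi: 10.2307/2026796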
Kitcher, P. (2011). Science in a Democratic Society. New York, NY: Prometheus Books. doi: 10.1163/9789401207355_003
Kitta, A., and Goldberg, D. (2017). The significance of folklore for vaccine policy: discarding the deficit model. Crit. Public Health 27, 506–514. doi: 10.1080/09581596.2016.1235259
Legault, A. (2017). Ebola Crisis Improving Science-Based Communication: Local Journalism in Emergency and Post-Outbreak Periods. World Federation of Science Journalists Report.
Longino, H. (1990). Science as Social Knowledge: Values and Objectivity in Scientific Enquiry. Princeton, NJ: Princeton University Press.
McClimans, L. (2017). Place of birth: ethics and evidence. Topoi 36, 531–538. doi: 10.1007/s11245-015-9353-0
McKenna, R. (2019). Irrelevant cultural influences on belief. J. Appl. Philos. 36, 755–768. doi: 10.1111/japp.12347
Peretti-Watel, P., Ward, J., Vergelys, C., Bocquier, A., Raude, J., and Verger, P. (2019). “I think I made the right decision…I hope I'm not wrong”: vaccine hesitancy, commitment and trust among parents of young children. Sociol. Health Illn. 41, 1192–1206. doi: 10.1111/1467-9566.12902
Reiss, J. (2017). Fact-value entanglements in positive economics. J. Econ. Methodol. 24, 134–149. doi: 10.1080/1350178X.2017.1309749
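Richards, P. (2016). Ebola: How a People's Science Helped End an Epidemic. London: Zed Books.
Rudner, R. (1953). The scientist qua scientist makes value judgments. Philos. Sci. 20, 1–6. doi: 10.1086/287231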
Seckinelgin, H. (2020). People don't live on the care cascade: the life of the HIV care cascade as an international AIDS policy and its implications. Glob. Public Health 15, 321–333. doi: 10.1080/17441692.2019.1673784
Steele, K. (2012). The scientist qua policy maker makes value judgments. Philos. Sci. 79, 893–904. doi: 10.1086/667842
Steinberg, J. (2016). Re-examining the early years of anti-retroviral treatment in South Africa: a taste for medicine. Afr. Affairs 116, 60–79. doi: 10.1093/afraf/adw026
Walsh, S., and Johnson, O. (2018). Getting to Zero: A Doctor and a Diplomat on the Ebola Frontline. London: Zed Books.
Wilkinson, A., and Fairhead, J. (2017). Comparison of social resistance to Ebola response in Sierra Leone and Guinea suggests explanations lie in political configurations not culture. Crit. Public Health 27, 14–27. doi: 10.1080/09581596.2016.1252034
Wiredu, K. (2010). “Death and the afterlife in African culture,” in Person and Community: Ghanaian Philosophical Studies, Vol. 1, eds K. Wiredu and K. Gyekye (Washington, DC: Council for Research in Values and Philosophy), 137–153.
Keywords: rumors, Ebola, science communication, values, social epistemology
Citation: Furman K (2020) On Trusting Neighbors More Than Experts: An Ebola Case Study. Front. Commun. 5:23. doi: 10.3389/fcomm.2020.00023
Received: 05 October 2019; Accepted: 23 March 2020;
Published: 16 April 2020.
Edited by: Erin Jade Nash, University of New South Wales, Australia
Reviewed by: Inmaculada De Melo-Martin, Cornell University, United States; Stephen John, University of Cambridge, United Kingdom
Copyright © 2020 Furman. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Katherine Furman, katherine.furman@liverpool.ac.uk