CONCEPTUAL ANALYSIS article

Front. Commun., 14 April 2021
Sec. Psychology of Language
This article is part of the Research Topic Language, Cognition, and the Manipulated Brain: Theoretical and Experimental Perspectives on Manipulative Processes in Language Comprehension

Truthfully Misleading: Truth, Informativity, and Manipulation in Linguistic Communication

Anne Reboul

  • Laboratory of Cognitive Psychology, CNRS and Aix-Marseille University, Marseille, France

Linguistic communication is geared toward the exchange of information, i.e., changing the addressee's world views. In other words, persuasion is the goal of speakers and the force of the speaker's commitment as indicated in the utterance is an important factor in persuasion. Other things being equal, the stronger the speaker's commitment, the easier the persuasion. However, if deception is detected, the stronger the speaker's commitment, the harsher the punishment, i.e., the damage to his or her reputation. One way for cheaters to avoid detection and/or to mitigate punishment is to downplay their commitment to what they mean through the utterance by making its content less informative, i.e., by producing underinformative utterances. Underinformativity is also a powerful way of triggering context-dependent and inference-based interpretation that goes beyond what is said. This allows speakers to indirectly communicate false content while producing an utterance that is literally true. This phenomenon of truthfully misleading is the topic of the present paper. As will be seen, it allows speakers to leave part of the responsibility for the false content to their hearers, with the triple effect that they can claim to have been misunderstood (plausible denial), claim that what they said was literally true, and explain the underinformativity of the utterance through ignorance.

Introduction

For rather obvious reasons, given the levels of false information peddled by heads of state as well as advertisers, communicating false information (dubbed “fake news”) has been very much in the media. So much so, indeed, that some linguists have begun to take an interest, looking at the linguistic forms it assumes (see e.g., Lakoff, 2004; Lakoff and Wehling, 2012 on political discourse, and Sedivy and Carlson, 2011, on advertisers' discourse). On the opposite side, so to speak, one finds books written by political advertisers on how politicians should choose their words to make the public accept and/or support decisions that are detrimental to them (see e.g., Luntz, 2007).

Beyond politics and advertising, however, the problem of the reliability of others' reports goes much deeper. Indeed, most human knowledge and beliefs are acquired second-hand, through others' reports (Lackey, 2008). As pithily put by Solan and Tiersma (2005, 2): “Even beliefs as central to our identity as those regarding our own name, where and when we were born, and our family history were acquired at second-hand, from others.” This huge proportion of information acquired through linguistic communication naturally raises the question of how reliable the process is, a question that is central in the comparatively recent philosophical field of the epistemology of testimony (see, notably, Lackey and Sosa, 2006; Lackey, 2008; Shieber, 2015).

While there are different views about when knowledge can and cannot be acquired through testimony (roughly others' reports), one of the most interesting is Lackey's (2008) dualist view, according to which the speaker and the hearer share responsibility in the process, “the former through the reliability of her statement and the latter for her positive reasons” (Lackey, 2008, 2) for accepting the statement as true. This is echoed by the recent development of the notion of epistemic vigilance proposed by Sperber et al. (2010). One way of making psychological sense of Lackey's term, “positive reasons,” is to consider that when the hearer has put the statement concerned through the mechanisms of epistemic vigilance without detecting any reason for doubt, she is entitled to consider it reliable. Additionally, as pointed out by Lachmann et al. (2001), one way of ensuring honesty in communication is to make dishonesty costly, and, in human communication, the cost is to the cheater's reputation (cf. the notion of economy of esteem, developed by Brennan and Pettit, 2004), in the form of a diminution of the hearer's trust in him or her. Thus, epistemic vigilance has a double function: (1) Culling false information; (2) Acting as a deterrent to would-be cheaters.

The basic idea, then, is that, to preserve one's reputation as a reliable speaker, one shouldn't be caught in a lie. Obviously, that can be done by not lying, i.e., by being sincere. But, equally, it might be done by avoiding detection even though one is communicating false information with the intention of deceiving one's hearer. In fact, one generally recognized possibility for communicating false information while avoiding being detected in a lie is to mislead, rather than to lie stricto sensu. This is the topic I want to pursue here: how some forms of misleading allow speakers to communicate unreliable or false information while eschewing detection as liars or as incompetent speakers.

Communicating False Information: Manipulating, Lying, Misleading and Eschewing Epistemic Vigilance

The main goal of this paper is to explain how communicating false information through implicature, by being less informative than one could be, allows one to eschew epistemic vigilance and the consequences of being detected in a lie. As a preliminary, I would like to begin with a few conceptual clarifications. Then I will introduce the notion of epistemic vigilance and the different forms of linguistic deception, before turning to the notion of truthfully misleading and to the advantage of practicing truthfully misleading for dishonest speakers.

Conceptual Clarifications

One central notion that has been used very freely and informally in the literature is manipulation, and it is central to the present Research Topic. Manipulation is one among a number of notions—cooperation, collaboration, altruism, exploitation, free-riding—that belong to the social sphere. They can all be approached (in a spirit akin to the theory of evolution) in terms of cost and benefit. Cost and benefit, in turn, when considered in the context of linguistic communication, are cashed out relative to the speaker and the hearer. Very roughly, one can define a cooperative linguistic exchange as one in which both participants benefit, an altruistic linguistic exchange as one in which the speaker bears the cost while the hearer benefits, and an exploitative or free-riding linguistic exchange as one in which the speaker benefits while the hearer bears the cost. From that vantage point, one would then define manipulation in linguistic communication as occurring when the speaker benefits by inducing the hearer through communication to do something (including believe something) that the hearer wouldn't otherwise have done. It is important to note that manipulation, while it must benefit the speaker, need not be costly to the hearer. In other words, it need not be exploitative (for a more detailed discussion, see Reboul, 2017). It may be either cooperative or exploitative. I will be interested here in the exploitative side of manipulation, which also involves deception.
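
To make this taxonomy concrete, here is a minimal sketch in Python; the numeric encoding of costs and benefits and all names are illustrative assumptions, not part of the original analysis:

```python
# A toy formalization of the cost/benefit taxonomy of linguistic exchanges.
# Payoffs are signed numbers: positive = benefit, negative = cost.

def classify_exchange(speaker_payoff: float, hearer_payoff: float) -> str:
    """Classify a linguistic exchange by who benefits and who bears a cost."""
    if speaker_payoff > 0 and hearer_payoff > 0:
        return "cooperative"
    if speaker_payoff < 0 and hearer_payoff > 0:
        return "altruistic"
    if speaker_payoff > 0 and hearer_payoff < 0:
        return "exploitative (free-riding)"
    return "neutral/other"

# Manipulation cuts across these cells: it requires a speaker benefit obtained
# by inducing the hearer to do or believe something he otherwise wouldn't,
# but it may be cooperative or exploitative.
print(classify_exchange(1.0, 1.0))   # cooperative
print(classify_exchange(1.0, -1.0))  # exploitative (free-riding)
```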

An interesting point is how manipulation can be successful. Basically, humans resent being manipulated and one condition for the success of a manipulative act is that it should not be detected as manipulation. This is obvious when it is accompanied by deception, but it is true much more broadly. Given that my interest here is in exploitative and deceptive manipulation, detection and how to avoid it will play a central role in what follows.

Epistemic Vigilance

The notion of epistemic vigilance was introduced by Sperber et al. (2010) and has been developed in numerous works since, notably by Mercier (see e.g., Mercier and Sperber, 2017; Mercier, 2020). Sperber and his colleagues' approach was original at a time when the fast spread of rumors, occasionally of a conspiratorial nature, on social networks was seen as a sign of human gullibility, worsened by the basic errors in human reasoning evidenced in experimental psychology (see e.g., Kahneman and Tversky, 2000; Kahneman, 2013). One of Sperber and Mercier's innovations lay in turning the tables, linking the specificities of human reasoning to human communication. Their central idea was that human reasoning is geared toward the production and evaluation of reasons: those that one produces to persuade others and those that others produce to persuade one. In other words, human reasoning evolved for communicating persuasively and for defending oneself against manipulative communication through epistemic vigilance. The link between apparent errors in reasoning and linguistic communication had already been investigated by Sperber et al. (1995), who showed the involvement of relevance considerations in the erroneous choices participants made in the Wason Selection Task. The present theory goes further in linking the specificities of human reasoning to persuasive strategies and epistemic vigilance.

Epistemic vigilance is, up to a point, a development of Relevance Theory (see Sperber and Wilson, 1995) and a further departure from the orthodoxy of the Gricean view of human communication (see Grice, 1989). Communication, rather than being cooperative in the sense that the speaker obeys the maxims, notably the Maxim of Quality (“Do not say what you believe to be false. Do not say that for which you lack adequate evidence”), can be strategic (see also Pinker, 2007; Pinker et al., 2008; Lee and Pinker, 2010; Asher and Lascarides, 2013). This raises the possibility of linguistic communication being used deceptively rather than honestly, and with that possibility comes the question of how the honesty of communication can be guaranteed to the audience. This question is not specific to human communication and has been the object of much work in the field of the evolution of communication. It has led to the Handicap Principle, proposed by Zahavi and Zahavi (1997): the hypothesis that the communicator bears a cost when communicating, this cost being a guarantee of her honesty to her audience. The notion has been developed over the years, and a central reformulation is that it is not necessarily the production of the message that is costly in itself. Rather, there is a differential cost between honest and dishonest communication: while honest communication can be cheap or even cost-free, dishonest communication incurs a potential cost. Regarding human linguistic communication, lying is cognitively more demanding than telling the truth (see e.g., Van Bockstaele et al., 2012). Additionally, as Lachmann et al. (2001) have argued, the price of dishonesty is a loss of reputation, leading to a loss of trust from the audience, if the deception is detected. Here, we will only be concerned with this second type of cost.

This is where Epistemic Vigilance comes in. As Sperber et al. (2010) have argued, in conversation, interlocutors will assess a message on two counts: its source and its content. Depending on what the hearer knows about the speaker, her trustworthiness and competence, he will be more or less ready to accept as true the information that she communicates. Regarding the content, the hearer will assess it relative to his own knowledge and beliefs, looking for contradiction and inconsistency. Clearly, if the speaker has a history and/or a reputation for dishonest communication or for incompetence regarding the topic of her message, she will not be able to convince her audience as easily as she would if she had a lily-white reputation for honesty and competence. There is slightly more, however: the very possibility of manipulation leads hearers to what Mercier and Sperber (2017) call the myside bias (otherwise known as the confirmation bias). This is basically the tendency to favor one's own opinions or one's own inferences over information supplied by others, a tendency that, as we will see, can be exploited by speakers to convince their hearers.

In other words, as a communicator, being seen as trustworthy, as both honest and competent, is crucial when one wants to convince one's audience to adopt one's reasons and discard their own. This puts a premium on a reputation for trustworthiness. So, let us turn to the types of deception that occur in linguistic communication and see what implications they have for detection.

Linguistic Deception: Lying and Misleading

Obviously, the central cases of linguistic deception are lies. There has been some controversy in the philosophical literature as to the precise definition of a lie, most of it centering on whether the definition should or shouldn't include an intention to deceive (see Carson, 2010, and, for a very good overview, Mahon, 2019). Here, I will take it for granted that a liar has an intention to deceive her hearer. What is more central for my present purpose is the difference between lying and misleading. In her book on the subject, Saul (2012) notes that there are two main differences between lying and misleading (see also Stokke, 2013 for a very similar view). The first is that “misleading” is a success term while “lying” is not. You cannot say that you have misled your hearer unless he ends up with a false belief as a result of your communicative act, but you can say that you have lied to your hearer even though he doesn't believe what you have told him. The second difference is that, while lying has to be deliberate, the result of an intention on the speaker's part, misleading can be involuntary. While saying something that is and that you believe to be true, you may unwittingly lead your hearer to infer and believe something false. While I will only be interested here in cases of misleading that are deliberate, involving an intention to deceive, this second difference between misleading and lying will be central in what follows. Finally, as noted by Mahon (2019), lying is usually considered from a moral perspective as worse than merely misleading. This last point has to do with the different ways in which the false information is communicated in lying and misleading, a point to which I turn now.

Lying and Assertion

While in lying, the false information is conveyed through an explicature of the utterance, in misleading, it is conveyed through indirect communication, being either presupposed or implicated (on the notion of an explicature, see Sperber and Wilson, 1995; Carston, 2002). In more conventional terms, lying has been considered as intimately linked to assertion. Indeed, Dummett (1981) holds that only creatures that can assert can lie, while misleading is obviously not subject to the same restriction. The reason why this is linked to a greater moral condemnation of lies is that assertion commits the speaker if not to the truth of what she is saying, at least to a belief in its truth. This leads to the idea that lying involves a dual intention to deceive (see Mahon, 2019): as to the truth of what the speaker asserts and as to the reality of the speaker's commitment to it. By contrast, the speaker who merely misleads is only guilty of the first deception, as she doesn't commit herself to anything by implicating, not even to the fact that she is implicating anything, and here there is a major difference between presupposition and implicature, to which we now turn.

Truthfully Misleading and the Difference Between Presupposing and Implicating

While both presupposing and implicating are ways of indirectly communicating (see Masia, 2017 for a sociological account) and, at least occasionally, of misleading, they work very differently. Let us look at two examples:

1. A: I have decided that John would be an excellent choice for the post of branch manager.

B: Good idea! Especially now that he has stopped drinking.

2. A: Has Peter finished his homework?

B: Well, he has done some exercises.

(1) is a paradigm example of presupposition. The clause “he has stopped drinking” in B's answer asserts that John does not drink now, but it presupposes that he used to drink, as both its negated and interrogative forms still convey this content (see, e.g., Strawson, 1950; Chierchia and McConnell-Ginet, 1990). (2) is an example of a scalar implicature. B's answer asserts that Peter did some of his homework (an assertion which will be true whether he has done only part of it or all of it), but it implicates that Peter has done only part of it. So, both (1) and (2) indirectly communicate something different from what they assert. But they do so in very different ways that, interestingly, allow both to eschew epistemic vigilance in equally different ways.

In presupposition, as the presupposed content is not asserted, it is not in the forefront of the hearer's mind, and, unless it gives rise to a major cognitive dissonance (the so-called “Hey, wait a minute” effect, see von Fintel, 2004), chances are that it is the main content of the utterance that will be assessed for contradiction or inconsistency, allowing the presupposed, not-at-issue content to sneak in, unremarked. However, this does not mean that the speaker does not commit herself to the truth of the presupposed content. First, if it turns out that this content is false, this impacts the truth-value of the main content (the traditional view, see Strawson, 1950, is that the falsity of the presupposition makes the main content neither true nor false). Second, the speaker cannot claim that she didn't intend to communicate the presupposed content.

The situation is very different with implicature. First, implicatures are cancellable (see Grice, 1989), as shown by the fact that there is no contradiction in uttering “Peter has done some exercises, and even all of them.” Second, the speaker can claim that she did not mean to communicate the implicature, which entails, if taken at face value, that she did not intend to mislead or deceive her hearer. Third, the speaker may point out, rightly, that what she asserted is indeed true. It is this phenomenon of truthfully misleading that will occupy us in the remainder of this paper.

Truthfully Misleading, Quantity Implicatures and Informativity

How is it that the speaker can deceive while saying something true? Let us go back to example (2) above. In response to A's question about whether Peter has done his homework, B answers that he has done some exercises, triggering the so-called scalar implicature that Peter has not done all of his exercises (and, hence, not all of his homework). However, B's answer does not literally say anything of the sort. Rather, it can be interpreted in either of two ways. On its semantic (literal or logical) interpretation, B's utterance says that Peter has done some and maybe all of his homework. By contrast, its pragmatic interpretation (the scalar implicature) excludes the possibility that Peter has done all of his exercises. In other words, it has the content that Peter has done only some of his exercises and, hence, only some of his homework. Scalar implicatures of this kind are the epitome of so-called Quantity implicatures.

In “Logic and conversation,” Grice (1989) proposed a general Cooperative Principle, articulated into a number of Maxims, to account for various non-literal forms of utterance interpretation. The idea is that the hearer takes it for granted that the speaker respects the maxims and that, when the literal content of the utterance seems to contradict this assumption, the hearer will access, through inferential processes, an interpretation consistent with respect of the maxims. Only two of the maxims are relevant here: the Maxim of Quality (see section Epistemic Vigilance) and, more specifically, the Maxim of Quantity. The latter consists of two submaxims: (1) Make your contribution as informative as is required (for the current purposes of the exchange); (2) Do not make your contribution more informative than is required. It is the first submaxim that is operative in the derivation of the scalar implicature for B's reply in (2). The central term in the submaxim is informative.

Information can be defined (informally), as proposed by Shannon and Weaver (1949), as a reduction of uncertainty. Relative to Grice's submaxim, there are two further complications. First, while information is a binary notion (something is or is not informative), informativity is a comparative and gradual notion; second, informativity is relative to the current purposes of the exchange, introducing a modicum of context-dependency to the notion. Both play an important role in the derivation of a Quantity implicature, such as that in (2). Additionally, the fact that informativity is a comparative notion suggests that alternatives to the utterance actually produced have to be considered, raising the question of how these alternatives are determined. Returning to example (2), the term some is crucial to the derivation of the implicature, as we will now see.
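
Before turning to scales, the Shannonian picture can be illustrated with a small worked example; the scenario and the numbers are mine, for illustration only:

```python
import math

# Information as reduction of uncertainty (Shannon and Weaver, 1949):
# an utterance is informative to the extent that it shrinks the set of
# possibilities the hearer entertains.

def entropy(n_possibilities: int) -> float:
    """Uncertainty, in bits, under a uniform distribution over n possibilities."""
    return math.log2(n_possibilities)

# Suppose Peter had 10 exercises; the hearer starts out uncertain among
# 11 states (he did 0, 1, ..., 10 of them).
prior = entropy(11)

# "He has done some exercises" (literal reading: at least one) leaves 10 states.
after_some = entropy(10)
# "He has done all the exercises" leaves exactly 1 state.
after_all = entropy(1)

print(f"'some' reduces uncertainty by {prior - after_some:.2f} bits")  # ~0.14
print(f"'all'  reduces uncertainty by {prior - after_all:.2f} bits")   # ~3.46
```

On this picture, all is more informative than some simply because it leaves fewer possibilities open, which is what makes informativity a comparative notion.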

The idea behind the very term scalar implicature is that some Quantity implicatures arise from the use of a term (here some) that belongs to a scale (here <all, some>) and is less informative than other terms in the same scale. The scale thus determines the set of alternatives that the speaker could have used but chose not to use. Informativity is defined on the basis of asymmetrical entailment. For the scale <all, some>, all asymmetrically entails some in the sense that any situation that verifies all will also verify some, while the reverse is not true: there are situations that verify some but falsify all. This has both semantic consequences (having to do with the lexical meaning of some) and pragmatic consequences (having to do with its use in conversation). Thus, while semantically some means some and maybe all, it can be pragmatically used to mean only some. This is because some, being asymmetrically entailed by all, is less informative than all (see Horn, 2004).

Thus, a speaker who uses some rather than all chooses to be more vague, less precise and less informative than she would have been had she chosen to use all. In Gricean terms, the choice of some flouts the first submaxim of Quantity (because some is underinformative relative to all). To restore the assumption that the speaker did respect the submaxim of Quantity, the hearer infers that the speaker could not, in the circumstances, have used all. While most analyses only explicitly invoke the Maxim of Quantity, it should be clear that the Maxim of Quality is also involved. Indeed, it is only if it is involved that the derivation of the implicature makes sense. Thus, supposing that the speaker is truthful, there are two reasons why she could have chosen to use the underinformative some rather than all: (1) she knows that Peter has not done all his exercises (the scalar interpretation); (2) she doesn't know whether he has done all vs. only some of his exercises (the so-called ignorance implicature). Note that while the scalar implicature restores informativity, this is not the case for the ignorance implicature. Nevertheless, both allow the hearer to believe that the speaker has respected both Quantity and Quality.
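
The entailment-based ordering and the two Gricean readings just described can be rendered in a short sketch; the finite model of situations and the speaker-knowledge flag are simplifying assumptions of mine:

```python
# A sketch of the Gricean derivation described above. Situations are
# (done, total) pairs; SOME and ALL get their standard truth conditions,
# so ALL asymmetrically entails SOME.

SITUATIONS = [(done, 10) for done in range(11)]  # Peter did 0..10 of 10 exercises

def some(s): return s[0] >= 1
def all_(s): return s[0] == s[1]

# Asymmetric entailment: every ALL-situation is a SOME-situation, not conversely.
assert all(some(s) for s in SITUATIONS if all_(s))
assert any(some(s) and not all_(s) for s in SITUATIONS)

def interpret(utterance: str, speaker_knows_facts: bool) -> str:
    """Hearer's reasoning about a speaker assumed to obey Quality and Quantity."""
    if utterance == "some":
        if speaker_knows_facts:
            # She could have said "all" but didn't: infer not-all (scalar reading).
            return "some but not all (scalar implicature)"
        # Otherwise her underinformativity is explained by ignorance.
        return "some, maybe all (ignorance implicature)"
    return utterance

print(interpret("some", speaker_knows_facts=True))
print(interpret("some", speaker_knows_facts=False))
```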

So, briefly, when she uses a Quantity implicature to convey misleading information, the speaker does so by producing an underinformative utterance which is true. She does not commit to the truth of either the scalar or the ignorance implicature, but only to the truth of the asserted content of the utterance. This has two advantages for the speaker: (1) it allows her to deny having had the intention to communicate the (false) scalar implicature (in other words, it allows her to deny having had an intention to deceive her hearer); (2) it allows her to invoke ignorance (rather than being accused of deceit) as an explanation for using an underinformative utterance. Thus, truthfully misleading would be a good choice for any speaker with deceptive intent.

There are however some drawbacks to using indirect communication. For one thing, it has been experimentally shown that commitment increases persuasiveness (see, e.g., Vullioud et al., 2017). The more the speaker appears committed to her message, the more easily she will convince her hearer to believe her. Thus, lying (through assertion) should be more successful in persuasion, all things being equal, than misleading, given the difference of commitment between assertion and implicature. However, there is a further advantage for misleading rather than lying communicators. Here we should go back to the myside bias (Mercier and Sperber, 2017; see above section Epistemic Vigilance). In a nutshell, the hearer will privilege information that he has accessed himself, either directly, for instance through perception, or indirectly through inference, over information offered by others. In implicatures, the speaker does not explicitly communicate the implicated content: It is the hearer that has to derive it through inference. Thus, the hearer is responsible for the implicated content in a way that he isn't for explicitly communicated content. This means that communicating through implicatures, while it may be in some ways less convincing than communicating directly, nevertheless at least partly makes up for it by making the hearer responsible for inferring the content. While the speaker does not commit herself to the implicated content, or at least not to the same degree that she would commit herself to explicitly communicated content, the responsibility of the hearer in deriving the content helps persuade him of its validity. So, it would seem that implicating holds its own in the deception game. Still, communicating through implicatures is not risk free: not only because the hearer may fail to be convinced by a content that the speaker is not committing herself to, but because, in addition, he may just fail to draw the implicature. So why do speakers choose to mislead rather than to produce outright lies when they intend to deceive their hearers?

Plausible Denial and the Ignorance Implicature

From 2007 on, Pinker (see Pinker, 2007; Pinker et al., 2008; Lee and Pinker, 2010) has developed a strategic theory of why people use implicatures. While his theory does not center on deception, he clearly departs from Gricean orthodoxy in noting from the outset that speakers' and hearers' interests may diverge (a position similar to that adopted by Sperber for his notion of epistemic vigilance; see Sperber et al., 2010 and section Epistemic Vigilance above) and that this is an incentive for speakers to try to manipulate their hearers. However, to eschew punishment for manipulation, they have to manipulate in such a way that they can deny having had a manipulative intention. Like Lachmann et al. (2001), Pinker and his colleagues adopt an evolutionary approach based on game theory. Pinker uses examples that, rather than being deceptive stricto sensu, are euphemisms used in attempts to bribe authority figures. The central example is that of a driver caught speeding by a policeman. The driver doesn't want to pay the fine. On the face of it, he has a binary choice (to bribe or not to bribe). If he chooses not to bribe, he will pay the fine. If he chooses to bribe, the success or failure of his attempt will depend on whether the policeman is honest. If the policeman is dishonest, the driver benefits in that he will only have to pay a small bribe rather than a hefty fine. If, on the other hand, the policeman is honest, the driver will bear the heavy cost of both paying the fine and going to jail. As Pinker points out, however, the driver is not limited to this binary choice, as he can attempt to bribe the policeman through implicit communication. A driver who attempts to bribe a policeman in such an implicit way will either benefit through not paying the fine or only bear the cost of the fine without the additional penalty of going to jail, as he can deny his intention to bribe the policeman.
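
The logic of the deniable bribe can be captured in a small expected-cost computation; the cost figures and the probability that the policeman is dishonest are illustrative assumptions of mine (only the ordering of outcomes matters):

```python
# An illustrative expected-cost computation for Pinker's speeding scenario.

FINE, BRIBE, JAIL = 100, 20, 10_000  # costs in arbitrary units

def expected_cost(strategy: str, p_dishonest: float) -> float:
    if strategy == "no bribe":
        return FINE
    if strategy == "explicit bribe":
        # Succeeds with a dishonest policeman; otherwise fine plus jail.
        return p_dishonest * BRIBE + (1 - p_dishonest) * (FINE + JAIL)
    if strategy == "implicit bribe":
        # Deniable: succeeds with a dishonest policeman; otherwise just the fine.
        return p_dishonest * BRIBE + (1 - p_dishonest) * FINE
    raise ValueError(strategy)

for s in ("no bribe", "explicit bribe", "implicit bribe"):
    print(f"{s:15s} -> {expected_cost(s, p_dishonest=0.5):.0f}")
# The implicit bribe weakly dominates: it can only improve on "no bribe"
# and never incurs the jail penalty of the explicit bribe.
```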

While there is no intention to deceive in Pinker's examples, they nevertheless share with examples of truthfully misleading the intention of the speaker not to bear the penalty for what she is trying to do. As Pinker points out, with implicit communication, the speaker can plausibly deny her intention. In other words, relative to truthful misleading, even if the implicitly communicated content is found to be false, the deceptive intention of the speaker can nevertheless escape detection, or at least leave room for doubt. The speaker can legitimately point out that this content is not part of what she said, and that what she said is, in fact, true. And she can add that she did not intend to communicate the implicit content at all. An obvious objection to plausible denial would seem to be that, in some cases at least, the implicit content is so obvious that the speaker cannot plausibly deny having had the intention to communicate it. This, however, is not quite correct, and, here, the so-called ignorance implicature comes in.

In a recent paper, Egre and Icard (2019) discuss the links between lying and vagueness. Indeed, one further way of looking at the respective degrees of informativity of some and all is to go through the notion of vagueness, as analyzed by Russell (1923): while Russell does not use the term informativity, it is nevertheless this notion that underlies his view of vagueness. Russell approaches vagueness in language in terms that are very similar to those used for asymmetric entailment, i.e., in terms of truth and truth-values. Relative to the same object, a sentence is more precise (or less vague) if it is true in a more limited number of circumstances than is another, less precise (or more vague) sentence. This is because the truth-conditions of a precise sentence are more demanding or more restrictive than those of a vague sentence. For instance, the sentence “The flag is blue and white” is more precise than the sentence “The flag is blue,” because fewer flags will verify the first than the second. And, obviously, all the flags that verify the first will also verify the second (a flag cannot be blue and white without being blue; a toy formalization of this point is sketched after example (3) below). Thus, vagueness is directly linked to informativity. Relative to the same topic, a vague utterance is less informative than its precise counterpart. This, as we have seen, allows speakers to truthfully mislead their audience. Beyond this, as noted by Egre and Icard, communicating vaguely allows speakers to plead their own ignorance to explain why they did not produce a more informative utterance. In other words, speakers can point out that they did not want to assert that for which they lack adequate evidence. Indeed, at least in some cases, the hearer himself can come up with the ignorance implicature to maintain the idea that the speaker was complying with both Quality and Quantity. Thus, it is because utterances giving rise to Quantity implicatures are underinformative and hence vague that they offer such opportunities to the manipulative speaker. As noted by Egre and Icard, scalar implicatures are far from the only underinformative or vague utterances. What is more, some Quantity implicatures lead only to ignorance implicatures. Here is an example:

3. A: Where does Anne live?

B: Somewhere in South Burgundy, I believe.

Contrary to what occurs, e.g., in (2), B's answer in (3) does not lead to an implicature excluding possibilities (and thus enhancing informativity); it only leads to an ignorance implicature. In other words, in some cases, underinformativity or vagueness cannot be remedied, leading only to the assumption that the speaker cannot be more informative because she doesn't have the relevant information. Note that, in such cases, the ignorance implicature itself may be false and that, when this occurs, the underinformative utterance is an instance of truthfully misleading.
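
As announced above, here is a toy rendering of Russell's truth-conditional picture of precision; the small finite space of one- and two-color flags is an illustrative assumption:

```python
from itertools import combinations

# Russell's view rendered model-theoretically: a sentence is more precise
# if fewer situations (here, flags) verify it.

COLORS = {"blue", "white", "red", "green"}
FLAGS = [set(c) for c in combinations(COLORS, 1)] + \
        [set(c) for c in combinations(COLORS, 2)]

blue_flags = [f for f in FLAGS if "blue" in f]
blue_and_white_flags = [f for f in FLAGS if {"blue", "white"} <= f]

print(len(blue_flags), len(blue_and_white_flags))  # 4 1: the conjunction is more precise
# Every verifier of the precise sentence verifies the vague one, not conversely.
assert all(f in blue_flags for f in blue_and_white_flags)
```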

The Scope of Informativity and Exhaustivity

Let us now return to cases in which underinformativity can lead to an implicature excluding some possibilities and thus restoring informativity, as scalars do. As said above (see section Truthfully Misleading, Quantity Implicatures and Informativity), scalar implicatures work by excluding alternatives to what the speaker actually said, leading to an interpretation that can be paraphrased with only. Chierchia (2013) proposed that the scalar interpretation comes from a covert exhaustivity operator semantically equivalent to only (I will not discuss here the controversy regarding how scalar implicatures are derived; see Zufferey et al., 2019 for an overview). Such exhaustivity interpretations go much further than scalar implicatures, encompassing focus and cleft sentences, among other constructions (see Falaus, 2013). Interestingly, such phenomena are taken to depend on alternatives, though, in such cases, these are contextually determined, largely through the Question Under Discussion (QUD, see Roberts, 2012). Without going into technical details, the idea is that the set of alternatives corresponds to possible answers to the question. To take an example:

4. A: Who came to Mary's birthday?

B: John and Paul.

If the set of alternatives includes Belinda and Samantha in addition to John and Paul, B's answer excludes Belinda and Samantha from the people who attended Mary's birthday, supposing B to be knowledgeable and sincere. Again, this can be used to truthfully mislead, saying something true (John and Paul came) to implicate something false (Belinda and Samantha did not). Again, the speaker can deny meaning this and/or can invoke ignorance.
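
A minimal exhaustification sketch makes this mechanism explicit; the function name and the particular set of QUD alternatives are assumptions for illustration (cf. Chierchia's covert only):

```python
# Exhaustification over QUD alternatives: whoever is mentioned came;
# every unmentioned alternative is excluded.

def exhaustify(mentioned: set, alternatives: set) -> dict:
    """Interpret an answer exhaustively relative to the QUD's alternatives."""
    return {person: person in mentioned for person in sorted(alternatives)}

qud_alternatives = {"John", "Paul", "Belinda", "Samantha"}
print(exhaustify({"John", "Paul"}, qud_alternatives))
# {'Belinda': False, 'John': True, 'Paul': True, 'Samantha': False}
# If Belinda in fact came, the exhaustified (implicated) content is false
# even though the asserted content ("John and Paul came") is true.
```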

But the phenomenon goes much further than a simple review of potential linguistic constructions might suggest, as the following authentic example, borrowed from Solan and Tiersma (2005, 213), shows:

5. Q: Do you have any bank accounts in Swiss banks, Mr. Bronson?

A: No, Sir.

Q: Have you ever?

A: The company had an account for about six months in Zurich.

While Bronson's second answer is perfectly true, it is less than candid, falsely implicating that he never had a personal Swiss bank account.

Thus, truthfully misleading using underinformative or otherwise vague utterances is a potentially widespread phenomenon, offering deceptive speakers a great tool to avoid detection and/or avoid punishment.

Discussion

To sum up, by allowing speakers to modulate the informativity of their utterances, language offers speakers a way to mislead their audiences without saying anything false. Rather, by saying something true but underinformative, speakers can induce their hearers to make inferences that are false. If the falsity of these inferences is detected, speakers can defend themselves by (1) pointing out that what they said was true and that they didn't intend to communicate the further content that the hearer took upon himself to infer (implicature cancellation and plausible denial); and (2) claiming that they weren't more informative because they did not have more information than what they communicated (ignorance). While hearers can be more easily persuaded by confident speakers who assert without hedging and thereby commit themselves to the truth of the asserted content, truthfully misleading can nevertheless benefit from the myside bias, given that the implicit content is something for which the hearer is partly responsible. On the other hand, the lack of commitment to the misleading content that goes with implicit communication is precisely what allows a misleading speaker to defend herself through plausible denial.

To begin with this last point, let us compare example (5) above (see section The Scope of Informativity and Exhaustivity) with example (6) below (borrowed from Egre and Icard, 2019, 354):

6. French Minister of Budget, Jerome Cahuzac, on December 5, 2012, in answer to a question in the French parliament:

“I do not have, Mr. Deputy, I never had, any account in a foreign country, neither now or previously.”

As was quickly discovered, Cahuzac held bank accounts in Switzerland, Singapore and the Isle of Man. By contrast with Bronson in (5) (who was being interrogated by a judge during his trial for tax evasion), Cahuzac couldn't claim to have had no intention of conveying the asserted information. Indeed, the different ways in which Cahuzac and Bronson chose to communicate the false information allowed the latter, Bronson, to avoid a further charge of perjury1. There are many other examples of politicians using truthful misleading to eschew perjury charges (the most notable case being President Clinton relative to his relationship with Monica Lewinsky; for a discussion, see Saul, 2012).

I would like to go beyond anecdotal examples, however revealing they may be, and turn to the, still meager, experimental evidence. There are two main papers that come to mind. The first, by Vullioud et al. (2017), looks at how persuasive commitment can be, comparing confident speakers to hedging speakers. It also compares the toll on trust when it turns out that the content is false for confident vs. hedging speakers. The second, by Mazzarella et al. (2018) compares the lack of trust resulting from the detection of a false content in three forms of utterances: assertion, presupposition and implicature.

The summary at the beginning of this section would suggest the following hypotheses:

H1. The more committed the speaker, the more persuasive she will be.

H2. The more committed the misinforming speaker, the more likely her punishment.

H3. Speakers who misinform through assertion and presupposition will suffer a greater loss of trust than speakers who mislead through implicatures.

Let's begin with Vullioud et al. (2017). Their study consists of four experiments, with scenarios in which the participant is asked to imagine herself in a situation where she has to rely on two unknown people (senders) to get the answer to an important question. One of them answers confidently (confident sender), while the other hedges his answer (unconfident sender). Participants are then asked which sender they would trust. Once they have given their choice, they are told which of the senders, if any, was right (in some cases, both senders are wrong). They are then asked which sender they would punish and which sender they would ask advice from. The different experiments varied the truth of the answers and the cost of a wrong choice to the participants. The results were clear: participants decided to trust the confident sender significantly more often than the unconfident sender. They also chose to punish the confident sender significantly more often than the unconfident sender when both had given wrong answers. And, finally, they decided to trust the unconfident sender again significantly more often than the confident sender when both had given a wrong answer.

While these experiments are not directly related to truthful misleading, they do show that commitment is persuasive, and that speakers are not held responsible to the same degree, in terms of punishment and decreased trust, when they are perceived as less committed.

The second study, by Mazzarella et al. (2018), is more directly linked to the present purpose. The study includes three experiments, following Vullioud et al.'s (2017) paradigm. The difference was that, rather than only confident vs. unconfident utterances, they also used implicatures and presuppositions. Their results were similar to those of Vullioud et al. regarding confident/unconfident senders. Regarding implicatures, the results (collected in the first and third experiments) showed that participants punished significantly more, and were significantly less ready to trust, speakers who asserted false information than speakers who merely implicated it. Finally, regarding presupposition (tested in experiments 1 and 3), the results patterned with those for assertions. In other words, whether the content is asserted or presupposed, the participants considered that the speaker was similarly committed to it.

This is not surprising: while the presupposition is not part of the main content (it is not-at-issue content), its truth-value impacts the truth-value of the main content. If the falsity of the presupposition entails a lack of truth-value, or the falsity, of the whole, then the speaker can hardly commit herself to the truth of what she says without committing herself to the truth of the presupposition. On the other hand, things are quite different regarding implicatures.

If we go back to example (2), as said above (see section Truthfully Misleading and the Difference Between Presupposing and Implicating), the truth-value of the asserted content does not depend on the truth-value of the implicated content. The implicated content can be false while the asserted content is still true. This is what makes truthful misleading possible. And it is because of this that the speaker's commitment to the truth of the asserted content does not commit her to the truth of the implicated content.

Thus, to conclude, truthfully misleading, because it is based on underinformativity or vagueness, allows cheaters to escape punishment by denying having had an intention to communicate the implicated content (plausible denial) and/or by pleading ignorance.

Author Contributions

The author confirms being the sole contributor of this work and has approved it for publication.

Conflict of Interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Footnotes

1. Note, though, that Bronson could only use plausible denial, but not invoke ignorance.

References

Asher, N., and Lascarides, A. (2013). Strategic conversation. Seman. Prag. 6, 1–62. doi: 10.3765/sp.6.2

Brennan, G., and Pettit, P. (2004). The Economy of Esteem: An Essay on Civil and Political Society. Oxford: Oxford University Press. doi: 10.1093/0199246483.001.0001

Carson, T. L. (2010). Lying and Deception: Theory and Practice. Oxford: Oxford University Press. doi: 10.1093/acprof:oso/9780199577415.001.0001

Carston, R. (2002). Thoughts and Utterances. The Pragmatics of Explicit Communication. Oxford: Blackwell. doi: 10.1002/9780470754603

Chierchia, G. (2013). Logic in Grammar: Polarity, Free Choice and Interpretation. Oxford: Oxford University Press. doi: 10.1093/acprof:oso/9780199697977.001.0001

Chierchia, G., and McConnell-Ginet, S. (1990). Meaning and Grammar: An Introduction to Semantics. Cambridge, MA: The MIT Press.

Dummett, M. (1981). Frege: Philosophy of Language, 2nd edition. London: Duckworth.

Egre, P., and Icard, B. (2019). “Lying and vagueness,” in The Oxford Handbook of Lying, ed. J. Meibauer (Oxford: Oxford University Press), 354–369. doi: 10.1093/oxfordhb/9780198736578.013.27

Falaus, A. (2013). Alternatives in Semantics. Basingstoke; New York, NY: Palgrave Macmillan. doi: 10.1057/9781137317247

Grice, H. P. (1989). Studies in the Way of Words. Cambridge, MA: Harvard University Press.

Horn, L. R. (2004). “Implicature,” in The Handbook of Pragmatics, eds. L. R. Horn and G. Ward (Oxford: Blackwell Publishing), 3–28. doi: 10.1111/b.9780631225485.2005.00003.x

Kahneman, D. (2013). Thinking, Fast and Slow. New York, NY: Farrar, Straus and Giroux.

Kahneman, D., and Tversky, A. (2000). Choices, Values and Frames. New York, NY; Cambridge: Cambridge University Press. doi: 10.1017/CBO9780511803475

Lachmann, M., Szamado, S., and Bergstrom, C. T. (2001). Cost and conflict in animal signals and human language. PNAS 98, 13189–13194. doi: 10.1073/pnas.231216498

Lackey, J. (2008). Learning from Words: Testimony as a Source of Knowledge. Oxford: Oxford University Press. doi: 10.1093/acprof:oso/9780199219162.001.0001

Lackey, J., and Sosa, E. (Eds). (2006). The Epistemology of Testimony. Oxford: Clarendon Press. doi: 10.1093/acprof:oso/9780199276011.001.0001

Lakoff, G. (2004). Don't Think of an Elephant! Know Your Values and Frame the Debate. White River Junction: Chelsea Green Publishing Company.

Lakoff, G., and Wehling, E. (2012). The Little Blue Book: The Essential Guide to Thinking and Talking Democratic. New York, NY: Free Press.

Lee, J. J., and Pinker, S. (2010). Rationales for indirect speech: the Theory of the Strategic Speaker. Psychol. Rev. 117, 785–807. doi: 10.1037/a0019688

Luntz, F. (2007). Words that Work: It's Not What You Say, It's What People Hear. New York, NY: Hyperion Books.

Mahon, J. E. (2019). “Contemporary approaches to the philosophy of lying,” in The Oxford Handbook of Lying, ed. J. Meibauer (Oxford: Oxford University Press), 32–55. doi: 10.1093/oxfordhb/9780198736578.013.3

Masia, V. (2017). A sociological account of indirect speech. Inter. Stud. 18, 142–160. doi: 10.1075/is.18.1.07mas

Mazzarella, D., Reinecke, R., Noveck, I., and Mercier, H. (2018). Saying, presupposing and implicating: how pragmatics modulates commitment. J. Prag. 133, 15–27. doi: 10.1016/j.pragma.2018.05.009

Mercier, H. (2020). Not Born Yesterday. A New Theory of Human Understanding. Princeton: Princeton University Press.

Mercier, H., and Sperber, D. (2017). The Enigma of Reason. Cambridge, MA: Harvard University Press.

Pinker, S. (2007). The evolutionary social psychology of off-record indirect speech acts. Inter. Prag. 4, 437–465. doi: 10.1515/IP.2007.023

Pinker, S., Nowak, M. A., and Lee, J. J. (2008). The logic of indirect speech. PNAS 105, 833–838. doi: 10.1073/pnas.0707192105

Reboul, A. (2017). Communication and Cognition in the Evolution of Language. Oxford: Oxford University Press. doi: 10.1093/acprof:oso/9780198747314.001.0001

Roberts, C. (2012). Information structure in discourse: Towards an integrated formal theory of pragmatics. Sem. Pragm. 5, 1–69. doi: 10.3765/sp.5.6

Russell, B. (1923). Vagueness. Aust. J. Psychol. Phil. 1, 84–92. doi: 10.1080/00048402308540623

Saul, J. (2012). Lying, Misleading and What is Said. Oxford: Oxford University Press. doi: 10.1093/acprof:oso/9780199603688.001.0001

Sedivy, J., and Carlson, G. (2011). Sold on Language: How Advertisers Talk to You and What This Says about You. Chichester: John Wiley & Sons. doi: 10.1002/9780470978146

Shannon, C. E., and Weaver, W. (1949). The Mathematical Theory of Communication. Urbana: University of Illinois Press.

Shieber, J. (2015). Testimony: A Philosophical Introduction. New York, NY; London: Routledge. doi: 10.4324/9781315697376

Solan, L. M., and Tiersma, P. M. (2005). Speaking of Crime: The Language of Criminal Justice. Chicago; London: University of Chicago Press. doi: 10.7208/chicago/9780226767871.001.0001

Sperber, D., Cara, F., and Girotto, V. (1995). Relevance theory explains the selection task. Cognition 57, 31–95. doi: 10.1016/0010-0277(95)00666-M

Sperber, D., Clement, F., Heintz, C., Mascaro, O., Mercier, H., Origgi, G., et al. (2010). Epistemic vigilance. Mind Lang. 25, 359–393. doi: 10.1111/j.1468-0017.2010.01394.x

Sperber, D., and Wilson, D. (1995). Relevance: Communication and Cognition. Oxford: Basil Blackwell.

Stokke, A. (2013). Lying, deceiving and misleading. Phil. Comp. 8, 348–359. doi: 10.1111/phc3.12022

Strawson, P. (1950). On referring. Mind 59, 320–344. doi: 10.1093/mind/LIX.235.320

Van Bockstaele, B., Verschuere, B., Moens, T., and Suchotski, A. (2012). Learning to lie: effects of practice on the cognitive cost of lying. Front. Psychol. 3:526. doi: 10.3389/fpsyg.2012.00526

von Fintel, K. (2004). “Would you believe it? The King of France is back! Presuppositions and truth-value intuitions,” in Descriptions and Beyond, eds. M. Reimer and A. Bezuidenhout (Oxford: Oxford University Press), 315–341.

Vullioud, C., Clement, F., Scott-Phillips, T., and Mercier, H. (2017). Confidence as an expression of commitment: why misplaced expressions of confidence backfire. Evol. Hum. Behav. 38, 9–17. doi: 10.1016/j.evolhumbehav.2016.06.002

Zahavi, A., and Zahavi, A. (1997). The Handicap Principle: A Missing Piece of Darwin's Puzzle. New York/Oxford: Oxford University Press.

Zufferey, S., Moeschler, J., and Reboul, A. (2019). Implicatures. Cambridge: Cambridge University Press. doi: 10.1017/9781316410875

Keywords: truthfully misleading, manipulation, assertion, presupposition, implicature, informativity, vagueness

Citation: Reboul A (2021) Truthfully Misleading: Truth, Informativity, and Manipulation in Linguistic Communication. Front. Commun. 6:646820. doi: 10.3389/fcomm.2021.646820

Received: 28 December 2020; Accepted: 18 March 2021;
Published: 14 April 2021.

Edited by:

Davide Garassino, University of Zurich, Switzerland

Reviewed by:

Paola Pietrandrea, Université de Lille, France
Carlo Penco, University of Genoa, Italy

Copyright © 2021 Reboul. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Anne Reboul, areboul50@gmail.com