In 1975, Grice introduced the notion of implicature, arguing that it is better to account for apparent lexical ambiguities through pragmatic processes than by multiplying lexical meanings (Modified Ockham's razor: Do not multiply meanings beyond necessity). His aim was to defend the idea that logical terms (and, or, if… then, quantifiers, etc.) do not have a meaning specific to their use in natural language. Rather, or so he argued, logical terms in natural language mean exactly what they mean in logic, and their lexical meaning can be read off their logical truth tables. What gives the illusion that they acquire a different meaning in natural language is that their use in conversation frequently gives rise to implicatures. The ensuing theoretical debate centred on how the pragmatic inferences necessary to access these implicatures are produced: neo-Griceans insisted on the specificity of scalar implicatures and on the importance of lexical scales; post-Griceans rejected the idea that there is anything specific about scalar implicatures and emphasized the role of general pragmatic processes.
For the past twenty years, experimental approaches have superseded purely theoretical ones, with mixed results. While paradigms using verification tasks on infelicitous sentences, with rate of pragmatic answers and reaction time as measures, have generally supported post-Gricean views, other paradigms have yielded less straightforward results. Moreover, recent research has shown that lexical scales may play a role in the process, in keeping with neo-Gricean views. Additionally, scales vary considerably in how strongly they trigger pragmatic interpretations, and no obvious explanation for this variability has been found. One possibility is that part of the variation is due to the lexicalization of the so-called pragmatic interpretation in some scales but not in others, but this has not been tested.
If this is the case, several consequences follow. First, one might expect some cross-linguistic variation, notably among logical words (or, if… then, quantifiers, etc.). Second, new experimental paradigms must be devised to distinguish the cases where the so-called pragmatic meaning is lexicalized from those where it is not (as both cases may still give rise to the logical entailment and to the "pragmatic" interpretation). Finally, a return to previous paradigms, notably those that assessed the cognitive costs of pragmatic interpretations, is on the cards, in order to examine potential experimental artefacts.
We welcome contributions to the present Research Topic on the questions outlined above and, more generally, on the interpretation of scalar implicatures.