
ORIGINAL RESEARCH article

Front. Psychol., 18 July 2024
Sec. Psychology of Language

Quantifier processing and semantic flexibility in patients with aphasia

Birte Reißner1,2, Wiebke Grohmann1, Natalja Peiseler3, João Pinho1, Katja Hußmann1, Cornelius J. Werner1,4 and Stefan Heim1,2,5*
  • 1Department of Neurology, Medical Faculty, RWTH Aachen University, Aachen, Germany
  • 2Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
  • 3Department of Linguistics, Heinrich Heine University, Düsseldorf, Germany
  • 4Johanniter Hospital Stendal, Stendal, Germany
  • 5Institute of Neuroscience and Medicine (INM-1), Forschungszentrum Jülich GmbH, Jülich, Germany

Processing of quantifiers such as “many” and “few” relies on number knowledge, linguistic abilities, and working memory. Negative quantifiers (e.g., “few,” “less than half”) induce higher processing costs than their positive counterparts. Furthermore, the meaning of some quantifiers is flexible and thus adaptable. Importantly, in neurotypical individuals, changing the meaning of one quantifier also leads to a generalized change in meaning for its polar opposite (e.g., the change of the meaning of “many” leads to the change of that of “few”). Here, we extended this research to patients with fluent and non-fluent aphasia after stroke. In two experiments, participants heard sentences of the type “Many/few of the circles are yellow/blue,” each followed by a picture with different quantities of blue and yellow circles. The participants judged whether the sentence adequately described the picture. Each experiment consisted of three blocks: a baseline block to assess the participants’ criteria for both quantifiers, a training block to shift the criteria for “many,” and a test block, identical to the baseline, to capture any changes in quantifier semantics. In Experiment 1, the change of the meaning of “many” was induced by adaptation to small proportions (20–50%) of circles of the named color. In Experiment 2, explicit feedback was given after each response in the training block to reinforce rating proportions of 40% (or higher) as “many,” whereas 40% is normally rather rated as “few.” The objective was to determine whether people with fluent or non-fluent aphasia were able to process quantifiers appropriately and whether generalized semantic flexibility was present after brain damage. Sixteen out of 21 patients were able to perform the task. People with fluent aphasia showed the expected polarity effect in the reaction times and shifted their criteria for “many” with generalization to the untrained quantifier “few.” This effect, however, was only obtained after explicit feedback (Experiment 2) but not by mere adaptation (Experiment 1). In contrast, people with non-fluent aphasia did not change the quantifier semantics in either experiment. This study provides new insights into quantifier processing and semantic flexibility in people with aphasia and into the general underlying processing mechanisms.

Introduction

Quantifier processing

Quantifiers like “some,” “many,” “few,” or “more than half” are words that describe quantities or proportions of sets and/or their relations. Having probably emerged in a process of cultural evolution (cf. Carcassi et al., 2021), they are a natural part of our everyday language and thinking. “I have worked for many hours today” or “I drank only a few cups of coffee” are examples of the numerous situations in which quantifiers are used. Quantifier processing requires number knowledge to assess quantities and their relations (Clark and Grossman, 2007; Ash et al., 2016) and language to grasp quantifiers linguistically and decode them semantically (Heim et al., 2012; Deschamps et al., 2015). Moreover, the semantic evaluation of quantifier statements requires working memory capacity (Caspari et al., 1998; Caplan and Waters, 1999, 2001; Wright and Shisler, 2005; Potagas et al., 2011; Mayer and Murray, 2012; Wright and Fergadiotis, 2012) for linking verbal to visuospatial and executive information (Baddeley, 2000, 2003). Previous studies indicated that quantifier processing differs depending on polarity, i.e., whether the quantifiers are positive or negative (Deschamps et al., 2015; Agmon et al., 2021; Grodzinsky et al., 2021): as indicated by longer reaction times (Agmon et al., 2019), processing of negative quantifiers (like “few,” “less,” which are monotone decreasing; cf. Barwise and Cooper, 1988) seems to be cognitively more demanding than processing of their positive counterparts (like “many,” “more,” which are monotone increasing) due to the additional negation they contain (“less than half” = “not more than half”). Among the word category of quantifiers, several sub-groups can be distinguished, e.g., cardinal (e.g., “five”; “at least seven”), majority (e.g., “most”), Aristotelean/logical (e.g., “all,” “some”), or parity quantifiers (e.g., “an odd number”), but also proportional quantifiers (e.g., “many”) (cf. McMillan et al., 2005; Clark and Grossman, 2007; Shikhare et al., 2015). Some types of quantifiers (e.g., majority quantifiers) are inherently more complex than either cardinal or Aristotelean quantifiers in regard to working memory demands because quantities must be counted or estimated and then memorized for comparison (Clark and Grossman, 2007).

Interestingly, the categorization of the quantifiers “many” and “few” (used in the present study) is not unequivocal. Assignment to the group of cardinal, proportional, but also majority quantifiers seems possible, depending on the context in which they are used (Bayırlı, 2022; cf. Oaksford et al., 2002; Pezzelle et al., 2018, for a proportional use of “few” when contrasted to a larger set of other quantifiers; but, e.g., Heim et al., 2015, for a majority-type of use when only “few” vs. “many” were used as quasi-polar opposites). One important characteristic of “many” and “few” is their semantic vagueness (Feiman and Snedeker, 2016; Pezzelle and Fernández, 2023). Barwise and Cooper (1988), borrowing the term from Milsark (1977; cited after Barwise and Cooper, 1988), distinguish them as “weak” quantifiers from “strong” quantifiers such as “all,” “every,” or “most.” In the context of the Generalized Quantifier Theory (Barwise and Cooper, 1988; von Fintel and Keenan, 2018; see their discussion on pp. 189–190), the weak proportional quantifiers “many” and “few” appear to violate the conservativity constraint, i.e., they may in some contexts be understood in their reverse (anti-proportional) meaning, thus resulting in a semantic variability or vagueness (see explanation and discussion in Bayırlı, 2022, see also Keenan and Stavi, 1986; Keenan, 2006; von Fintel and Keenan, 2018; Zuber and Keenan, 2019).

One aspect related to the “vagueness” of the quantifiers “few” and “many” is that their use is inherently dependent on variable internal criteria (Schöller and Franke, 2016, 2017; Heim et al., 2020b). For example, “many” cookies could mean three for one (full) person and 10 for another (hungry) one. These internal criteria are subjective and therefore depend on the individual (Ramotowska et al., 2023), but also on the object or subject to which a quantifier refers. This inter-subject variability is further extended by contextual variation: the meaning of a quantifier varies depending on the context in which it is used (Schöller and Franke, 2017). For example, “Few people celebrated Tom’s birthday” could mean five, whereas “Few people attended the World Cup” could refer to thousands. As previous experiments with neurotypical individuals show, these internal criteria for the preference of one quantifier over another can be changed by different learning contexts such as priming (Feiman and Snedeker, 2016), explicit reinforcement involving feedback (Heim et al., 2015, 2016, 2020a), adaptation (Helson, 1948; Heim et al., 2020b), or semantic alignment in active linguistic interaction (Pezzelle and Fernández, 2023).

The theoretical basis for this change of criteria was first formulated in the “Adaptation Level Theory” by Helson (1948). He described a frame of reference for the sensory-perceptive domain against which stimuli, such as the brightness of light or weights, are evaluated. Helson (1948) noted that exposure to a certain stimulus intensity leads to habituation and thereby shifts the frame of reference. Heim et al. (2020b) transferred this principle to the linguistic domain, i.e., to the words for quantities instead of the quantities themselves. They demonstrated that the internal criteria for quantifiers, which depend on our inner frame of reference, can also be altered by habituation. In that study, participants were shown pictures with different quantities of blue and yellow circles combined with a sentence containing “few” or “many.” Participants then had to judge whether the sentence correctly described the picture. By limiting the stimulus range, i.e., only showing smaller proportions of the target color, the criteria for the quantifiers were successfully shifted. Even though only one quantifier was trained to be accepted at lower proportions, the evaluation of the other, untrained quantifier changed as well. This illustrates that a semantic shift affects the entire frame of reference and thus also changes the criteria of the other quantifier. A change of meaning was also successfully induced in several previous experiments involving feedback, i.e., explicit reinforcement instead of adaptation (Heim et al., 2015, 2016, 2020a). Again, quantifier semantics were modified and adapted to different ranges of proportions. In the present study, we complement and extend these investigations by examining a new group of participants: patients with aphasia.

Quantifier processing and related cognitive-linguistic functions in people with aphasia

A stroke can cause aphasia, i.e., damage to the language system. The symptoms of aphasia after stroke vary widely and can affect all aspects of language, from speech production and comprehension to reading and writing, in any combination (Behrns et al., 2010; Fridriksson et al., 2018; Le and Lui, 2022). The localization, size, and form of underlying lesions are correspondingly diverse, and the impairments and language fluency differ widely depending on the lesion (Turken and Dronkers, 2011; Ardila et al., 2016; Døli et al., 2021). Since the left inferior frontal cortex, and in particular area 45 in Broca’s region, is involved in accessing and processing semantic representations of quantifiers (e.g., McMillan et al., 2005; Heim et al., 2012, 2016; Ash et al., 2016; for the role of the left insula located medially to Broca’s region, cf. Grodzinsky et al., 2020), damage to the left inferior frontal cortex impairs semantic evaluation (e.g., McMillan et al., 2006; Morgan et al., 2011). It can therefore be assumed that people with non-fluent aphasia after stroke, which is usually associated with damage to the left inferior frontal cortex (Benson, 1967; Bonilha and Fridriksson, 2009; Kasselimis et al., 2015; Yourganov et al., 2015; Biesbroek et al., 2016, 2021), exhibit pronounced difficulties in tasks testing semantic flexibility in quantifier processing. The choice of participants, i.e., people with different types of post-stroke aphasia with different underlying lesions, could therefore provide interesting insights into the processing of quantifiers. In the next paragraph, we will give a short overview of the implications of different lesion sites for differential impairments of cognitive-linguistic functions.

Since quantifiers are words and must be assessed in the context of sentences, their processing is based on the linguistic system. This includes phonological, semantic, and syntactic processing as well as executive functions such as working memory (Mirman and Thye, 2018; Garraffa and Fyndanis, 2020; Lwi et al., 2021; Akkad et al., 2023). The experiments in this study involve the evaluation of sentences containing a quantifier, e.g., “Many of the circles are blue,” associated with a picture containing a certain number of blue and yellow circles. Successful completion of the task therefore requires the processes involved in sentence comprehension (i.e., phonetic and phonological analysis, global and local syntactic structure building, semantic expectation and comparison with incoming information, processes of integration and repair, and also working memory; for a complex neurocognitive model of sentence comprehension, cf., Friederici, 2002, 2017; passim). In addition, the linguistic processing of negative quantifiers involves implicit negation (cf. Grodzinsky et al., 2021; for the comparison to non-linguistic magnitude processing see Deschamps et al., 2015). Moreover, this particular paradigm also requires aspects of visual cognition and attention needed for the Estimation of the magnitude of the set of circles of the target color and its Comparison with the complement set, as well as for access to numerical knowledge along the mental number line (cf. Heim et al., 2012). Finally, for the adaptation process, semantic flexibility is required (please note that the exact nature of this flexibility and its relation to other aspects of, e.g., executive functions is still under investigation; for a discussion of the relevance of individual semantic features vs. comprehensive categories cf. Thompson-Schill, 2003). All these functions may be impaired in post-stroke aphasia, depending on lesion location (Mirman and Thye, 2018; for a comparison of anterior vs. posterior lesions, cf. Stockert and Saur, 2017; and Stockert et al., 2020).

Since the experiments consist of hundreds of identically structured trials (sentence-picture pairs), each trial places similar demands on phonological and syntactic processing. The variables that change are the quantifier (“few”/“many”) and the proportion of circles of the target color. This variation places greater demands on semantic processing in terms of a truth value judgment linking the sentence to the visual display of colored circles. In terms of lesion site, the left inferior frontal gyrus (IFG) is of particular interest, since it has been identified as a crucial area not only in semantic quantifier processing (McMillan et al., 2005; Heim et al., 2012, 2016; Ash et al., 2016; for the role of the left insula located medially to Broca’s region, cf. Grodzinsky et al., 2020) but also in syntactic and phonological processing (for reviews, see Hagoort, 2005; Hagoort and Indefrey, 2014). In contrast, the posterior part of the perisylvian language network [i.e., Wernicke’s region and the temporo-parietal junction area; for the (im)precision of the concept of “Wernicke’s area” in the literature see Mesulam et al., 2015] was shown to be primarily involved in the Estimation and Comparison phases (Heim et al., 2012). Since both the left and the right hemisphere contribute, a posterior lesion is probably less severe for quantifier processing than a left frontal lesion, for which the right homolog is less prepared to compensate (see, e.g., the review by Wilson et al., 2019). This, in turn, means we would expect greater deficits, i.e., poorer performance, when a lesion affects this frontal area, which is more likely affected in non-fluent aphasia (Benson, 1967; Bonilha and Fridriksson, 2009; Kasselimis et al., 2015; Yourganov et al., 2015; Biesbroek et al., 2016, 2021). Finally, in Broca’s region, several functionally distinct modules for semantic, syntactic, morphological, and phonological processing are located very close to each other (e.g., Hagoort, 2005; Hagoort and Indefrey, 2014; Friederici, 2017), so frontal lesions are usually associated with both lexical and morphosyntactic deficits (which characterize Broca’s aphasia). To what extent the processing of quantifiers, and in particular of negative quantifiers, involves or even critically relies on (implicit) syntactic operations is a matter of discussion (for a recent review and discussion, cf. Brasoveanu and Dotlacil, 2019). The non-linearity observed in behavioral data when parametrically increasing the number of negations in quantifier-containing sentences (Grodzinsky et al., 2021) might at least speak against linear-incremental syntactic complexity.

The questions arise of whether people with fluent and non-fluent aphasia are capable of performing the task in general and whether they show a significant semantic flexibility effect, i.e., a systematic shift of the internal criterion, as was observed in the previous studies (Heim et al., 2015, 2016, 2020a,b). Because of the considerations above, we furthermore distinguish between patients with fluent aphasia (PWFA) and patients with non-fluent aphasia (PWNFA). The hypothesis based on these considerations is that PWNFA demonstrate poorer performance when evaluating quantifiers and may also lack semantic flexibility, whereas PWFA may perform comparatively better.

In two experiments we examined semantic quantifier flexibility in people of both groups using the two quantifiers “many” and “few” with different manipulations of quantifier semantics, i.e., (implicit) adaptation (as in Heim et al., 2020b) and reinforcement learning (as in Heim et al., 2015, 2016, 2020a).

Methods

The experiments were assigned to the participants in a counterbalanced order, i.e., some participants started with Experiment 1, some with Experiment 2. They were performed at least three days apart, usually at intervals of one week. Both experiments were conducted in German and performed on a computer using the Presentation software, version 23.0. Audio was played over a JBL loudspeaker to ensure sufficient sound volume.

Both experiments were approved by the ethics committee of RWTH Aachen University (EK 391/21). Informed written consent was obtained from all participants. Capability to consent was verified by the supervising senior physician.

Participants

Twenty-one persons diagnosed with post-stroke aphasia participated in this study. All were patients at the Uniklinik RWTH Aachen (University Hospital Aachen). The study included five women and 16 men, aged 26–74 years (average: 56 years). They had received an average of 16 years of education (range 11–18 years). Fifteen patients reported being right-handed, four left-handed, and two ambidextrous. Regarding syndromes, four PWA showed global aphasia, one was diagnosed with Wernicke’s aphasia, three with Broca’s aphasia, six with amnestic aphasia, and three with residual aphasia. One patient was diagnosed with fluent aphasia with word-finding and word-processing disorders. In three cases, the aphasia was unclassifiable. Aphasia was post-acute in 11 cases and chronic in 10. In two persons, aphasia was caused by atypical left intracerebral hemorrhage, while ischemia was reported in two patients. In two others, there was left sinus vein thrombosis. Left cerebral infarction was described in 13 patients without precise etiologic differentiation between hemorrhage and ischemia. In two cases, dissection with consecutive left-sided infarction was diagnosed. Because all patients were no longer in the acute phase and had therefore not been diagnosed during the same hospital stay, no imaging data were available, and data on etiology had to be based on written reports, which differed largely in their level of detail. Unfortunately, most of the patients had no structural images of their brains taken previously (CT, MRI). For research purposes, the ethics approval did not cover additional brain scans. Moreover, the number of eligible participants would have been reduced significantly due to potential contraindications. For these reasons, the present study does not feature any anatomical images of the brain lesions but solely refers to the syndrome and observable state of fluency of the people with aphasia (PWA). Table 1 provides the clinical details of the participants.


Table 1. Overview of the persons initially included in the study [clinical syndrome and severity, and neuropsychological tests of working memory (Corsi Block Tapping Test: percentile; Non-Verbal Learning Test (NVLT): percentile) and executive function (Go/No-Go errors: percentile)].

Criteria for participation included legal age (18 years or older), German language skills at native-speaker level, time since stroke of at least 6 weeks, no other neurological disease independently impairing language (e.g., dementia or severe microangiopathy), and the ability to give informed consent. The ability to give informed consent was assessed by the supervising senior physician (author JP).

All patients were hospitalized for intensified aphasia therapy at the time of the study and participated during their free hours. Their therapy was not affected at any time by the participation in the study. To assess their general ability to perform the tasks, each patient was required to qualify for participation in the computer experiments by judging practice items in two rounds (please see below). Patients qualified for participation if they correctly evaluated at least five of eight practice items in both rounds (Figure 1). All those who evaluated fewer items correctly did not further participate in the computer experiments and are hereafter referred to as non-participants. Of 21 patients, 16 qualified for participation in the experiment while five did not. The final sample therefore consists of 16 people with aphasia. Of these, one person performed below average on the Corsi Block Tapping Test (visual working memory) and two others on the Non-Verbal Learning Test, but no participant consistently performed below average on both tests. Moreover, the Go/No-Go test of the TAP attention battery revealed no overall executive deficit for any of the participants (Table 1). One participant was only able to perform the first block in both experiments due to mental exhaustion. These data were only included in the analyses of Block 1, which means that data from 16 participants entered the calculations for Block 1 and data from 15 participants the calculations involving Blocks 1 and 3. Figure 2 gives an overview of the paradigms.


Figure 1. Schematic illustration of practice items in both test runs. First test run with a nonverbal version of the task (Left). PWA were presented with a printed version of a picture with blue and yellow circles with a mathematical expression underneath indicating that one color outweighs the other. The patients were instructed to evaluate whether the expression matched the picture or not. In preparation for the computer version of the experiment, a keyboard was also depicted. In the second test run (Right), a sentence containing a quantifier was spoken by the examiner. Following this, a picture was presented in printed form. The patient then had to decide whether the sentence adequately described the picture.


Figure 2. Exemplary trials of blocks 1 and 3 (A) and of block 2 (B) in both experiments based on Heim et al. (2012, 2015).

Experiment 1

The aim of Experiment 1 was to determine the internal criteria for the evaluation of “many” and “few” in PWA and whether the criterion for “many” could be shifted specifically toward smaller proportions by adaptation. Furthermore, we tested if this shift also extended to the untrained quantifier “few.”

Procedure

Prior to the computer experiments, all participants completed two test runs. For this purpose, selected stimuli (pictures with blue and yellow circles) from the experiments were shown in printed form and served as practice items. Each test run included eight items that had to be evaluated. See below for details on stimuli and procedure in the computer experiments. In the first practice round, pictures were paired with a non-verbal expression including a comparison sign, and in the second round with a quantifier-containing sentence spoken by the examiner (Figure 1). This way, non-verbal quantification skills could be assessed first, since linguistic interpretation was not yet required. If a patient was unable to evaluate the images, this step-by-step procedure allowed us to determine whether the failure was caused by a lack of linguistic ability. Only participants who evaluated more than half of the items (at least five) correctly in both the non-verbal and the verbal round, and thus performed above chance level, qualified for participation in the computer experiments. Patients who evaluated fewer items correctly did not participate further and are referred to as non-participants. Both practice rounds as well as the computer experiments contained a truth value judgment task (cf. Figure 2) adapted from Heim et al. (2012). It was previously used in an adaptation study with neurotypical participants (Heim et al., 2020b). The task required the evaluation of sentence-picture pairs. Every sentence contained the quantifier “many” or “few” (e.g., “Many of the circles are blue”) and was presented auditorily in combination with a black screen. Each sentence was followed by a visual stimulus, i.e., a picture with different proportions of blue and yellow circles. For exact details on time sequences, see Heim et al. (2020b). The stimulus consisted of a total of 50 monochrome circles of different diameters on a gray background. The proportion of circles of the named color varied across 20, 30, 40, 50, 60, 70, and 80%. In the stimuli shown as practice items, one color always outweighed the other (ratios 20/80, 30/70, 40/60); that is, no 50/50 picture was shown, for which no answer would have been unambiguously correct or incorrect. For each proportion, six different stimulus pictures existed to avoid facilitated recognition through repetition of the exact same image. These stimulus versions were presented in a pseudo-randomized order. Participants decided whether the previously heard sentence adequately described the depicted distribution of colored circles by pressing a response button on the computer keyboard (YES button marked with a round green sticker, NO with a red one). As in the study by Heim et al. (2015), the position of the response buttons was alternated between patients but remained the same for each patient across experiments. This was intended to minimize possible influence by effects that facilitate answering with a particular response side [for a discussion of the Spatial-Linguistic Association of Response Codes (SLARC) effect, see Abbondanza et al. (2021)]. In addition, all participants were instructed to respond with the left hand to avoid, as far as possible, bias in the results due to motor impairments/paresis resulting from the stroke. The experiment consisted of three blocks (Heim et al., 2020b) with a total of 392 trials and took approximately 1 h to complete. Participants started the computer experiment with six practice trials to become familiar with the task in the digital version.
The subsequent baseline block, comprising 112 trials, recorded the patients’ initial judgment behavior, allowing insights into their internal criteria for quantifier evaluation. This was followed by a training block with 168 trials. The task remained the same, but the stimulus range of the target color was limited to lower proportions (20–50%), and only the quantifier “many” was used. According to adaptation level theory (Helson, 1948) and previous findings (Heim et al., 2020b), habituation to the new stimulus range should cause a shift of the frame of reference, thereby shifting the internal criterion for “many” toward lower proportions. As a result, acceptance of lower proportions, e.g., 40% of the target color, as “many” was expected to increase. If generalization to the untrained quantifier occurs, a decrease in acceptance of “few” would also be anticipated. The third block (test block) was identical to the baseline block and recorded any change of quantifier semantics due to adaptation in the second block. Since both quantifiers were used again, differences in evaluation could be registered for the trained quantifier “many” as well as for the untrained quantifier “few.” Because the experiment was cognitively demanding, four 2-min breaks were scheduled to give time for rest.
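
To make the block structure concrete, the following minimal Python sketch generates a trial list with the properties described above (112 baseline trials, 168 training trials restricted to “many” and 20–50%, 112 test trials, 392 in total). It is an illustration only; the function and variable names are ours and do not come from the original Presentation scripts.

    import random

    PROPORTIONS = [20, 30, 40, 50, 60, 70, 80]   # % of circles in the named color
    VERSIONS_PER_PROPORTION = 6                   # six picture variants per proportion

    def make_block(quantifiers, proportions, n_trials):
        """Build a pseudo-randomized list of sentence-picture trials."""
        trials = []
        while len(trials) < n_trials:
            for q in quantifiers:
                for p in proportions:
                    trials.append({"quantifier": q,
                                   "proportion": p,
                                   "picture_version": random.randint(1, VERSIONS_PER_PROPORTION)})
        random.shuffle(trials)
        return trials[:n_trials]

    # Experiment 1 (adaptation): baseline and test use both quantifiers and the
    # full proportion range; the training block uses only "many" and 20-50%.
    baseline = make_block(["many", "few"], PROPORTIONS, n_trials=112)
    training = make_block(["many"], [20, 30, 40, 50], n_trials=168)
    test = make_block(["many", "few"], PROPORTIONS, n_trials=112)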

During the experiment, acceptability judgments of quantifiers and reaction time (RT) were measured. Participants were instructed to respond correctly and as quickly as possible.

Data analysis

For analysis purposes, participants were divided into two groups according to their profile of spontaneous speech ratings in the Aachener Aphasie Test (Aachen Aphasia Test, AAT; Huber et al., 1983), the gold standard and most widely used instrument for the diagnosis of aphasia in German-speaking countries (Huber et al., 1983; Wacker et al., 2002). Following the rationale by Lange et al. (2012), patients with a score of at least three in syntax and four in articulation were classified as people with fluent aphasia (PWFA), and all patients with scores below that as people with non-fluent aphasia (PWNFA). The statistical analyses were conducted with IBM SPSS 27.
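
The classification rule can be stated compactly; the following sketch is purely illustrative (the cut-offs are those described above, the function name is ours):

    def classify_fluency(syntax: int, articulation: int) -> str:
        """Classify a patient from the AAT spontaneous-speech ratings.

        Following Lange et al. (2012): a rating of at least 3 for syntax and
        at least 4 for articulation counts as fluent aphasia (PWFA);
        anything below these cut-offs counts as non-fluent aphasia (PWNFA).
        """
        return "PWFA" if syntax >= 3 and articulation >= 4 else "PWNFA"

    # For example, ratings of 4 (syntax) and 5 (articulation) yield "PWFA",
    # whereas 2 (syntax) and 5 (articulation) yield "PWNFA".
    assert classify_fluency(4, 5) == "PWFA"
    assert classify_fluency(2, 5) == "PWNFA"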

Reaction times

In order to test whether the polarity effect, i.e., longer reaction times for negative than for positive quantifiers, was also present in PWA, a linear mixed model (LMM) with “subject” as random factor and GROUP (PWFA/PWNFA), QUANTIFIER (“many”/“few”), BLOCK (Block 1/Block 3), and PROPORTION (20, 30, 40, 50, 60, 70, and 80%) as fixed factors was conducted. Since the participants had not explicitly been instructed to perform the task as a speed task, and since the RT data in the previous studies did not contribute to the understanding of semantic flexibility in quantifier processing, all other main effects and interactions are of no primary interest in this analysis.
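
The analyses were run in IBM SPSS 27; as a rough illustration of the model structure, an analogous random-intercept model could be specified in Python with statsmodels as sketched below (file and column names are assumptions, not part of the original analysis pipeline).

    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per trial; assumed columns: subject, group (PWFA/PWNFA),
    # quantifier (many/few), block (1/3), proportion (20-80), rt (ms).
    rt_data = pd.read_csv("experiment1_rt.csv")  # hypothetical file name

    # Random intercept per participant, fully crossed fixed factors,
    # mirroring the factor structure described in the text.
    model = smf.mixedlm(
        "rt ~ C(group) * C(quantifier) * C(block) * C(proportion)",
        data=rt_data,
        groups=rt_data["subject"],
    )
    result = model.fit()
    print(result.summary())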

Acceptability

For the purpose of this study, only the acceptability judgments of the PWA for the critical proportion “40%” were relevant. The trials featuring other proportions of circles of the named color only had the function of filler trials in the context of the potential adaptation of the participants’ responses. For this reason, the analysis of the acceptability judgments included a LMM with “subject” as random factor and GROUP (PWFA/PWNFA) and BLOCK (Block 1/Block 3) as fixed factors. Only responses for the proportion “40%” of circles of the named color were analyzed. The focus of this study was on the question whether the acceptability for the trained quantifier (“many”) changes significantly from Block 1 to Block 3, and whether this change also impacts acceptability ratings for the untrained quantifier “few.” Next, the directed (one-tailed) pair-wise comparisons of “Block 1 vs. Block 3” were calculated at proportion “40%” separately for each group and each quantifier after using the split-file command in SPSS.
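
For the directed Block 1 vs. Block 3 comparisons at the critical proportion, a conceptually similar computation can be sketched as follows (again illustrative only: the column names are assumptions, and a paired t-test on per-subject acceptance rates stands in for the LMM contrasts computed in SPSS).

    import pandas as pd
    from scipy import stats

    # One row per trial; assumed columns: subject, group, quantifier,
    # block (1 or 3), proportion, accepted (0/1).
    acc_data = pd.read_csv("experiment1_acceptability.csv")  # hypothetical file
    critical = acc_data[acc_data["proportion"] == 40]

    # Directed Block 1 vs. Block 3 comparison, separately per group and quantifier.
    for (group, quantifier), sub in critical.groupby(["group", "quantifier"]):
        rates = (sub.groupby(["subject", "block"])["accepted"]
                    .mean()
                    .unstack("block")
                    .dropna())
        t, p_two_sided = stats.ttest_rel(rates[1], rates[3])
        # One-tailed p-value, assuming the effect lies in the predicted direction.
        print(group, quantifier, "one-tailed p ~", p_two_sided / 2)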

Non-responses, i.e., all trials in which a participant did not respond in the given time, are reported but were not included in the analysis. Although non-responses also carry a certain informational value, it is possible that some or many of them were caused mainly by exhaustion of attention and concentration. Since the study was mostly concerned with the actual evaluation of quantifiers, i.e., their conscious acceptance and rejection, non-responses were excluded to avoid bias due to exhaustion.

Results: Experiment 1

Reaction times

The LMM yielded significant main effects for GROUP (F(1; 3,156) = 86.256; p < 0.001), QUANTIFIER (F(1; 3,156) = 14.169; p < 0.001), BLOCK (F(1; 3,156) = 15.729; p < 0.001), and PROPORTION (F(1; 3,156) = 9.509; p < 0.001). Out of the interaction terms, the following effects also reached significance: GROUP × QUANTIFIER (F(1; 3,156) = 9.278; p = 0.002) and QUANTIFIER × PROPORTION (F(1; 3,156) = 5.074; p < 0.001). All other effects were not significant. Table 2 reports the parameter estimates of the LMM. Figure 3 shows the RTs as a function of BLOCK, QUANTIFIER, and PROPORTION, for the full sample and separately for PWFA and PWNFA. With respect to the Polarity Effect in the RTs, PWFA had consistently higher RTs for “few” than for “many.” For PWNFA, the pattern was inconsistent. The QUANTIFIER × PROPORTION interaction was due to significant differences between the RTs for the two quantifiers at proportions 70 and 80% (both p < 0.001), while there were no differences at the other proportions (all p > 0.05 uncorrected).


Table 2. Parameter estimates of the LMM for RTs in Experiment 1.


Figure 3. Average reaction times of all participants in Experiment 1 (adaptation) for both quantifiers (“many”, “few”) related to each proportion of circles in the target color (in %), divided into groups: all participants, PWFA, and PWNFA. Reaction times are presented for blocks 1 and 3, i.e., before and after adaptation, to visualize changes over the course of the experiment.

Acceptability ratings

The Linear Mixed Model for the acceptability ratings at proportion 40% yielded significant effects for GROUP (F(1; 446) = 27.252; p < 0.001) and QUANTIFIER (F(1; 446) = 59.884; p < 0.001) and a significant interaction GROUP × QUANTIFIER (F(1; 446) = 25.749; p < 0.001). The main effect for, and interactions with, BLOCK failed to reach significance (BLOCK: F(1; 446) = 0.416; p = 0.519; BLOCK × QUANTIFIER: F(1; 446) = 0.001; p = 0.973; BLOCK × GROUP: F(1; 446) = 0.357; p = 0.551; BLOCK × QUANTIFIER × GROUP: F(1; 446) = 2.051; p = 0.153). The parameter estimates are reported in Table 3, the ratings per condition and block in Figure 4. Across the entire group of participants, there were 4.5% (271) non-responses.


Table 3. Parameter estimates for the LMM analysis for the acceptability ratings at proportion “40%” for Experiment 1.


Figure 4. Average acceptability of quantifiers in Experiment 1 (adaptation), divided into groups: all participants, PWFA, and PWNFA. (A) Illustration of acceptability of quantifiers (“many” = black lines, “few” = gray lines) at each proportion of circles in the target color (in %) in block 1 (dashed lines) and block 3 (solid lines). (B) Average acceptability judgments for the critical proportion of circles in the target color (40%), sorted by quantifier (“many” = black bars, “few” = gray bars) and block (block 1 = dashed bars, block 3 = solid bars).

Importantly, none of the planned directed linear contrasts for each group and quantifier reached (one-tailed) significance (PWFA, “many”: p = 0.127; PWFA, “few”: p = 0.155; PWNFA, “many”: p = 0.474; PWNFA, “few”: p = 0.172). Both groups showed significant differences for their rating of “few” (Block 1: p < 0.001; Block 3: p < 0.001) but not for “many” (Block 1: p = 0.624; Block 3: p = 0.734) at the critical proportion 40%.

For the sake of direct comparability with the previous studies using the same paradigms (Heim et al., 2015, 2016, 2020a,b), the classic ANOVA on aggregated data was also conducted and is reported in Appendix A2.

Discussion: Experiment 1

The purpose of this experiment was to investigate the general processing of quantifiers in PWA and their ability to change the meaning of the quantifier “many” through adaptation. The results show a significant difference in the acceptance of the two quantifiers and the speed with which they were evaluated. Participants responded comparatively faster to “many” and accepted it more often than “few.” There was no semantic shift due to adaptation in any group. It has been suggested that the processing of negation (even if implicit in a negative quantifier) takes longer, because it is more costly, i.e., cognitively more demanding (Just and Carpenter, 1971; Deschamps et al., 2015; Agmon et al., 2021; Grodzinsky et al., 2021). That this effect is found in PWA as well as in neurotypical individuals might indicate similar processing patterns in the patient group.

However, with respect to general accuracy (reflected in the acceptability judgments), it appears that the processing difficulty for the negative quantifier “few” in comparison to the positive quantifier “many” was more pronounced in PWNFA than in PWFA, as indicated by the significant GROUP × QUANTIFIER interaction at proportion 40% (Table 3). The corresponding graphs illustrating the acceptability judgments of PWNFA (Figure 4) reflect that this subgroup generally had more difficulties processing the negative quantifier “few”: While the “many”-curves approximate the expected course, the “few”-curves are inverted (in comparison to those of the PWFA and the neurotypical participants in the earlier studies), here following roughly the course of the “many”-curves. That is, “few” is accepted even more often when larger proportions are shown. This pattern would be consistent with the notion that the PWNFA processed “few” in the same way as “many,” i.e., using “many” as the default and failing to add the implicit negation (“few” = “not many”). The implications will be discussed later in more detail.

The adaptation manipulation in Block 2 failed to induce a change in the evaluation of “many” at the critical proportion 40%. No semantic shift occurred in any group regardless of speech fluency, neither for “many” nor for “few.” Consequently, the results of the PWA differ substantially from those of the neurotypical individuals in a previous adaptation study (Heim et al., 2020b), who showed a successful semantic shift that even included a generalization to the untrained quantifier. Adaptation processes have thus been proven to be effective in principle in neurotypical individuals. The question arises whether the absence of a semantic shift is due either to a fundamental lack of semantic flexibility in PWA or only to dysfunctional adaptation processes while semantic flexibility is principally preserved. If the latter were true, other learning methods such as direct reinforcement might be able to induce a shift. To answer this, we conducted Experiment 2, which involved feedback instead of adaptation.

Experiment 2: feedback paradigm

With this experiment, we further investigated the internal criteria for quantifier evaluation and semantic flexibility of PWA. The question here was whether feedback (as opposed to adaptation) can trigger a semantic shift and change the criterion for “many” in the direction of lower proportions.

Methods: Experiment 2

The stimuli originate again from Heim et al. (2012). The feedback paradigm was applied previously in several studies (Heim et al., 2015, 2016, 2020a). Although very similar in its structure, this experiment differs in some important respects from the adaptation experiment described above. The most striking difference is the method of manipulation used to shift the semantics of the quantifiers. Instead of limiting the stimulus range to target adaptation processes, the second block provided feedback, thereby using explicit reinforcement learning to change the internal criteria. Here, too, the aim was to change the evaluation of “many.” Each of the three blocks included 168 trials. After the initial baseline block (Block 1), feedback was displayed in the training block (Block 2) immediately after each participant response (cf. Figure 2). In case of positive feedback, points were awarded (+10 points), combined with a green arrow pointing upward and affirmative words in German (“Correct! Well done.”). Accordingly, negative feedback consisted of a loss of points (−10 points) with a red arrow pointing downward and a negative verbal statement (“Wrong. Keep trying.”). As in Experiment 1, only the quantifier “many” was used in the second block. Positive feedback was granted when a proportion of circles of 40% or higher was evaluated as “many,” i.e., when participants responded with YES to a stimulus with at least 40% of the target color. Negative feedback was shown when participants decided otherwise. Thus, participants were trained to accept “many” for any amount greater than or equal to 40%. According to previous studies (Heim et al., 2015, 2016, 2020a), this feedback should effectively cause a shift of the internal criterion for the trained quantifier (“many”) and, if generalization takes place, for the untrained quantifier “few” as well. For details on time sequences and feedback see Heim et al. (2015).
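
A minimal sketch of this reinforcement rule, under the reading described above (YES at proportions of 40% or more is reinforced, all other responses are penalized), might look as follows; the function name and return structure are ours, and the feedback messages are the English translations given in the text (the original wording was German):

    def training_feedback(proportion_target: int, response_yes: bool) -> dict:
        """Feedback rule of the Experiment 2 training block (illustrative sketch).

        Only "many" sentences are presented; responses consistent with treating
        proportions >= 40% of the target color as "many" earn +10 points and an
        upward green arrow, all other responses lose 10 points.
        """
        consistent = (proportion_target >= 40) == response_yes
        if consistent:
            return {"points": +10, "arrow": "up", "message": "Correct! Well done."}
        return {"points": -10, "arrow": "down", "message": "Wrong. Keep trying."}

    # Accepting "many" at 40% is reinforced; accepting it at 30% is penalized.
    assert training_feedback(40, True)["points"] == +10
    assert training_feedback(30, True)["points"] == -10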

Data analysis: Experiment 2

For Experiment 2 we proceeded with the analysis in the same way as for Experiment 1.

Results: Experiment 2

Reaction times

The LMM yielded significant main effects for GROUP (F(1; 4,715) = 168.908; p < 0.001), QUANTIFIER (F(1; 4,715) = 49.832; p < 0.001), BLOCK (F(1; 4,715) = 39.391; p < 0.001), and PROPORTION (F(1; 4,715) = 23.415; p < 0.001). Out of the interaction terms, the following effects also reached significance: GROUP × BLOCK (F(1; 4,715) = 79.607; p < 0.001), GROUP × PROPORTION (F(1; 4,715) = 5.356; p < 0.001), BLOCK × PROPORTION (F(1; 4,715) = 2.935; p = 0.007), QUANTIFIER × PROPORTION (F(1; 4,715) = 7.588; p < 0.001), GROUP × BLOCK × PROPORTION (F(1; 4,715) = 2.945; p = 0.007), and BLOCK × QUANTIFIER × PROPORTION (F(1; 4,715) = 2.374; p = 0.027). All other effects were not significant. Table 4 reports the parameter estimates of the LMM. Figure 5 shows the RTs as a function of BLOCK, QUANTIFIER, and PROPORTION. With respect to the Polarity Effect, PWFA had consistently higher RTs for “few” (Block 1: 1,858 ms; Block 3: 1,928 ms) than for “many” (Block 1: 1,665 ms; Block 3: 1,716 ms). For PWNFA, the same pattern was somewhat less pronounced (“few”: Block 1: 2,321 ms; Block 3: 1,977 ms; “many”: Block 1: 2,200 ms; Block 3: 1,849 ms).


Table 4. Parameter estimates of the LMM for RTs in Experiment 2.


Figure 5. Average reaction times of all participants in Experiment 2 (feedback) for both quantifiers (“many”, “few”) related to each proportion of circles in the target color (in %), divided into groups: all participants, PWFA, and PWNFA. Reaction times are presented for blocks 1 and 3, i.e., before and after feedback, to visualize changes over the course of the experiment.

Acceptability ratings

The Linear Mixed Model for the acceptability ratings at proportion 40% yielded significant effects for GROUP (F(1; 665) = 6.013; p = 0.014) and QUANTIFIER (F(1; 665) = 76.403; p < 0.001) and a significant interaction of GROUP × QUANTIFIER (F(1; 665) = 21.947; p < 0.001). The main effect for, and interactions with, BLOCK failed to reach significance (BLOCK: F(1; 665) = 2.993; p = 0.084; BLOCK × QUANTIFIER: F(1; 665) = 3.104; p = 0.079; BLOCK × GROUP: F(1; 665) = 0.261; p = 0.610; BLOCK × QUANTIFIER × GROUP: F(1; 665) = 3.624; p = 0.057). The parameter estimates are reported in Table 5, the data per quantifier and block in Figure 6. Across the entire group of participants, there were 4.7% (362) non-responses.


Table 5. Parameter estimates for the LMM analysis for the acceptability ratings at proportion “40%” for Experiment 2.


Figure 6. Average acceptability of quantifiers in Experiment 2 (feedback), divided into groups: all participants, PWFA, and PWNFA. (A) Illustration of acceptability of quantifiers (“many” = black lines, “few” = gray lines) at each proportion of circles in the target color (in %) in block 1 (dashed lines) and block 3 (solid lines). (B) Average acceptability judgments for the critical proportion of circles in the target color (40%), sorted by quantifier (“many” = black bars, “few” = gray bars) and block (block 1 = dashed bars, block 3 = solid bars).

Regarding the planned linear contrasts for each group and quantifier at proportion “40%,” (one-tailed) significance was observed for PWFA (“many”: p < 0.001; “few”: p = 0.039; these uncorrected p-values survive the Bonferroni–Holm correction within the group of PWFA) but not for PWNFA (“many”: p = 0.216; “few”: p = 0.206).

Discussion: Experiment 2

The findings from Experiment 2 complement and extend the considerations from Experiment 1. Analyses of acceptability ratings and reaction times corroborated a significant difference between the evaluation of the two quantifiers and replicated the presence of a polarity effect. Again, this effect seems more pronounced in PWNFA than in PWFA. Moreover, in contrast to the adaptation procedure of Experiment 1, feedback in Experiment 2 successfully elicited a semantic shift in a subgroup of the participants (cf. Figure 6): PWFA succeeded in adapting their internal criteria for the trained quantifier through feedback and additionally transferred this shift to the untrained quantifier, even though this effect was weaker and only present when not correcting the p-value for the number of planned linear contrasts (i.e., 2). PWNFA, meanwhile, showed no shift or transfer, as in Experiment 1. In terms of semantic shift, Experiment 2 thus provided different results than Experiment 1, thereby answering the previously posed question regarding the presence of semantic flexibility in PWA. While the entire group on average and the PWNFA failed again to shift their criteria for quantifier evaluation, PWFA succeeded this time, in contrast to Experiment 1. Additionally, the shift not only affected the trained quantifier “many” but was even transferred to the untrained quantifier “few,” which had not been presented during the training block. This means that, at least in PWFA, explicit reinforcement as opposed to adaptation can induce a shift, just as it does in neurotypical individuals (Heim et al., 2015, 2016). In contrast to the latter, however, this shift does not work by adaptation in patients with aphasia (Experiment 1).

As pointed out in a previous study (Heim et al., 2015), generalization implies a deeper level of learning which allows one to abstract and transfer the semantic change of one quantifier to its polar opposite. Heim et al. (2015) argued that when acceptance of “many” at 40% is increased, it logically follows that the acceptance of “few” cannot remain the same but must decrease. Otherwise, two (quasi-polar) opposite quantifiers that semantically (at least partly, if processed as majority quantifiers) exclude each other would describe the same quantity, i.e., carry the same semantic information (Oaksford et al., 2002; Heim et al., 2015; Pezzelle et al., 2018). The change of meaning is particularly noteworthy because the exact same group of patients managed a shift when explicit reinforcement was used but not through adaptation processes alone. It follows that PWFA must in principle be semantically flexible, because otherwise no shift could have occurred in Experiment 2. PWNFA, meanwhile, seem to lack this attribute, as they did not show a shift in either experiment. The comparable learning success of PWFA and neurotypical individuals is also remarkable because it underlines the large differences in the performance spectrum of PWA regarding quantifier processing. Some PWA did not even qualify to participate, while some performed nearly as well as neurotypical participants.

Further analysis: investigation of influencing factors regarding non-participants

Before proceeding with the general discussion of the two experiments, the cross-experiment tests and their results are reported. Not all PWA qualified in the two runs of practice trials to participate in the computer experiments. Five out of 21 PWA rated fewer than half of the practice items correctly, demonstrating insufficient ability to perform the task. Since this represents a distinctive difference to studies with neurotypical participants, we wanted to investigate the reasons in more detail. Therefore, we compared the non-participant group with the participants. A Fisher’s exact test for an association between qualification and fluency yielded a non-significant result (one-tailed, p = 0.080), i.e., no statistical association. However, independent-samples t-tests comparing the performance of participants and non-participants in the AAT showed significant differences in the Token Test [t(19) = 4.200; p < 0.001] and the test of Language Comprehension [t(18) = 4.614; p < 0.001]. Participants performed noticeably better than non-participants (cf. Figure 7).
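
As a rough illustration of these cross-group tests, the corresponding computations could be sketched in Python as follows (file and column names are assumptions; the actual analyses were run in SPSS):

    import pandas as pd
    from scipy import stats

    # One row per patient; assumed columns: fluent (bool), qualified (bool),
    # token_test and comprehension (AAT T-scores).
    patients = pd.read_csv("patients.csv")  # hypothetical file name

    # One-tailed Fisher's exact test: is qualification associated with fluency?
    table = pd.crosstab(patients["fluent"], patients["qualified"])
    _, p_fisher = stats.fisher_exact(table.values, alternative="greater")
    print("Fisher's exact test, one-tailed p =", round(p_fisher, 3))

    # Independent-samples t-tests: participants vs. non-participants on the
    # AAT Token Test and Language Comprehension scores.
    qualified = patients[patients["qualified"]]
    excluded = patients[~patients["qualified"]]
    for score in ["token_test", "comprehension"]:
        t, p = stats.ttest_ind(qualified[score].dropna(), excluded[score].dropna())
        print(score, "t =", round(t, 3), "p =", round(p, 4))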


Figure 7. Performance of participants and non-participants in the token test and language comprehension test of the AAT in comparison. (A) Demonstration of the T-value achieved by each patient in both tests, plotted separately for non-participants (dashed columns) and participants (solid columns). (B) Average T-values achieved by non-participants and participants in both tests. With regard to the language comprehension test, in one case no test value was available, which is why only 20 instead of 21 test subjects are listed on the x-axis. The data are reported in ascending order of values. The participant numbers in panels (A,B) only indicate order of values and do not identify individual participants.

General discussion

This study investigated quantifier processing and semantic flexibility in PWA. The results of both experiments showed varying degrees of limitation in quantifier processing in PWA. While 16 PWA were sufficiently capable of processing quantifiers to participate, five were too severely impaired. Among the participants, performance differed significantly. PWNFA showed no semantic flexibility, whereas PWFA demonstrated a semantic shift when feedback was used. The adaptation paradigm failed to evoke a shift in the total sample and in either sub-group. However, in accordance with findings from neurotypical participants in the literature, a Polarity effect was observed, indicating higher processing costs for negative quantifiers. We will now discuss the implications of these findings for quantifier processing in PWA.

Semantic flexibility in quantifier processing in PWA

The central question of the present study was: Can the internal criterion for “many” be changed in PWA as in neurotypical individuals, i.e., do PWA show semantic flexibility? And if so, to what degree – does this semantic flexibility also extend, and thus generalize, to the polar opposite (here: “few”)? The statistical analysis showed that no shift occurred in Experiment 1, regardless of group (all, PWFA, PWNFA). The curves illustrate this (cf. Figure 4): the acceptability of neither “many” nor “few” at 40% shifted markedly in any direction. A similar result was found in Experiment 2: no semantic shift occurred in the group of PWA on average, nor in the subgroup of PWNFA.

Importantly, however, PWFA did successfully change their criterion for “many.” They even generalized, i.e., transferred the change of the internal criterion from the trained quantifier “many” to the untrained “few.” It follows that PWFA not only show semantic flexibility but are also able to transfer the learning success. As in neurotypical people, when the semantics of one quantifier is changed, the entire quantifier scope is affected as well, thus changing the criterion for the untrained quantifier (Heim et al., 2015). In contrast, PWNFA lack these abilities. The question thus emerges of what role the fluency of aphasia plays for the presence of semantic flexibility. And could it be that fluency of aphasia is related not only to semantic flexibility, but also to quantifier processing in general?

The relationship of fluency of aphasia and semantic flexibility

Fluency in aphasia is of particular importance in this study as it distinguishes two groups with very different results. The literature suggests that this could be related to a partial overlap between the neural bases of speech fluency and of the semantic evaluation of quantifiers. The left IFG seems to be an integral area for both functions, fluency of aphasia (for a review see Mirman and Thye, 2018) as well as the semantic evaluation (Heim et al., 2012; McMillan et al., 2013; Wei et al., 2014) and re-evaluation of quantifiers (Heim et al., 2016, 2020a). Moreover, Broca’s region in the left IFG has been implicated in various types of semantic processing in the clinical and neuroimaging literature for 30 years. Such studies investigated, among others, access to categorial semantics (e.g., Martin et al., 1996; Vandenberghe et al., 1996), degrees of semantic control (e.g., Thompson-Schill et al., 1997), lexical access and retrieval (Damasio et al., 1996; Thompson-Schill et al., 1999; Amunts et al., 2004; Heim et al., 2009a,b; Hartwigsen, 2016), or the semantic evaluation of sentences (Hagoort and van Berkum, 2007; for reviews cf. Binder et al., 2009; Ralph et al., 2017). While these studies seem to make a distinction between semantic representation (left temporal) and controlled access to, retrieval, or evaluation of these representations (left inferior frontal), the underlying concepts of “semantics” and of “representation” or “processing” (etc.) vary substantially. What one can draw from the wealth of studies is the consistency of left inferior frontal involvement across a multitude of paradigms, including those that explicitly involve verbal fluency, and the observation that damage to the left IFG may impair word retrieval (e.g., Thompson-Schill et al., 1998).

In the context of the findings of the present study, one might thus suspect that if a patient suffers from an impairment of speech fluency, it could possibly be due to a lesion in this frontal region. It would therefore be conceivable that quantifier processing is impaired as well because it relies on the same neural area. Conversely, one would suspect that if speech is fluent, there is more likely no frontal lesion (Benson, 1967; Pendleton et al., 1982; Cipolotti et al., 2021), and thus quantifier processing might also be less impaired. However, disorders of speech fluency can also be caused by lesions in locations other than the left IFG, e.g., the insula, precentral areas (Wilson et al., 2010; Fridriksson et al., 2013), the anterior temporal lobe (Schwartz et al., 2009; Walker et al., 2011; Chen et al., 2019; Mirman et al., 2019), and inferior parietal regions (Rogalsky et al., 2015; Mirman et al., 2019), as well as white matter structures like the uncinate fasciculus, the anterior segment of the left arcuate fasciculus, and the aslant tract (Catani et al., 2013; Fridriksson et al., 2013; Basilakos et al., 2014; Yourganov et al., 2015). Therefore, frontal areas relevant for quantifiers may well be intact even if speech fluency is impaired.

The present study adds to the literature the finding that fluency in aphasia seems to be a relevant factor. Since this is not a neuroimaging study, and exact information about lesion location and size was not available from the clinical records, a logical next step would be to transfer the feedback paradigm used in Experiment 2 into the scanner in order to gain further insights into the alterations of the functional neuroanatomy of quantifier processing in aphasia.

Such a functional neuroimaging study would also provide the chance to investigate the nature of the difficulty of the PWNFA to process negative quantifiers: Is this an issue of the semantic representation of quantifiers per se, or rather, of their semantic evaluation? As pointed out above, in the domain of semantic processing, many different aspects have been attributed to different parts of Broca’s region in the IFG. Most of these aspects are related to “processing,” i.e., controlled, deliberate or increasingly difficult retrieval of, or access to, semantic information which seems rather “stored,” or represented, in (among others) the left temporal lobe, predominantly its inferior part, with the temporal pole as a potential hub (e.g., Ralph et al., 2017). If one supposes that a non-fluent variant of aphasia is likely caused by a lesion to Broca’s region and its surroundings (but see the difficulties of such reasoning elaborated above), this would imply difficulties in quantifier “processing,” presumably their semantic evaluation, rather than their representation. Several studies in the literature support this view. (1) In their neuroimaging study, Heim et al. (2012) adapted the triple code model by Dehaene et al. (2005) to quantifier processing in a truth value judgment task like the one used in the present study. The first two stages, estimation (of the size/magnitude of the set of circles in the given color) and comparison (of that set to the complement set), were supported by a large, mostly fronto-parietal, network in the left and also in the right hemisphere. In contrast, the Polarity effect, taken as a proxy for the semantic evaluation (i.e., the third stage), was very focal in area 45 of Broca’s region. This location of the Polarity effect was later replicated by Agmon et al. (2021). The findings are in line with the earlier observation by McMillan et al. (2005) that higher-order quantifiers that require an additional processing step induce higher activation in Broca’s region than first-order quantifiers. In other words, increasing processing demands, be they the resolution of an implicit negation in a negative quantifier or the additional semantic computation in a higher-order quantifier, are associated with stronger recruitment of Broca’s region. Given this pattern, one might be tempted to speculate that it is the semantic evaluation rather than the semantic representation of quantifiers that is impaired in the PWNFA. This hypothesis would be commensurate with the observation that the PWNFA in the present study seemed to have selective impairments in processing the negative quantifier “few,” for which the curves of the acceptability ratings should have been some kind of mirror image of those for “many” – rather than running roughly in parallel.

One further linguistic aspect of semantic processing needs consideration. As outlined in the introduction, the "weak" quantifiers "many" and "few" not only allow testing semantic flexibility; they can also be processed either as proportional or as majority quantifiers. In the studies by Oaksford et al. (2002), Heim et al. (2012), or Pezzelle et al. (2018), the judgment of "few" and "many" crucially depended on the reference quantity, in this case the quantity of dots of the complement set whose color had not been mentioned in the stimulus sentence. On the other hand, in the studies by Heim et al. (2015, 2020a,b) and Shikhare et al. (2015), "many" and "few" could alternatively be used as quasi-polar opposites, and thus as majority quantifiers. However, it should be noted that the participants never had to decide directly between the two quantifiers, e.g., in a multiple-choice setting. Instead, as in the studies by, e.g., Oaksford et al. (2002) and Pezzelle et al. (2018), each individual trial required an evaluation of whether the mentioned quantifier (out of two, or, in the 2012 study, out of six) appropriately described one particular scene of blue and yellow dots. This makes it more likely that "many" and "few" were processed as proportional rather than majority quantifiers in these experiments, and therefore also in the present one.

Furthermore, one might wonder whether the reason for the observed difficulties with the negative quantifier "few" in PWNFA was not a problem in the Semantic Evaluation but in one of the preceding processes (Estimation and Comparison, which relate to the visual approximation of the magnitudes of the target and complement sets). Since both processes rely on posterior regions, in particular the IPS and IPL, which are likely damaged in PWNFA, such impairments would indirectly also result in impediments to the subsequent semantic evaluation and truth value judgment. We cannot fully rule out this explanation. It is, however, not very likely, for two reasons. First, as reported in the Introduction, Estimation and Comparison are supported by bilateral regions in the IPL and IPS, i.e., also in the unimpaired hemisphere, whereas semantic evaluation relies exclusively on frontal areas in the left hemisphere, where damage thus has much graver consequences. Second, the processes of Estimation and Comparison are identical for both quantifiers and groups, since the same picture set with the same proportions of blue and yellow circles was presented. If a visual/perceptual or numerosity-related process were the reason for the worse performance in PWNFA, it should affect both quantifiers, not just "few," as was the case in our results. Thus, in our view, the explanation of a semantic deficit is more plausible.

Finally, there is the possibility that the patients have a general cognitive or language deficit. With the inclusion criteria and the initial trial run before the actual experiments, we tried to exclude this option from the start. The data obtained from the participants further speak against it. Since the non-fluent patients were not below PR = 16 in the Corsi block-tapping test and the NVLT (see Table 1), a general cognitive deficit is unlikely. A general language deficit is also rather unlikely, as the accuracy in Block 1 for "many" at extreme proportions was very high, which would not be possible if a general language deficit undermined all evaluation processes.

Semantic flexibility in quantifier processing in aphasia: feedback vs. adaptation

The next question is why feedback, but not adaptation, succeeded in causing a semantic shift in PWFA. The combined analysis shows that the experiment, i.e., the choice of learning method, is indeed relevant for the occurrence of a shift in PWFA. However, this does not yet clarify why feedback is effective but adaptation is not. In neurotypical participants, both methods successfully shifted the meaning of both quantifiers (Heim et al., 2015, 2016, 2020b). This indicates that the absence of a shift under adaptation is connected to the aphasia, or rather to the lesion causing it.

One could suspect that explicit reinforcement techniques involving external feedback provide a stronger learning impulse than a subtle change of the stimulus range that must be noticed without awareness. Adaptation in this experimental paradigm requires implicit learning, i.e., "learning without awareness" or learning "without a conscious intention" (Seger, 1994; Schuchard and Thompson, 2014), as it may happen in natural language use in dialogs (Pezzelle and Fernández, 2023); in this case, it meant learning to call a proportion of 40% of colored circles "many." The participants were not informed in advance that the stimulus selection would be restricted in the second block or that the influence of this restriction on their evaluation behavior would be tested. Since each trial lasted only about 6 s, there was little to no time to consciously reflect on an overarching pattern, leaving only room for subconscious learning. Adaptation, as a more subtle form of learning, might be more likely to be impaired in the case of a cognitive processing disorder, e.g., due to a lesion after stroke. Feedback, in contrast, offers the possibility to react to it consciously and to actively change one's evaluation behavior. Previous studies on implicit learning in PWA have yielded inconsistent results. Vadinova et al. (2020) found that PWA with frontal and posterior lesions showed impairments in "implicit statistical learning (ISL)." This matches our results in that neither PWFA nor PWNFA, i.e., patients with probably differently localized lesions (Benson, 1967; Cipolotti et al., 2021), achieved a semantic shift by adaptation. However, Vadinova et al. (2020) also noted that ISL is "not completely absent" but still possible to a limited extent. Schuchard and Thompson (2014) even demonstrated that implicit learning in PWA is less impaired than explicit learning, which they suggest places an extra load on working memory because it requires the additional processing of feedback. Schuchard et al. (2017) partly supported these findings, which appear to contradict our results. Nevertheless, it is possible that the participants of our study also possessed limited implicit learning abilities, but that these were insufficient to achieve adaptation in this setting. In line with this thought, Schuchard et al. (2017) also stated that implicit learning is possible but not always successful. Overall, the evidence remains inconclusive and subject to debate. Our findings may inspire further studies examining the learning capacities of PWFA and PWNFA in more detail. These may also include different task settings, as Bremnes et al. (2022) demonstrated differential brain responses for verification vs. comprehension tasks.
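
The contrast between the two learning regimes can be summarized in a short sketch. The code below is our hedged illustration, not the authors' experimental script; the sentence text, the restricted proportion range, and the 40% criterion are simplified stand-ins for the materials described in the Methods.

```python
# Hedged sketch contrasting implicit adaptation (restricted stimulus range,
# no feedback) with explicit trial-by-trial feedback. All values hypothetical.
import random

def make_adaptation_trial():
    """Adaptation block: only low proportions of the named colour appear,
    so the criterion for 'many' can drift without any overt instruction."""
    proportion = random.choice([0.20, 0.30, 0.40, 0.50])   # restricted range
    return {"sentence": "Many of the circles are yellow.",
            "proportion_named_colour": proportion,
            "feedback": None}                               # purely implicit

def give_feedback(response_is_yes, proportion, trained_criterion=0.40):
    """Feedback block: after each response the participant is told whether
    calling this proportion 'many' counts as correct under the new criterion."""
    correct = response_is_yes == (proportion >= trained_criterion)
    return "correct" if correct else "wrong"                # explicit signal

print(make_adaptation_trial())
print(give_feedback(True, 0.40))   # 'correct' under the shifted criterion
```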

Task difficulty and participants vs. non-participants

Some PWA passed the initial screening for participation, while others failed to show sufficiently good performance. Thus, there seems to be substantial (and perhaps systematic) variability in PWAs' abilities for quantifier processing. Interestingly, the polarity effect (i.e., higher processing demands for negative quantifiers, reflected in longer RTs), which is a robust finding in neurotypical persons (Heim et al., 2012, 2020b; Agmon et al., 2019, 2021), was also found in both experiments of the present study (i.e., the main effect of QUANTIFIER). This indicates that those participants who passed the initial screening had no apparent deficits in judging quantified statements in general, even in the case of the increased processing cost associated with the implicit negation in "few" (Just and Carpenter, 1971; Deschamps et al., 2015; Shikhare et al., 2015; Heim et al., 2020b; Agmon et al., 2021; Grodzinsky et al., 2021). Note that the in-depth analysis of the polarity effect, i.e., the separate analysis for the two sub-groups, again demonstrated that the PWFA had a more consistent effect than the PWNFA. The GROUP × QUANTIFIER interaction results from the difference in acceptance at the critical proportion between the quantifiers. This difference in acceptability is more pronounced for PWFA, who demonstrate very different acceptance levels for "few" and "many" at 40%, and less pronounced for PWNFA, who accept "few" and "many" at 40% with almost similar frequency. As briefly described in the discussion of Experiment 1, we interpret this as an attenuated polarity effect in PWNFA, i.e., PWNFA cannot process the negative quantifier as well or differentiate it from the positive quantifier, resulting in similar acceptance levels at the same proportion (see Figure 4). Thus, the fluency of aphasia seems to play a relevant role, both for quantifier processing in general and for semantic flexibility.
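
For readers who wish to probe this kind of interaction in their own acceptance data, one simple approach is sketched below. This is a hypothetical illustration with made-up file and column names, not necessarily the analysis pipeline used in this study.

```python
# Hedged sketch: probing a GROUP x QUANTIFIER interaction on acceptance at the
# critical proportion with a logistic regression. File/column names invented.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("trial_data.csv")      # columns: accept (0/1), group, quantifier, proportion
crit = df[df["proportion"] == 0.40]     # trials at the critical 40% proportion

# Logistic regression with a group-by-quantifier interaction on acceptance.
model = smf.logit("accept ~ C(group) * C(quantifier)", data=crit).fit()
print(model.summary())

# A reliable interaction term would indicate that the acceptance gap between
# "many" and "few" at 40% differs between PWFA and PWNFA, i.e., an attenuated
# polarity effect in the non-fluent group.
```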

To identify potential causes of non-participation, we compared some characteristics (fluency and performance in clinical language tests) between the two groups (participants and non-participants). Non-participants performed significantly worse in the Token Test and the language comprehension test of the AAT (cf. Figure 7), indicating a generally higher severity of aphasia and worse semantic processing (Willmes et al., 1983). Thus, at least in some respects, the linguistic abilities of non-participants appear to be significantly worse than those of the participants. As indicated in the introduction, working memory could be an important limiting factor. Since majority quantifiers place disproportionately higher demands on working memory, and previous studies have repeatedly found working memory impairments in patients with aphasia (Caspari et al., 1998; Friedmann and Gvion, 2003; Wright and Shisler, 2005; Sung et al., 2009; Christensen and Wright, 2010; Potagas et al., 2011; Mayer and Murray, 2012; Wright and Fergadiotis, 2012), difficulties in processing these quantifiers are to be expected, especially for equivocal sentence-picture pairs, which were associated with higher processing costs in previous studies (McMillan et al., 2013).

As described above (Table 1), the non-verbal memory performance in the Corsi block-tapping test showed that two PWA had a critical percentile rank (PR), while their PR in the NVLT was within the normal range. Conversely, four PWA had an abnormal PR in the NVLT but a normal Corsi score. In the TAP (Go-NoGo), all percentile ranks were in the average range or above. We therefore argue that the brain damage of the PWA in this study did not cause global cognitive or non-verbal working memory problems, but only very isolated abnormalities. This is consistent with the fact that the positive quantifier "many" could be processed almost as well by PWA as by healthy people, even though it also classifies as a majority quantifier.

Limitations

Our conclusions are limited by the absence of imaging data, i.e., information on lesion location and size. Functional imaging studies are strongly recommended to investigate and validate the relevant anatomical areas in more detail and to relate lesion location to regions for speech fluency and quantifier processing. In addition, it would be advisable to collect more data on PWNFA. Because fewer patients who qualified for participation in this study were diagnosed with non-fluent aphasia, the results for this group are less robust and would benefit from a larger sample size. Since we only used the two quantifiers "many" and "few," it also remains unclear whether and how other quantifiers would be processed by PWA. Investigating the processing of other quantifier types in PWA could provide interesting additional insights. Perhaps quantifiers containing numbers, which provide a specific external reference, could be processed more easily than "fuzzier" quantifiers with a primarily internal reference. Moreover, it may be useful to investigate further whether or to what extent implicit learning is limited compared to explicit learning in PWA, more specifically by comparing learning success between PWFA and PWNFA.

We tested here the outcome of a sequence of more than three processing steps (Heim et al., 2012) and observed differing results. With our design, the individual processing steps cannot be traced. Consequently, it is also not possible to determine which processing step fails for whom. For this, further studies are needed that are designed to capture the individual steps of processing.

Finally, no control group was included in this study, so no direct comparison of the size of the semantic flexibility effect between PWFA and neurotypical people can be made. However, the primary goal of this first study of the effect in PWA was to determine whether, and if so in which paradigm, there would be a statistically significant adaptation of the internal criterion at all. Subsequent studies can now focus on the feedback paradigm from Experiment 2, extend the setting with a neurotypical matched control group, and also acquire neurophysiological or hemodynamic data from both groups.

Conclusion

This study examined quantifier processing in patients with aphasia. The results demonstrate varying degrees of impairment. While some patients were unable to participate, others performed well, with response patterns similar to those of neurotypical individuals. The polarity effect was evident in the RT data and in the acceptability judgments. PWNFA in particular showed clear difficulties in evaluating the negative quantifier and failed to achieve a semantic shift in either experiment. In contrast, a shift could be induced in PWFA, but only by feedback, not by adaptation. It thus appears that implicit learning may be impaired in PWA. Moreover, since PWFA and PWNFA show strikingly different performance levels, there seems to be a link between fluency of speech and quantifier processing. Further studies are recommended to explore these connections.

Data availability statement

The datasets generated for this study are available on request to the corresponding author.

Ethics statement

The experiments were approved by the ethics committee of RWTH Aachen University (EK 391/21). Informed written consent was obtained from all participants. Capability to consent was verified by the supervising senior physician.

Author contributions

BR: Data curation, Formal analysis, Investigation, Methodology, Visualization, Writing – original draft, Writing – review & editing. WG: Investigation, Methodology, Writing – review & editing. NP: Methodology, Software, Writing – review & editing. JP: Methodology, Resources, Writing – review & editing. KH: Methodology, Resources, Writing – review & editing. CW: Conceptualization, Project administration, Writing – review & editing. SH: Conceptualization, Data curation, Methodology, Project administration, Resources, Supervision, Writing – review & editing.

Funding

The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.

Acknowledgments

We would like to thank Lea Plum for her collaboration in data collection on the aphasia ward of the University Hospital Aachen.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyg.2024.1328853/full#supplementary-material

References

Abbondanza, M., Rinaldi, L., Foppolo, F., and Marelli, M. (2021). The mental representation of non-numerical quantifiers: the spatial-linguistic Association of Response Codes (SLARC) effect. Washington, US: Center for Open Science.

Agmon, G., Bain, J. S., and Deschamps, I. (2021). Negative polarity in quantifiers evokes greater activation in language-related regions compared to negative polarity in adjectives. Exp. Brain Res. 239, 1427–1438. doi: 10.1007/s00221-021-06067-y

Agmon, G., Loewenstein, Y., and Grodzinsky, Y. (2019). Measuring the cognitive cost of downward monotonicity by controlling for negative polarity. Glossa J. General Linguist. 4:1–18. doi: 10.5334/gjgl.770

Akkad, H., Hope, T. M. H., Howland, C., Ondobaka, S., Pappa, K., Nardo, D., et al. (2023). Mapping spoken language and cognitive deficits in post-stroke aphasia. NeuroImage. Clinical 39:103452. doi: 10.1016/j.nicl.2023.103452

Amunts, K., Weiss, P. H., Mohlberg, H., Pieperhoff, P., Eickhoff, S., Gurd, J. M., et al. (2004). Analysis of neural mechanisms underlying verbal fluency in cytoarchitectonically defined stereotaxic space--the roles of Brodmann areas 44 and 45. NeuroImage 22, 42–56. doi: 10.1016/j.neuroimage.2003.12.031

Ardila, A., Bernal, B., and Rosselli, M. (2016). How localized are language brain areas? A review of Brodmann areas involvement in Oral language. Arch. Clin. Neuropsychol. 31, 112–122. doi: 10.1093/arclin/acv081

Ash, S., Ternes, K., Bisbing, T., Min, N. E., Moran, E., York, C., et al. (2016). Dissociation of quantifiers and object nouns in speech in focal neurodegenerative disease. Neuropsychologia 89, 141–152. doi: 10.1016/j.neuropsychologia.2016.06.013

Baddeley, A. (2000). The episodic buffer: a new component of working memory? Trends Cogn. Sci. 4, 417–423. doi: 10.1016/S1364-6613(00)01538-2

Baddeley, A. (2003). Working memory and language: an overview. J. Commun. Disord. 36, 189–208.

Barwise, J., and Cooper, R. (1988). “Generalized quantifiers and natural language” in Philosophy, language, and artificial intelligence: resources for processing natural language (Dordrecht: Springer Netherlands), 241–301.

Basilakos, A., Fillmore, P. T., Rorden, C., Guo, D., Bonilha, L., and Fridriksson, J. (2014). Regional white matter damage predicts speech fluency in chronic post-stroke aphasia. Front. Hum. Neurosci. 8:845. doi: 10.3389/fnhum.2014.00845

Bayırlı, İ. K. (2022). Glossa: a journal of general linguistics, 7, London, UK: Open Library of Humanities.

Behrns, I., Ahlsen, E., and Wengelin, A. (2010). Aphasia and text writing. Int. J. Lang. Commun. Disord. 45, 230–243. doi: 10.3109/13682820902936425

Benson, D. F. (1967). Fluency in aphasia: correlation with radioactive scan localization. Cortex 3, 373–394. doi: 10.1016/S0010-9452(67)80025-X

Biesbroek, J. M., Lim, J.-S., Weaver, N. A., Arikan, G., Kang, Y., Kim, B. J., et al. (2021). Anatomy of phonemic and semantic fluency: a lesion and disconnectome study in 1231 stroke patients. Cortex 143, 148–163. doi: 10.1016/j.cortex.2021.06.019

Biesbroek, J. M., van Zandvoort, M. J. E., Kappelle, L. J., Velthuis, B. K., Biessels, G. J., and Postma, A. (2016). Shared and distinct anatomical correlates of semantic and phonemic fluency revealed by lesion-symptom mapping in patients with ischemic stroke. Brain Struct. Funct. 221, 2123–2134. doi: 10.1007/s00429-015-1033-8

Binder, J. R., Desai, R. H., Graves, W. W., and Conant, L. L. (2009). Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies. Cereb. Cortex 19, 2767–2796. doi: 10.1093/cercor/bhp055

Bonilha, L., and Fridriksson, J. (2009). Subcortical damage and white matter disconnection associated with non-fluent speech. Brain 132:e108. doi: 10.1093/brain/awn200

Brasoveanu, A., and Dotlacil, J. (2019). “Processing quantification” in The Oxford handbook of experimental semantics and pragmatics. eds. C. Cummins and N. Katsos (Oxford, UK: Oxford University Press).

Bremnes, H. S., Szymanik, J., and Baggio, G. (2022). Computational complexity explains neural differences in quantifier verification. Cognition 223:105013. doi: 10.1016/j.cognition.2022.105013

Caplan, D., and Waters, G. S. (1999). Verbal working memory and sentence comprehension. Behav. Brain Sci. 22, 77–94. doi: 10.1017/s0140525x99001788

Caplan, D., and Waters, G. (2001). Working memory and syntactic processing in sentence comprehension. Cogn. Stud. 8, 10–24. doi: 10.11225/jcss.8.10

Carcassi, F., Steinert-Threlkeld, S., and Szymanik, J. (2021). Monotone quantifiers emerge via iterated learning. Cogn. Sci. 45:e13027. doi: 10.1111/cogs.13027

Caspari, I., Parkinson, S. R., LaPointe, L. L., and Katz, R. C. (1998). Working memory and aphasia. Brain Cogn. 37, 205–223. doi: 10.1006/brcg.1997.0970

Catani, M., Mesulam, M. M., Jakobsen, E., Malik, F., Martersteck, A., Wieneke, C., et al. (2013). A novel frontal pathway underlies verbal fluency in primary progressive aphasia. Brain 136, 2619–2628. doi: 10.1093/brain/awt163

Chen, Q., Middleton, E., and Mirman, D. (2019). Words fail: lesion-symptom mapping of errors of omission in post-stroke aphasia. J. Neuropsychol. 13, 183–197. doi: 10.1111/jnp.12148

Christensen, S. C., and Wright, H. H. (2010). Verbal and non-verbal working memory in aphasia: what three n-back tasks reveal. Aphasiology 24, 752–762. doi: 10.1080/02687030903437690

Cipolotti, L., Xu, T., Harry, B., Mole, J., Lakey, G., Shallice, T., et al. (2021). Multi-model mapping of phonemic fluency. Brain Commun 3:fcab232. doi: 10.1093/braincomms/fcab232

Clark, R., and Grossman, M. (2007). Number sense and quantifier interpretation. Topoi 26, 51–62. doi: 10.1007/s11245-006-9008-2

Damasio, H., Grabowski, T. J., Tranel, D., Hichwa, R. D., and Damasio, A. R. (1996). A neural basis for lexical retrieval. Nature 380, 499–505. doi: 10.1038/380499a0

Dehaene, S., Piazza, M., Pinel, P., and Cohen, L. (2005). "Three parietal circuits for number processing" in The handbook of mathematical cognition. ed. J. I. D. Campbell (London, UK: Psychology Press), 433–453.

Deschamps, I., Agmon, G., Loewenstein, Y., and Grodzinsky, Y. (2015). The processing of polar quantifiers, and numerosity perception. Cognition 143, 115–128. doi: 10.1016/j.cognition.2015.06.006

Døli, H., Andersen Helland, W., Helland, T., and Specht, K. (2021). Associations between lesion size, lesion location and aphasia in acute stroke. Aphasiology 35, 745–763. doi: 10.1080/02687038.2020.1727838

Feiman, R., and Snedeker, J. (2016). The logic in language: how all quantifiers are alike, but each quantifier is different. Cogn. Psychol. 87, 29–52. doi: 10.1016/j.cogpsych.2016.04.002

Fridriksson, J., den Ouden, D.-B., Hillis, A. E., Hickok, G., Rorden, C., Basilakos, A., et al. (2018). Anatomy of aphasia revisited. Brain 141, 848–862. doi: 10.1093/brain/awx363

Fridriksson, J., Guo, D., Fillmore, P., Holland, A., and Rorden, C. (2013). Damage to the anterior arcuate fasciculus predicts non-fluent speech production in aphasia. Brain 136, 3451–3460. doi: 10.1093/brain/awt267

Friedmann, N., and Gvion, A. (2003). Sentence comprehension and working memory limitation in aphasia: a dissociation between semantic-syntactic and phonological reactivation. Brain Lang. 86, 23–39. doi: 10.1016/S0093-934X(02)00530-8

Friederici, A. D. (2002). Towards a neural basis of auditory sentence processing. Trends Cogn. Sci. 6, 78–84. doi: 10.1016/S1364-6613(00)01839-8

Friederici, A. D. (2017). Language in our brain: The origins of a uniquely human capacity. Cambridge, MA: MIT Press.

Garraffa, M., and Fyndanis, V. (2020). Linguistic theory and aphasia: an overview. Aphasiology 34, 905–926. doi: 10.1080/02687038.2020.1770196

Grodzinsky, Y., Behrent, K., Agmon, G., Bittner, N., Jockwitz, C., Caspers, S., et al. (2021). A linguistic complexity pattern that defies aging: the processing of multiple negations. J. Neurolinguistics 58:100982. doi: 10.1016/j.jneuroling.2020.100982

Grodzinsky, Y., Deschamps, I., Pieperhoff, P., Iannilli, F., Agmon, G., Loewenstein, Y., et al. (2020). Logical negation mapped onto the brain. Brain Struct. Funct. 225, 19–31. doi: 10.1007/s00429-019-01975-w

Hagoort, P. (2005). On Broca, brain, and binding: a new framework. Trends Cogn. Sci. 9, 416–423. doi: 10.1016/j.tics.2005.07.004

Hagoort, P., and Indefrey, P. (2014). The neurobiology of language beyond single words. Annu. Rev. Neurosci. 37, 347–362. doi: 10.1146/annurev-neuro-071013-013847

Hagoort, P., and van Berkum, J. (2007). Beyond the sentence given. Philos. Trans. R. Soc. B 362, 801–811. doi: 10.1098/rstb.2007.2089

Hartwigsen, G. (2016). Adaptive plasticity in the healthy language network: implications for language recovery after stroke. Neural Plast. 2016, 9674790–9674718. doi: 10.1155/2016/9674790

Heim, S., Amunts, K., Drai, D., Eickhoff, S. B., Hautvast, S., and Grodzinsky, Y. (2012). The language-number interface in the brain: a complex parametric study of quantifiers and quantities. Front. Evol. Neurosci. 4:4. doi: 10.3389/fnevo.2012.00004

Heim, S., Eickhoff, S. B., and Amunts, K. (2009a). Different roles of cytoarchitectonic BA 44 and BA 45 in phonological and semantic verbal fluency as revealed by dynamic causal modelling. NeuroImage 48, 616–624. doi: 10.1016/j.neuroimage.2009.06.044

Heim, S., Eickhoff, S. B., Friederici, A. D., and Amunts, K. (2009b). Left cytoarchitectonic area 44 supports selection in the mental lexicon during language production. Brain Struct. Funct. 213, 441–456. doi: 10.1007/s00429-009-0213-9

Heim, S., McMillan, C. T., Clark, R., Baehr, L., Ternes, K., Olm, C., et al. (2016). How the brain learns how few are "many": an fMRI study of the flexibility of quantifier semantics. NeuroImage 125, 45–52. doi: 10.1016/j.neuroimage.2015.10.035

Heim, S., McMillan, C. T., Clark, R., Golob, S., Min, N. E., Olm, C., et al. (2015). If so many are “few,” how few are “many”? Front. Psychol. 6:441. doi: 10.3389/fpsyg.2015.00441

Heim, S., McMillan, C. T., Olm, C., and Grossman, M. (2020a). So many are “few,” but so few are also “few” – reduced semantic flexibility in bvFTD patients. Front. Psychol. 11:582. doi: 10.3389/fpsyg.2020.00582

Heim, S., Peiseler, N., and Bekemeier, N. (2020b). “Few” or “many”? An adaptation level theory account for flexibility in quantifier processing. Front. Psychol. 11:382. doi: 10.3389/fpsyg.2020.00382

Helson, H. (1948). Adaptation-level as a basis for a quantitative theory of frames of reference. Psychol. Rev. 55, 297–313. doi: 10.1037/h0056721

Huber, W., Poeck, K., Weniger, D., and Willmes, K. (1983). Der Aachener Aphasie-Test (AAT). Göttingen, Germany: Hogrefe Verlag.

Just, M. A., and Carpenter, P. A. (1971). Comprehension of negation with quantification. J. Verbal Learn. Verbal Behav. 10, 244–253. doi: 10.1016/S0022-5371(71)80051-8

Kasselimis, D., Chatziantoniou, L., Peppas, C., Evdokimidis, I., and Potagas, C. (2015). The dichotomous view on IFG lesion and non-fluent aphasia. Neurol. Sci. 36, 1687–1690. doi: 10.1007/s10072-015-2258-2

Keenan, E. L. (2006). “Quantifiers: semantics” in Encyclopedia of Language and Linguistics. Ed. Keith Brown (Amsterdam, Netherlands: Elsevier), 302–308.

Keenan, E. L., and Stavi, J. (1986). A semantic characterization of natural language determiners. Linguist Philos 9, 253–326. doi: 10.1007/bf00630273

Lange, I., Grande, M., Willmes, K., Kastrau, F., Fimm, B., Heim, S., et al. (2012). Charakteristiken der flüssigen und der nicht-flüssigen primär progressiven Aphasie [Characteristics of fluent and non-fluent primary progressive aphasia]. Z. Neuropsychol. 23, 7–18. doi: 10.1024/1016-264X/a000057

Le, H., and Lui, M. Y. (2022). "Aphasia" in StatPearls (Treasure Island, FL: StatPearls Publishing).

Lwi, S. J., Herron, T. J., Curran, B. C., Ivanova, M. V., Schendel, K., Dronkers, N. F., et al. (2021). Auditory comprehension deficits in post-stroke aphasia: neurologic and demographic correlates of outcome and recovery. Front. Neurol. 12:680248. doi: 10.3389/fneur.2021.680248

Martin, A., Wiggs, C. L., Ungerleider, L. G., and Haxby, J. V. (1996). Neural correlates of category-specific knowledge. Nature 379, 649–652. doi: 10.1038/379649a0

Mayer, J. F., and Murray, L. L. (2012). Measuring working memory deficits in aphasia. J. Commun. Disord. 45, 325–339. doi: 10.1016/j.jcomdis.2012.06.002

McMillan, C. T., Clark, R., Moore, P., Devita, C., and Grossman, M. (2005). Neural basis for generalized quantifier comprehension. Neuropsychologia 43, 1729–1737. doi: 10.1016/j.neuropsychologia.2005.02.012

McMillan, C. T., Clark, R., Moore, P., and Grossman, M. (2006). Quantifier comprehension in corticobasal degeneration. Brain Cogn. 62, 250–260. doi: 10.1016/j.bandc.2006.06.005

McMillan, C. T., Coleman, D., Clark, R., Liang, T.-W., Gross, R. G., and Grossman, M. (2013). Converging evidence for the processing costs associated with ambiguous quantifier comprehension. Front. Psychol. 4:153. doi: 10.3389/fpsyg.2013.00153

Mesulam, M. M., Thompson, C. K., Weintraub, S., and Rogalski, E. J. (2015). The Wernicke conundrum and the anatomy of language comprehension in primary progressive aphasia. Brain J. Neurol. 138, 2423–2437. doi: 10.1093/brain/awv154

Milsark, G. (1977). Peculiarities of the existential construction in English. Linguist. Analysis 3, 1–29.

Mirman, D., Kraft, A. E., Harvey, D. Y., Brecher, A. R., and Schwartz, M. F. (2019). Mapping articulatory and grammatical subcomponents of fluency deficits in post-stroke aphasia. Cogn. Affect. Behav. Neurosci. 19, 1286–1298. doi: 10.3758/s13415-019-00729-9

Mirman, D., and Thye, M. (2018). Uncovering the neuroanatomy of Core language systems using lesion-symptom mapping. Curr. Dir. Psychol. Sci. 27, 455–461. doi: 10.1177/0963721418787486

Morgan, B., Gross, R. G., Clark, R., Dreyfuss, M., Boller, A., Camp, E., et al. (2011). Some is not enough: quantifier comprehension in corticobasal syndrome and behavioral variant frontotemporal dementia. Neuropsychologia 49, 3532–3541. doi: 10.1016/j.neuropsychologia.2011.09.005

Oaksford, M., Roberts, L., and Chater, N. (2002). Relative informativeness of quantifiers used in syllogistic reasoning. Mem. Cogn. 30, 138–149. doi: 10.3758/bf03195273

Pendleton, M. G., Heaton, R. K., Lehman, R. A., and Hulihan, D. (1982). Diagnostic utility of the Thurstone word fluency test in neuropsychological evaluations. J. Clin. Neuropsychol. 4, 307–317. doi: 10.1080/01688638208401139

Pezzelle, S., Bernardi, R., and Piazza, M. (2018). Probing the mental representation of quantifiers. Cognition 181, 117–126. doi: 10.1016/j.cognition.2018.08.009

Pezzelle, S., and Fernández, R. (2023). Semantic adaptation to the interpretation of gradable adjectives via active linguistic interaction. Cogn. Sci. 47:e13248. doi: 10.1111/cogs.13248

Potagas, C., Kasselimis, D., and Evdokimidis, I. (2011). Short-term and working memory impairments in aphasia. Neuropsychologia 49, 2874–2878. doi: 10.1016/j.neuropsychologia.2011.06.013

Ralph, M. A. L., Jefferies, E., Patterson, K., and Rogers, T. T. (2017). The neural and computational bases of semantic cognition. Nat. Rev. Neurosci. 18, 42–55. doi: 10.1038/nrn.2016.150

Ramotowska, S., Steinert-Threlkeld, S., van Maanen, L., and Szymanik, J. (2023). Uncovering the structure of semantic representations using a computational model of decision-making. Cogn. Sci. 47:e13234. doi: 10.1111/cogs.13234

Rogalsky, C., Poppa, T., Chen, K.-H., Anderson, S. W., Damasio, H., Love, T., et al. (2015). Speech repetition as a window on the neurobiology of auditory-motor integration for speech: a voxel-based lesion symptom mapping study. Neuropsychologia 71, 18–27. doi: 10.1016/j.neuropsychologia.2015.03.012

Schöller, A., and Franke, M. (2016). How many manys? Exploring semantic theories with data-driven computational models. Proceedings of Sinn und Bedeutung 20, 622–639. Retrieved from https://ojs.ub.uni-konstanz.de/sub/index.php/sub/article/view/285

Schöller, A., and Franke, M. (2017). Semantic values as latent parameters: testing a fixed threshold hypothesis for cardinal readings of few and many. Linguist. Vanguard 3.

Schuchard, J., Nerantzini, M., and Thompson, C. K. (2017). Implicit learning and implicit treatment outcomes in individuals with aphasia. Aphasiology 31, 25–48. doi: 10.1080/02687038.2016.1147526

Schuchard, J., and Thompson, C. K. (2014). Implicit and explicit learning in individuals with agrammatic aphasia. J. Psycholinguist. Res. 43, 209–224. doi: 10.1007/s10936-013-9248-4

Schwartz, M. F., Kimberg, D. Y., Walker, G. M., Faseyitan, O., Brecher, A., Dell, G. S., et al. (2009). Anterior temporal involvement in semantic word retrieval: voxel-based lesion-symptom mapping evidence from aphasia. Brain 132, 3411–3427. doi: 10.1093/brain/awp284

Seger, C. A. (1994). Implicit learning. Psychol. Bull. 115, 163–196. doi: 10.1037/0033-2909.115.2.163

Shikhare, S., Heim, S., Klein, E., Huber, S., and Willmes, K. (2015). Processing of numerical and proportional quantifiers. Cogn. Sci. 39, 1504–1536. doi: 10.1111/cogs.12219

Stockert, A., and Saur, D. (2017). Aphasie: eine neuronale Netzwerkerkrankung [Aphasia: a neuronal network disorder]. Nervenarzt 88, 866–873. doi: 10.1007/s00115-017-0356-5

Stockert, A., Wawrzyniak, M., Klingbeil, J., Wrede, K., Kümmerer, D., Hartwigsen, G., et al. (2020). Dynamics of language reorganization after left temporo-parietal and frontal stroke. Brain J. Neurol. 143, 844–861. doi: 10.1093/brain/awaa023

Sung, J. E., McNeil, M. R., Pratt, S. R., Dickey, M. W., Hula, W. D., Szuminsky, N. J., et al. (2009). Verbal working memory and its relationship to sentence-level reading and listening comprehension in persons with aphasia. Aphasiology 23, 1040–1052. doi: 10.1080/02687030802592884

Thompson-Schill, S. L., D'Esposito, M., Aguirre, G. K., and Farah, M. J. (1997). Role of left inferior prefrontal cortex in retrieval of semantic knowledge: a reevaluation. Proc. Natl. Acad. Sci. USA 94, 14792–14797. doi: 10.1073/pnas.94.26.14792

Thompson-Schill, S. L., D'Esposito, M., and Kan, I. P. (1999). Effects of repetition and competition on activity in left prefrontal cortex during word generation. Neuron 23, 513–522. doi: 10.1016/s0896-6273(00)80804-1

Thompson-Schill, S. L. (2003). Neuroimaging studies of semantic memory: inferring “how” from “where”. Neuropsychologia 41, 280–292. doi: 10.1016/S0028-3932(02)00161-6

Thompson-Schill, S. L., Swick, D., Farah, M. J., D'Esposito, M., Kan, I. P., and Knight, R. T. (1998). Verb generation in patients with focal frontal lesions: a neuropsychological test of neuroimaging findings. Proc. Natl. Acad. Sci. USA 95, 15855–15860. doi: 10.1073/pnas.95.26.15855

Turken, A. U., and Dronkers, N. F. (2011). The neural architecture of the language comprehension network: converging evidence from lesion and connectivity analyses. Front. Syst. Neurosci. 5:1. doi: 10.3389/fnsys.2011.00001

Vadinova, V., Buivolova, O., Dragoy, O., van Witteloostuijn, M., and Bos, L. S. (2020). Implicit-statistical learning in aphasia and its relation to lesion location. Neuropsychologia 147:107591. doi: 10.1016/j.neuropsychologia.2020.107591

Vandenberghe, R., Price, C., Wise, R., Josephs, O., and Frackowiak, R. S. (1996). Functional anatomy of a common semantic system for words and pictures. Nature 383, 254–256. doi: 10.1038/383254a0

von Fintel, K., and Keenan, E. L. (2018). Determiners, Conservativity, witnesses. J. Semant. 35, 207–217. doi: 10.1093/jos/ffx018

Wacker, A., Holder, M., Will, B. E., Winkler, P. A., and Ilmberger, J. (2002). Vergleich von Aachener Aphasie-Test, klinischer Untersuchung und Aachener Aphasie-Bedside-Test bei Hirntumorpatienten [Comparison of the Aachen Aphasia Test, clinical examination, and Aachen Aphasia Bedside Test in patients with brain tumors]. Nervenarzt 73, 765–769. doi: 10.1007/s00115-002-1358-4

Walker, G. M., Schwartz, M. F., Kimberg, D. Y., Faseyitan, O., Brecher, A., Dell, G. S., et al. (2011). Support for anterior temporal involvement in semantic error production in aphasia: new evidence from VLSM. Brain Lang. 117, 110–122. doi: 10.1016/j.bandl.2010.09.008

Wei, W., Chen, C., Yang, T., Zhang, H., and Zhou, X. (2014). Dissociated neural correlates of quantity processing of quantifiers, numbers, and numerosities. Hum. Brain Mapp. 35, 444–454. doi: 10.1002/hbm.22190

Willmes, K., Poeck, K., Weniger, D., and Huber, W. (1983). Facet theory applied to the construction and validation of the Aachen aphasia test. Brain Lang. 18, 259–276. doi: 10.1016/0093-934X(83)90020-2

Wilson, S. M., Eriksson, D. K., Yen, M., Demarco, A. T., Schneck, S. M., and Lucanie, J. M. (2019). Language Mapping in Aphasia. J. Speech Lang. Hear. Res. 62, 3937–3946. doi: 10.1044/2019_JSLHR-L-RSNP-19-0031

Wilson, S. M., Henry, M. L., Besbris, M., Ogar, J. M., Dronkers, N. F., Jarrold, W., et al. (2010). Connected speech production in three variants of primary progressive aphasia. Brain 133, 2069–2088. doi: 10.1093/brain/awq129

Wright, H. H., and Fergadiotis, G. (2012). Conceptualizing and measuring working memory and its relationship to aphasia. Aphasiology 26, 258–278. doi: 10.1080/02687038.2011.604304

Wright, H. H., and Shisler, R. J. (2005). Working memory in aphasia: theory, measures, and clinical implications. Am. J. Speech Lang. Pathol. 14, 107–118. doi: 10.1044/1058-0360(2005/012)

Yourganov, G., Smith, K. G., Fridriksson, J., and Rorden, C. (2015). Predicting aphasia type from brain damage measured with structural MRI. Cortex 73, 203–215. doi: 10.1016/j.cortex.2015.09.005

Zuber, R., and Keenan, E. L. (2019). A note on conservativity. J Semant 36, 573–582. doi: 10.1093/jos/ffz007

Keywords: quantifier, semantics, aphasia, flexibility, adaptation, feedback, learning

Citation: Reißner B, Grohmann W, Peiseler N, Pinho J, Hußmann K, Werner CJ and Heim S (2024) Quantifier processing and semantic flexibility in patients with aphasia. Front. Psychol. 15:1328853. doi: 10.3389/fpsyg.2024.1328853

Received: 27 October 2023; Accepted: 10 June 2024;
Published: 18 July 2024.

Edited by:

Maria Garraffa, University of East Anglia, United Kingdom

Reviewed by:

Prakash Mondal, Indian Institute of Technology Hyderabad, India
Marco Calabria, Open University of Catalonia, Spain

Copyright © 2024 Reißner, Grohmann, Peiseler, Pinho, Hußmann, Werner and Heim. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Stefan Heim, s.heim@fz-juelich.de
