
ORIGINAL RESEARCH article

Front. Psychol., 26 August 2022
Sec. Psychology of Language
This article is part of the Research Topic Variability in Language Predictions: Assessing the Influence of Speaker, Text and Experimental Method.

Rapid adaptation of predictive models during language comprehension: Aperiodic EEG slope, individual alpha frequency and idea density modulate individual differences in real-time model updating

Ina Bornkessel-Schlesewsky1*, Isabella Sharrad1, Caitlin A. Howlett2, Phillip M. Alday3, Andrew W. Corcoran4,5, Valeria Bellan1,2, Erica Wilkinson2, Reinhold Kliegl6, Richard L. Lewis7,8, Steven L. Small9, Matthias Schlesewsky1
  • 1Cognitive Neuroscience Laboratory, Australian Research Centre for Interactive and Virtual Environments, University of South Australia, Adelaide, SA, Australia
  • 2Innovation, Implementation and Clinical Translation (IIMPACT) in Health, University of South Australia, Adelaide, SA, Australia
  • 3Beacon Biosignals, Boston, MA, United States
  • 4Cognition and Philosophy Laboratory, Monash University, Melbourne, VIC, Australia
  • 5Monash Centre for Consciousness and Contemplative Studies, Monash University, Melbourne, VIC, Australia
  • 6Division of Training and Movement Science, University of Potsdam, Potsdam, Germany
  • 7Department of Psychology, University of Michigan, Ann Arbor, MI, United States
  • 8Weinberg Institute for Cognitive Science, University of Michigan, Ann Arbor, MI, United States
  • 9School of Behavioral and Brain Sciences, University of Texas at Dallas, Dallas, TX, United States

Predictive coding provides a compelling, unified theory of neural information processing, including for language. However, there is insufficient understanding of how predictive models adapt to changing contextual and environmental demands and the extent to which such adaptive processes differ between individuals. Here, we used electroencephalography (EEG) to track prediction error responses during a naturalistic language processing paradigm. In Experiment 1, 45 native speakers of English listened to a series of short passages. Via a speaker manipulation, we introduced changing intra-experimental adjective order probabilities for two-adjective noun phrases embedded within the passages and investigated whether prediction error responses adapt to reflect these intra-experimental predictive contingencies. To this end, we calculated a novel measure of speaker-based, intra-experimental surprisal (“speaker-based surprisal”) as defined on a trial-by-trial basis and by clustering together adjectives with a similar meaning. N400 amplitude at the position of the critical second adjective was used as an outcome measure of prediction error. Results showed that N400 responses attuned to speaker-based surprisal over the course of the experiment, thus indicating that listeners rapidly adapt their predictive models to reflect local environmental contingencies (here: the probability of one type of adjective following another when uttered by a particular speaker). Strikingly, this occurs in spite of the wealth of prior linguistic experience that participants bring to the laboratory. Model adaptation effects were strongest for participants with a steep aperiodic (1/f) slope in resting EEG and low individual alpha frequency (IAF), with idea density (ID) showing a more complex pattern. These results were replicated in a separate sample of 40 participants in Experiment 2, which employed a highly similar design to Experiment 1. Overall, our results suggest that individuals with a steep aperiodic slope adapt their predictive models most strongly to context-specific probabilistic information. Steep aperiodic slope is thought to reflect low neural noise, which in turn may be associated with higher neural gain control and better cognitive control. Individuals with a steep aperiodic slope may thus be able to more effectively and dynamically reconfigure their prediction-related neural networks to meet current task demands. We conclude that predictive mechanisms in language are highly malleable and dynamic, reflecting both the affordances of the present environment as well as intrinsic information processing capabilities of the individual.

1. Introduction

Predictive coding (e.g., Friston, 2005, 2009) provides a compelling theory of how the human brain processes information. Within a unified account of sensation, cognition and action (e.g., Clark, 2013), it posits that the brain utilizes generative predictive models to actively infer the causes of its sensory inputs. In other words, perception involves the brain using its internal model of the world to generate predictions about expected upcoming sensory input, which are then compared to the actual incoming sensory signals. In line with the “Bayesian brain hypothesis” (e.g., Knill and Pouget, 2004; Frith, 2007; Sanborn and Chater, 2016), this is viewed as a process of (unconscious) probabilistic inference: the prior belief arising from a probabilistic generative model is combined with the sensory evidence to yield a posterior belief (the updated model). Predictions flow from higher to lower levels of a hierarchically organized cortical architecture (via feedback connections) and prediction errors are propagated up the cortical hierarchy (via feedforward connections) to engender model updates at higher levels. While predictions at “lower” levels pertain directly to specific properties of the incoming sensory information, predictions at higher levels are more abstract and can span longer timescales (Hohwy, 2013). In this highly efficient coding scheme, sensory information need only be represented to the extent that it is not predicted (Rao and Ballard, 1999). In other words, prediction errors serve as a proxy for sensory information (Feldman and Friston, 2010; Clark, 2013)1. This effectively amounts to signal compression as only the non-predicted parts of the signal need to be transmitted. Overall, the architecture strives to minimize prediction errors.

Crucially, the relative weighting of a prediction error (PE) vis-à-vis the top-down predictive model depends both on the noisiness of the signal (Clark, 2013) and the (un)certainty of the prediction (Feldman and Friston, 2010; Vilares and Kording, 2011). This is known as precision weighting: precision, which is defined as the inverse of variance, reflects the confidence or certainty associated with a belief or a sensory input (Friston, 2009; Feldman and Friston, 2010; Adams et al., 2013). For example, when the sensory evidence conflicts with a prior belief, the degree to which the prior will be shifted toward the sensory evidence in forming the posterior belief depends on the certainty vested in the sensory signal (for a useful illustration, see Figure 1 in Adams et al., 2013). Thus, high-precision (i.e., low uncertainty) prediction errors are associated with higher gain (Friston, 2009) and consequently have a more substantial impact on model updating.
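For the simplest case in which both the prior belief and the sensory evidence are Gaussian, this precision weighting can be written out explicitly. The formula below is the standard Gaussian belief-updating rule rather than an equation taken from the works cited above: the posterior mean is a precision-weighted average of the prior mean and the observed evidence, with precision defined as $\pi = 1/\sigma^2$:

$$\mu_{\mathrm{post}} = \frac{\pi_{\mathrm{prior}}\,\mu_{\mathrm{prior}} + \pi_{\mathrm{sens}}\,x}{\pi_{\mathrm{prior}} + \pi_{\mathrm{sens}}}$$

When the sensory precision $\pi_{\mathrm{sens}}$ is high relative to the prior precision, the posterior shifts strongly toward the evidence $x$ (equivalently, the prediction error $x - \mu_{\mathrm{prior}}$ receives a large weight); when the prior is more precise, the same prediction error produces little belief updating.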

In addition, previous work suggests that the top-down/bottom-up balance changes across the lifespan (Moran et al., 2014) and in non-neurotypical populations (e.g., schizophrenia; see Fletcher and Frith, 2009; Adams et al., 2013). Moran et al. (2014) show that older adults tend to weight model predictions more strongly than younger adults. This means that, when faced with unpredicted sensory input, older adults will attribute higher precision to prior beliefs vis-à-vis the sensory evidence and thereby show a lower rate of learning or model adaptation than younger adults. Moran and colleagues suggest that this protects against the overfitting of internal models to the input, thus resulting in less complex models. For positive symptoms of schizophrenia (hallucinations and delusions), by contrast, Fletcher and Frith (2009) suggest that these “are caused by an abnormality in the brain's inferencing mechanisms, such that new evidence (including sensations) is not properly integrated, leading to false prediction errors” (p.56). Using simulations, Adams et al. (2013) show that this can be understood as resulting from less precise top-down predictions, thus “rendering everything relatively surprising” (p.13), including sensations that should not be surprising (e.g., self-generated actions); see Clark (2015) for detailed discussion.

These observations suggest that different weightings of top-down (prior) and bottom-up (sensory evidence) information can be a source of individual differences in sensory processing/perceptual inference, specifically in regard to how individuals from different populations adapt their predictive models to changing environmental contingencies. With the present study, we aimed to examine whether such inter-individual differences can also be observed in young, healthy adults (i.e., within the population most typically examined in cognitive neuroscience experiments). We used language as a test domain in which to examine this hypothesis. As a means of studying model adaptation, we investigated individual differences in the extent to which language-related brain responses (the N400 event-related potential) adapt to context-specific probabilistic information (“surprisal") as determined by the experimental environment. In the following, we will first introduce prediction-related phenomena in language and how these can be couched within the predictive coding framework, before turning to a discussion of potential predictors for individual differences in predictive language processing. Finally, we introduce the present study and our hypotheses.

1.1. Prediction and predictive coding in language

Language involves a plethora of predictable information sources across a range of different levels. Here, we focus mostly on the sentence level, as this is the level of interest to the current study. When words are combined into sentences, inter-word dependencies give rise to predictability in various ways. For examples, see the Supplementary materials. Note that we use predictability here rather than prediction to make clear that we are referring to the probabilistic dependencies within the structure of language rather than any putative processing mechanisms; for overviews of probabilistic modeling in psycholinguistics, see, for example, Jurafsky (2003) and Chater and Manning (2006). Experience-based, probabilistic information sources—for example, that a determiner (e.g., “the”) will at some point be followed by a noun (e.g., “apple”)—can be used as priors within a predictive coding architecture. This type of approach has been implemented in computational models of language processing focusing on surprisal or other information-theoretic notions (e.g., Hale, 2006; Levy, 2008); for a recent review, see Hale (2016). The notion of surprisal, which reflects how unexpected a word is given the context in which it appears, is closely related to that of prediction errors in predictive coding. Given a sequence of words $w_1, w_2, \ldots, w_t$, the surprisal of word $w_t$ is defined as the negative logarithm of the probability of that word's occurrence, given the preceding words $w_1, \ldots, w_{t-1}$:

$$\mathrm{surprisal}(w_t) = -\log P(w_t \mid w_1, \ldots, w_{t-1})$$
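To make the definition concrete, the following minimal Python sketch computes surprisal from a hypothetical table of bigram probabilities; the probabilities and words are illustrative placeholders, not values from the corpora used later in this paper:

```python
import math

# Hypothetical conditional probabilities P(w_t | w_{t-1}); illustrative values only.
bigram_prob = {
    ("huge", "gray"): 0.08,    # expected order: size before color
    ("gray", "huge"): 0.002,   # unexpected order
}

def surprisal(prev_word: str, word: str) -> float:
    """Return -log P(word | prev_word); natural logarithm used here for illustration."""
    return -math.log(bigram_prob[(prev_word, word)])

print(surprisal("huge", "gray"))  # low surprisal for the expected continuation
print(surprisal("gray", "huge"))  # high surprisal for the unexpected continuation
```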

Surprisal has been linked to neurophysiological correlates of language processing, particularly the N400 event-related potential (ERP) component (Frank et al., 2015; Kuperberg, 2016). There have also been explicit attempts to link speech and language processing to predictive coding architectures (e.g., Pickering and Garrod, 2007, 2013; Skipper et al., 2007; Poeppel et al., 2008; Rauschecker and Scott, 2009; Bornkessel-Schlesewsky et al., 2015b). In addition, several studies suggest that probabilistic information regarding higher-order language-related information is used to anticipate sensory input (Dikker et al., 2010; Dikker and Pylkkänen, 2011), a finding which is closely in line with the assumptions of the predictive coding framework.

Nevertheless, prediction as a concept has remained controversial in the cognitive neuroscience of language processing, particularly with regard to the N400; see Kuperberg and Jaeger (2016) for arguments in favor, and Van Petten and Luka (2012) for arguments against. One of the arguments most often used against active prediction—i.e., prediction that goes beyond the preactivation of a word through a semantic network (or similar) and specifically the explicit prediction of a single specific word—is that there is little evidence that N400 amplitude reflects the error signal resulting from a failed prediction. Rather, N400 amplitude appears to be attenuated with increasing predictability. According to Van Petten and Luka (2012), “current data suggest only that N400 amplitudes are reduced in the presence of supportive semantic context and provide little hint that amplitudes are increased when a hypothesis/expectation/prediction is disconfirmed. From our starting premise that predictions should generate both benefits and costs (on different occasions), the apparent absence of costs is problematic" (p.180). They view this as evidence that the N400 reflects (passive) preactivation rather than (active) prediction, with prediction manifesting itself in other ERP components, most notably late positivities with a frontal scalp distribution.

We contend, however, that this pattern of results for the N400 is, in fact, fully in line with the assumptions of a predictive coding model. Recall that, in the typical implementation of this type of model, only error signals are transmitted via feedforward connections because predictable sensory input is “canceled out” by top-down activity encoding the relevant predictions. Thus, a reduced signal is transmitted when the input is, to some extent, predictable. By contrast, in the absence of any predictability, the complete sensory information associated with an input item, say a word, needs to be conveyed: an entirely unpredicted/unpredictable word is associated with the largest prediction error signal. When prior context leads to a certain degree of predictability (or preactivation), prediction error is reduced. In this way, we see the attenuation of prediction errors for predicted vs. unpredictable input rather than an increased error signal for a prediction violation (again, in comparison to a context without any predictability). The pattern of N400 effects thus exactly mirrors what one would expect to observe under typical implementations of a predictive coding architecture (for detailed discussion, see Bornkessel-Schlesewsky and Schlesewsky, 2019). Indeed, predictive coding neatly accounts for the well-known observation that N400 amplitude decreases for unexpected words that match the expected word in regard to certain features (e.g., semantic category, Federmeier and Kutas, 1999) or that show a certain degree of form overlap with the expected word (e.g., via orthographic neighborhood, Laszlo and Federmeier, 2009, 2011). In these cases, some—but not all—aspects of the incoming input are explained away by the generative predictive model, thereby resulting in an error signal that is intermediary between that for a highly predictable item and an unpredictable item that does not share any features with the most expected continuation. This suggests that the N400 is a composite response that combines error signals at different levels; cf. Bornkessel-Schlesewsky and Schlesewsky (2013), Bornkessel-Schlesewsky and Schlesewsky (2019), and Frank and Willems (2017).

Bornkessel-Schlesewsky and Schlesewsky (2019) proposed that, more specifically, the N400 reflects a precision-weighted error signal. This account builds on the extensive literature linking the mismatch negativity (MMN) to prediction error processing in the auditory domain (e.g., Friston, 2005; Garrido et al., 2009; Moran et al., 2014) and, more specifically, to precision-weighted error responses (Todd et al., 2011, 2013, 2014). By varying the temporal stability of rules underlying the structure of sound sequences, Todd and colleagues showed that prediction-error-related MMN effects respond to the perceived salience of events and that this is influenced both by rule stability and by rule primacy (i.e., which rule was learned first). Bornkessel-Schlesewsky and Schlesewsky (2019) argue that the N400 reflects similar processes but for more complex stimuli—hence its longer latency in comparison to the MMN.

The claim that N400 amplitude correlates with a precision-weighted error signal is supported by several observations. Firstly, N400 effects vary across languages depending on the informativity of a particular feature (e.g., animacy) for sentence-level interpretation in that language (Bornkessel-Schlesewsky and Schlesewsky, 2019, 2020). This provides a natural link to precision weighting: recall that precision is defined as the inverse of variance and variance in the form-to-meaning mapping is clearly reduced for features that are highly informative (cf. work in the context of the Competition Model, e.g., Bates et al., 1982, 2001; MacWhinney et al., 1984). Secondly, N400 amplitude shows a further property that is expected in the context of a precision-weighted error signal account, namely a modulation by attention. As described in detail by Feldman and Friston (2010), selective attention increases the precision associated with an upcoming sensory stimulus. This can lead to an amplification of the prediction error signal. At a microcircuit level, prediction error amplification is thought to be implemented via an increased gain of error-encoding units (most likely pyramidal cells in higher cortical layers; cf. Bastos et al., 2012). Similarly, though acknowledging the vastly different level of measurement at play here, N400 amplitude for incongruent (unpredictable) vs. congruent (more predictable) words within a sentence is increased when the attentional focus on a word is increased via information structural (focus) and prosodic (accent) information (Wang et al., 2011).

1.2. Precision-weighting as a source of inter-individual differences in predictive coding and possible predictors for individual differences in language

We have sketched out above how precision weighting of prediction errors serves to dynamically adapt a predictive coding architecture to the estimated uncertainties of prior expectations and sensory stimuli, how such an architecture provides a natural locus for inter-individual differences (e.g., in aging or, in a different manner, in schizophrenia), and how these differences are measurable using the MMN ERP component. On the basis of the claims by Bornkessel-Schlesewsky and Schlesewsky (2019) about the functional similarity of the MMN and N400, we would also hypothesize the presence of such differences in N400 effects during language processing. Moreover, given that precision weighting of priors and sensory information may plausibly differ between individuals, we will examine whether such differences manifest themselves even in a population typically considered to be relatively homogeneous, namely young healthy adults. In the following, we will introduce the three main measures that we used as predictors of individual differences in the current study: Idea Density, Individual Alpha Frequency and Aperiodic (1/f) Activity.

1.2.1. Idea density

Idea Density (ID; also known as Propositional Density or P-Density: Kintsch and Keenan, 1973) measures the number of ideas expressed relative to the total number of words used, as derived from written or oral text samples. Ideas are operationalised as predicates: for example, verbs, adjectives and negations are all counted as ideas. ID is thought to reflect the efficiency of linguistic information encoding (Cheung and Kemper, 1992; Kemper et al., 2001b; Iacono et al., 2009; Engelman et al., 2010; Farias et al., 2012) and longitudinal evidence shows that ID measures collected from young adults predict cognitive performance in older adulthood (Snowdon et al., 1996). As discussed by Kemper et al. (2001b), ID is not correlated with high school English or maths grades nor with level of educational attainment (see also Ferguson et al., 2014; Spencer et al., 2015). Kemper and colleagues suggest that “low P-Density in young adulthood may reflect suboptimal neurocognitive development, which, in turn, may increase susceptibility to age-related decline due to Alzheimer's or other diseases" (Kemper et al., 2001a, p.602). ID is relatively stable across the adult lifespan but declines in older adulthood (for results from a large-scale study involving texts from over 19,000 respondents, see Ferguson et al., 2014).
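As a rough illustration of how such a measure can be operationalised (the present study used the automated CPIDR rater described in the Methods, not this sketch), idea density can be approximated as the number of predicate-like words per 10 words of a part-of-speech-tagged text sample; the tag set and example below are hypothetical simplifications of the proposition-counting rules implemented in CPIDR:

```python
# Minimal sketch of an idea-density estimate over a POS-tagged text sample.
# Penn Treebank-style tags for verbs, adjectives, adverbs, prepositions and
# conjunctions are counted as "ideas" (predicates).
PROPOSITIONAL_TAGS = {
    "VB", "VBD", "VBG", "VBN", "VBP", "VBZ",  # verbs
    "JJ", "JJR", "JJS",                       # adjectives
    "RB", "RBR", "RBS",                       # adverbs
    "IN", "CC",                               # prepositions, conjunctions
}

def idea_density(tagged_words):
    """Predicate-like words per 10 words (one common convention for reporting ID)."""
    ideas = sum(1 for _, tag in tagged_words if tag in PROPOSITIONAL_TAGS)
    return 10 * ideas / len(tagged_words)

sample = [("the", "DT"), ("huge", "JJ"), ("gray", "JJ"), ("elephant", "NN"),
          ("waved", "VBD"), ("slowly", "RB")]
print(idea_density(sample))  # 4 idea words out of 6 -> ID of about 6.7 per 10 words
```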

Given the link between ID and efficiency of linguistic information encoding, we hypothesized that ID may provide a proxy for the quality of an individual's language model—our rationale being that efficient encoding requires high-quality linguistic representations. If this is indeed the case, high-ID individuals will have a higher precision language model than low-ID individuals and may thus weight model-based predictions more strongly than unexpected input information in the case of a prediction error. This could entail that high-ID individuals adapt their predictive language models more slowly to local contextual affordances than low-ID individuals, in a similar manner to the slower model updating by older adults reported by Moran et al. (2014).

1.2.2. Individual alpha frequency

Evidence is accruing that perception and cognition are discrete rather than continuous (VanRullen, 2016). We perceive the world by discretely sampling sensory input. In the brain, sampling corresponds to oscillations: fluctuations between states of high and low neuronal receptivity, which are coordinated between neurons and neural assemblies to optimize communication between them (Buzsáki and Draguhn, 2004; Fries, 2005). Importantly, the speed of oscillatory activity differs between individuals. In particular, the peak frequency of the dominant alpha rhythm of the human EEG (~8–13 Hz) varies between approximately 9 and 11.5 Hz in young adults (Klimesch, 1999). This variation in individual alpha frequency (IAF) is a trait-like characteristic (Grandy et al., 2013b), which shows high heritability (Posthuma et al., 2001; Smit et al., 2006) and test-retest reliability (Gasser et al., 1985; Kondacs and Szabó, 1999). IAF variability has ramifications not only for the alpha band, but also for the adjacent theta (~4–7 Hz) and beta (~15–30 Hz) rhythms. Consequently, IAF determines an individual's sensory sampling rate and this has consequences for the resolution with which sensory input is analyzed and represented. Samaha and Postle (2015) recently reported a compelling demonstration of this relation for the visual modality. They presented participants with two visual flashes in rapid succession and manipulated the inter-stimulus interval (ISI) between them. At very short ISIs, the two visual stimuli fuse into a single percept. Crucially, inter-individual variability in the two-flash-fusion-threshold was correlated with IAF; for a related demonstration of IAF being causally related to the length of the temporal window within which multimodal stimuli are integrated with one another, see Cecere et al. (2015).

In addition to correlating with the resolution of sensory sampling, IAF is associated with a range of higher cognitive abilities. High-IAF individuals process information more quickly (Surwillo, 1961, 1963), and perform better on memory tasks (Klimesch, 1999) and general intelligence measures (g) (Grandy et al., 2013a). For a different result see Ociepka et al. (2022), who found a relationship between IAF and processing speed but not between IAF and general intelligence. IAF decreases with age from young adulthood onwards (Köpruner et al., 1984; Klimesch, 1999), thus accompanying the well-known decline of many cognitive abilities in older adulthood (e.g., Hedden and Gabrieli, 2004; Salthouse, 2011). Previous work also indicates that language processing and language learning strategies differ between high- and low-IAF individuals (Bornkessel et al., 2004; Bornkessel-Schlesewsky et al., 2015a; Kurthen et al., 2020; Nalaye et al., 2022).

On account of its link to the rate of sensory sampling, we hypothesized that IAF may serve as a proxy for the general quality (i.e., resolution, signal-to-noise ratio) of the sensory input, which, in turn, influences more complex aspects of information processing. If true, this would mean that incoming sensory information is associated with a higher precision for high-IAF individuals in comparison to low-IAF individuals. In the case of a prediction error, high-IAF individuals may thus weight unexpected input information more strongly vis-à-vis model predictions than low-IAF individuals. Consequently, high-IAF individuals may adapt their predictive language models more quickly to local contextual affordances than low-IAF individuals.

1.2.3. Aperiodic (1/f) activity

Complementing the examination of individual differences in oscillatory neural activity (e.g., via IAF), a growing body of literature has begun to investigate the possible role of individual differences in non-oscillatory (aperiodic) brain activity. Aperiodic activity follows a $P \propto 1/f^{\beta}$ power law (He, 2014), where $P$ corresponds to power, $f$ to frequency and $\beta$ is the so-called “power-law exponent.” This overall relationship of lower frequencies in the human EEG being associated with higher amplitudes (power) than higher frequencies has long been recognized. Only more recently, however, has it become clear that the power law exponent parameter—which governs the steepness of the power decrease with increasing frequency—changes dynamically depending on a variety of factors including age and task, as well as an individual's cognitive state (e.g., He, 2014; Voytek et al., 2015; Donoghue et al., 2020). In addition to potentially being clinically relevant (He, 2014), this variability may also reveal individual differences in cognitive processing in healthy individuals. For example, Ouyang et al. (2020) reported that, when both aperiodic (1/f) slope and alpha activity were taken into account, aperiodic slope rather than alpha activity predicted individual differences in processing speed for an object recognition task. These authors thus suggest that previous observations of an association between alpha activity and processing speed may have been due to a confound between oscillatory and aperiodic activity in earlier analyses (cf. also Donoghue et al., 2020). In the domain of language processing, Dave et al. (2018) recently observed a modulation of prediction-related N400 effects by 1/f slope such that a steeper slope predicted more pronounced N400 effects. Further, Cross et al. (2022) found that the learning of certain types of grammatical rules in an artificial language is likewise predicted by inter-individual variability in 1/f slope.

Regarding potential mechanisms underlying the effects of aperiodic slope on cognitive processing, one prominent approach posits that steepness of the aperiodic slope reflects the degree of neural noise (Voytek et al., 2015). Specifically, highly synchronous neural spiking (equated with “lower neural noise") is thought to correlate with a steeper 1/f slope, while more asynchronous or aberrant firing (equated with “higher neural noise”) is associated with a flatter slope (Buzsáki et al., 2012; Voytek and Knight, 2015). This notion of neural noise may, in turn, be associated with the balance between excitatory and inhibitory activity within neural networks (e.g., Gao et al., 2017). As Voytek et al. (2015) show, aging is associated with a flattening of the 1/f slope and this physiological change may underlie effects of cognitive aging such as a slowing of processing speed.

It is important to acknowledge that, in the context of aperiodic activity estimates obtained from scalp EEG, any inferences drawn about individual differences in neural noise are indirect and must be viewed with a certain degree of caution. Nevertheless, we believe that the existing literature supports an association between scalp-recorded aperiodic slope estimates and neural noise, albeit indirectly. Freeman and Zhai (2009) successfully simulated 1/f slopes obtained from intracranial EEG via a computational model of mutual excitation among pyramidal cells. They concluded that “variation in the observed slope is attributed to variation in the level of the background activity that is homeostatically regulated by the refractory periods of the excitatory neurons” (Freeman and Zhai, 2009, p.97). Voytek et al. (2015) in turn demonstrated that 1/f slope and age show a similar relationship in intracranial and scalp EEG measures, thus supporting the association between scalp-recorded 1/f slope and neural noise.

In the context of the current study, we will examine the proposal by Dave et al. (2018) that more synchronous neural networks—as reflected in a steeper aperiodic slope—are associated with stronger predictive processing. If this proposal holds, we should observe a stronger reliance on top-down predictive models for individuals with a steeper 1/f slope and, consequently, a potentially slower adaptation of internal predictive models to current contextual affordances than for individuals with a shallower 1/f slope.

1.3. The present study

The present study examined how ID, IAF and aperiodic activity are related to prediction error signals in language processing. In Experiment 1, participants listened to 150 short passages (approximately 5 sentences in length) while their EEG was recorded. An example passage is given below:

Example of the passages presented to participants in the current study:

Florence was enjoying her long-awaited holiday in Singapore with her close friends. One of the activities she was most looking forward to was visiting the zoo, where she had the opportunity to ride a huge gray elephant. Although standing in the warm humid air was dreadful, being waved to through the enclosure by the zookeeper brought a smile to her face.

The critical passages (60%, i.e., 90 of 150) each contained two two-adjective noun phrases (“a huge gray elephant” and “the warm humid air” in the example above), which could either have an expected (canonical) or unexpected (non-canonical) order (e.g., canonical: “the huge gray elephant”; non-canonical: “the gray huge elephant”; for seminal work on ERP correlates of adjective order variations, see Kemmerer et al., 2007). With this manipulation, we intended to elicit prediction error responses due to the unexpectedness of the non-canonical adjective orders. In addition, we varied the probability of encountering non-canonical adjective orders by means of a speaker manipulation. Specifically, passages were recorded by two male speakers with varying probabilities of canonical orders. Thus, for the “canonical” speaker, approximately 70% of the critical 180 two-adjective noun phrases were presented to participants in canonical order, while for the “non-canonical” speaker, only approximately 30% were canonically ordered.

Building on the proposal that N400 amplitude reflects precision-weighted prediction error signals (Bornkessel-Schlesewsky and Schlesewsky, 2019), our primary outcome variable was the amplitude of the N400 event-related potential at the position of the critical second adjective within the two-adjective noun phrases embedded in our passages.

Through our experimental design, we aimed to examine inter-individual differences in the processing of prediction errors elicited by the non-canonical adjective orders. We used IAF, ID and aperiodic activity (1/f) as our primary predictors of individual differences as outlined above but also collected an additional battery of cognitive and linguistic tests (see the Methods section for further details). Furthermore, we included the speaker manipulation as an additional manipulation of prediction precision. Here, our rationale was that the high number of non-canonical adjective orders produced by the non-canonical speaker would call for adaptation of participants' existing language model, according to which a non-canonical order of two adjectives should be unexpected (cf. the notion of “active listening” put forward by Friston et al., 2021). Participants who adapt more quickly to the contingencies of the current input— i.e., more readily adapt their established predictive model in the face of prediction errors—should thus be expected to show N400 responses aligned with the experimental environment rather than their global language experience. As described above, we tentatively hypothesized that this readiness to adapt might be more pronounced in high-IAF and low-ID individuals on account of the high precision of the sensory input or low precision of the predictive language model, respectively. Individuals with a steep 1/f slope were expected to show a similar pattern to individuals with a high ID (i.e., slower model adaptation) on account of the link that has been postulated between lower neural noise (associated with a steeper 1/f slope) and stronger predictive processes (Dave et al., 2018). In spite of these hypotheses, this was an exploratory study given the complexity of the domain under examination and the fact that this research question has not yet been examined to date—neither in the area of language nor with respect to other cognitive domains.

Given the novelty of the research question, we also report a follow-up experiment with a similar experimental design (Experiment 2), in which we examined whether the results of Experiment 1 could be replicated.

2. Experiment 1

2.1. Methods

2.1.1. Participants

Forty-five young adults (31 female; mean age: 22.9 years, sd: 3.9, range: 18–33) participated in Experiment 1. Participants were right-handed, as assessed by the Edinburgh handedness inventory (Oldfield, 1971), and were native speakers of English who had not learnt another language prior to starting school. They reported no diagnosis of neurological or psychiatric conditions, normal hearing, and normal or corrected-to-normal vision. The experimental protocol was approved by the University of South Australia's Human Research Ethics Committee (protocol number 36348).

2.1.2. Materials

The critical materials for this experiment were 90 short passages (approximately 5 sentences in length), each of which contained two critical two-adjective noun phrases (NPs; e.g., “a huge gray elephant”). Critical NPs occurred at different positions in each passage so that their occurrence would not be predictable. The order of the prenominal adjectives was manipulated such that, in some cases, they adhered to the expected sequence of “value > size > dimension > various physical properties > color” (Kemmerer et al., 2007, p.240). We will refer to adjective orders adhering to this sequencing as canonical (C) in what follows and to those that do not as non-canonical (N). Passages were recorded by two male speakers of Australian English with the probability of adjectives in the critical NPs occurring in a canonical or a non-canonical order manipulated across speakers. Thus, when listening to the passages, participants were exposed to one speaker (henceforth: the canonical speaker) who produced more canonical than non-canonical orders (C:N ratio of 69%:31%) and another speaker (henceforth: the non-canonical speaker) who produced more non-canonical orders (C:N ratio of 31%:69%). To counterbalance the assignment of speakers to passages, we constructed two versions of the critical materials. Thus, canonicity of speaker varied both within subjects and within items, but the (non-canonical vs. canonical) speaker assignment was fixed throughout the course of each session. The distribution of canonical and non-canonical orders across speakers, versions and passages is shown in Table 1.

Table 1. Counterbalancing of canonical and non-canonical adjective orders across versions.

Each participant listened to the critical passages from one of the two versions interspersed with 60 filler passages in a pseudo-randomized order. The filler passages included a separate experimental manipulation involving passive sentences and relative clauses and did not contain any two-adjective noun phrases. Thus, every participant was presented with 150 passages in total.

To ensure that participants were listening attentively, they were presented with yes-no comprehension questions after approximately 1/3 of passages. An example comprehension question for the passage example above is: “Did the zookeeper wave at Florence?” (correct answer = yes).

2.1.3. Language models

The principal aim of the present study was to examine how individuals differ in the adaptation of their predictive models to the current environment during language processing. To this end, we focused on the processing of the second adjective (ADJ2) in the critical 2-adjective NPs embedded in the passages. We used bigram-based surprisal to quantify predictability of ADJ2 in the context of the preceding adjective. To allow us to estimate predictability at the level of adjective classes, we first established adjective clusters for our materials. This was accomplished using the following procedure, which was implemented in R (R Core Team, 2021) using the tidyverse (Wickham et al., 2019) and tidymodels (Kuhn and Wickham, 2020) collections of packages as well as the packages tidytext (Silge and Robinson, 2016) and widyr (Robinson, 2021). For package version numbers, please see the analysis scripts provided with the raw data (see Data Availability Statement).

Procedure for determining adjective clusters and calculating cluster-based surprisal:

1. We used pre-derived word vectors from van Paridon and Thompson (2021) to determine similarities between adjectives. Word vectors, also known as word embeddings, provide a numerical representation of word meaning. They are created by machine learning models, which learn lexical relationships from word co-occurrences in large text corpora. For a recent example of how word vectors may serve as useful representations of word meaning when investigating human language processing, see Pereira et al. (2018). Here, we used Van Paridon and Thompson's top 1 million vectors from a combined Wikipedia and Open Subtitles corpus.

2. To reduce dimensionality, we performed a principal components analysis (PCA), reducing the 300-dimensional word vectors from van Paridon and Thompson (2021) to 5 principal components (PCs).

3. Six adjective clusters were identified on the basis of the PCs using k-means clustering. The value of k=6 was selected via visual inspection of the total within-cluster sum of squares. Three of the six clusters are visualized in Figure 1 and a full list is provided in the Supplementary materials for Experiment 1.

4. Cluster-based unigram and bigram frequencies were computed as cluster-based sums of unigram and bigram counts from the Open Subtitles corpus for English (751 million words) as made available by van Paridon and Thompson (2021). From these, surprisal values for adjective 2 (ADJ2) in the context of adjective 1 (ADJ1) were calculated as:

$$\mathrm{surprisal}(\mathrm{ADJ2}) = -\log\left(\frac{\mathrm{ClusterBigramFrequency}(\mathrm{ADJ1},\,\mathrm{ADJ2})}{\mathrm{ClusterUnigramFrequency}(\mathrm{ADJ1})}\right)$$

Here, ClusterBigramFrequency(ADJ1, ADJ2) refers to the frequency with which two-adjective bigrams comprising a first adjective belonging to the cluster of ADJ1 and a second adjective belonging to the cluster of ADJ2 occur in the Open Subtitles corpus. ClusterUnigramFrequency(ADJ1) refers to the frequency with which adjectives belonging to the cluster of ADJ1 occur in the Open Subtitles corpus. In the remainder of the paper, we will refer to these corpus-based surprisal values as global surprisal.
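The following Python sketch illustrates the logic of steps 2–4 with scikit-learn (the original pipeline was implemented in R with the packages listed above); the random vectors and uniform counts are placeholders for the fastText vectors and Open Subtitles counts of van Paridon and Thompson (2021):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Placeholder stand-ins for the 300-dimensional word vectors of the adjectives
# occurring in the experimental materials.
adjectives = ["huge", "tiny", "gray", "red", "soft", "rough", "old", "new"]
vectors = rng.normal(size=(len(adjectives), 300))

# Steps 2-3: reduce to 5 principal components, then k-means clustering with k = 6.
pcs = PCA(n_components=5).fit_transform(vectors)
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(pcs)
cluster_of = dict(zip(adjectives, labels))

# Step 4: cluster-based surprisal from corpus counts. The counts below are
# uniform placeholders; in the actual analysis they are sums of corpus unigram
# and bigram counts over all adjectives in each cluster.
unigram_counts = {c: 10_000 for c in range(6)}
bigram_counts = {(c1, c2): 500 for c1 in range(6) for c2 in range(6)}

def cluster_surprisal(adj1: str, adj2: str) -> float:
    c1, c2 = cluster_of[adj1], cluster_of[adj2]
    return -np.log(bigram_counts[(c1, c2)] / unigram_counts[c1])

print(cluster_surprisal("huge", "gray"))
```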

Figure 1. Three adjective clusters produced by the current clustering procedure for Experiment 1. Clusters are visualized with regard to their variability on principal components PC1 and PC2. Note, for example, how the clustering procedure distinguishes color adjectives from other adjective types.

In a second step, we computed incremental surprisal for ADJ2 within the experimental context to be able to track how listeners' expectations change as a function of being exposed to the experimental environment. To track surprisal incrementally over the course of the experiment, we calculated the NP-by-NP cumulative intra-experimental frequencies for the ADJ1-ADJ2 bigram cluster and the ADJ1 unigram cluster and then computed surprisal as described above. This was done separately for each speaker, thus allowing us to examine to what extent participants' expectations adapted to the distributional properties of each of the two speakers within the experiment. We henceforth refer to this speaker-based measure of intra-experimental surprisal as speaker-based surprisal. Using speaker-based surprisal, we aimed to examine how participants' N400 responses—as an assumed proxy for precision-weighted prediction error signals— were modulated by the exposure to adjective order variations throughout the course of the experiment and by each speaker.
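The bookkeeping behind this incremental measure can be sketched as follows (a schematic Python reconstruction rather than the original analysis code; the add-one smoothing over the six ADJ2 clusters is an assumption made here so that the very first NP from each speaker yields a defined value):

```python
from collections import defaultdict
import math

N_CLUSTERS = 6  # number of adjective clusters (see the clustering procedure above)

class SpeakerSurprisal:
    """Cumulative, per-speaker cluster counts for NP-by-NP surprisal estimates."""

    def __init__(self):
        self.bigrams = defaultdict(int)   # (speaker, c1, c2) -> count so far
        self.unigrams = defaultdict(int)  # (speaker, c1) -> count so far

    def observe(self, speaker: str, c1: int, c2: int) -> float:
        """Surprisal of the ADJ2 cluster given the ADJ1 cluster and the NPs heard
        so far from this speaker; counts are updated after the estimate."""
        p = (self.bigrams[(speaker, c1, c2)] + 1) / (self.unigrams[(speaker, c1)] + N_CLUSTERS)
        self.bigrams[(speaker, c1, c2)] += 1
        self.unigrams[(speaker, c1)] += 1
        return -math.log(p)

tracker = SpeakerSurprisal()
print(tracker.observe("canonical_speaker", 2, 5))  # first NP: based on smoothing only
print(tracker.observe("canonical_speaker", 2, 5))  # repeated pattern: surprisal drops
```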

Corpus-based word (unigram) frequencies for ADJ2 were included in all analyses as a control variable. These were taken from the same unigram corpus as used for the global surprisal calculation above and log-transformed prior to inclusion in the analysis.

2.1.4. Behavioral individual differences measures

2.1.4.1. Idea density (ID)

Participants provided a written text sample in response to the prompt “Describe your favorite game.” This corresponds to the Essay Composition task of the Wechsler Individual Achievement Test—Australian and New Zealand Standardized, Third Edition (WIAT-III A&NZ; Pearson Clinical). From this text, we calculated ID using the automated Computerized Propositional Idea Density Rater (CPIDR; Brown et al., 2008).

2.1.4.2. Cognitive tests

Participants completed an additional battery of cognitive tests. These included:

• The two-subtest version of the Wechsler Abbreviated Scale of Intelligence—Second Edition (WASI-II; Pearson Clinical), comprising Vocabulary and Matrix reasoning tasks

• Three additional language-related subtests from the WIAT-III, namely Oral Word Fluency, Sentence Repetition and Sentence Composition

• A reading-span task (Daneman and Carpenter, 1980)

In accordance with our hypotheses, we focus on ID and the resting state EEG-based individual differences metrics (1/f slope and Individual Alpha Frequency; see below) as our primary measures of individual differences for the purposes of the present study.

2.1.5. Procedure

Participants completed two in-lab testing sessions: (1) a behavioral session comprising the cognitive tests/text sample production, and (2) an EEG session comprising the collection of resting-state EEG recordings as well as the main language comprehension task. Sessions were either completed on the same day, separated by a break (approximately 30 min), or on 2 days (with the second session completed within 7 days of the first session).

2.1.5.1. Behavioral session

In the behavioral session, after the consent process, participants completed a questionnaire to provide demographic, language and well-being details. They subsequently completed the cognitive tests as described above. The behavioral session took at most 1.5 h to complete.

2.1.5.2. EEG session

In the EEG session, participants were fitted with an EEG cap and underwent a 2-min eyes-open and 2-min eyes-closed resting state EEG recording prior to commencing the main task. For the main task, each trial commenced with the 500 ms presentation of a fixation asterisk in the center of a computer screen, after which the auditory presentation of a passage commenced via loudspeakers. After the auditory passage was complete, the fixation asterisk remained on screen for another 500 ms. Subsequently, participants were presented with a comprehension question in approximately 1/3 of all trials, to which they responded with “yes" or “no" using two buttons on a game controller. The assignment of “yes” and “no” responses to the left and right controller buttons was counterbalanced across participants and the maximal response time was set at 4,000 ms. For trials without a comprehension question, participants were asked to “Press the YES key to proceed.” Following the participant's response or after the allocated response time had elapsed, the next trial commenced after an inter-trial interval of 1,500 ms. Participants were asked to avoid any movements or blinks during the presentation of the fixation asterisk if possible.

Note that, as the intermittent comprehension questions only served to ensure that participants listened attentively, comprehension data was not analyzed in the present paper. Log files for the comprehension task are, however, provided with the raw data for the experiment (see Data Availability statement).

The 150 passages were presented in 5 blocks, between which participants took short self-paced breaks. Prior to commencing the main task, participants completed a short practice session. After the main task, the resting state recordings were repeated. Overall, the EEG session took approximately 3 h including electrode preparation and participant clean-up.

2.1.6. EEG recording and preprocessing

The EEG was recorded from 64 electrodes mounted inside an elastic cap (Quik-CapEEG) using a Neuroscan Synamps2 amplifier (Compumedics Neuroscan, Abbotsford, VIC, Australia). The electrooculogram (EOG) was recorded via electrodes placed at the outer canthi of both eyes as well as above and below the left eye. The EEG recording was sampled at 1,000 Hz and referenced to the right mastoid.

Data preprocessing was undertaken using MNE-Python version 0.23.0 (Gramfort et al., 2013, 2014). EEG data were re-referenced to an average reference and downsampled to 500 Hz prior to further processing. EOG artifacts were corrected using an ICA-based procedure, with independent components (ICs) found to correlate most strongly with EOG events (via the create_eog_epochs function in MNE) excluded. Raw data were filtered using a 0.1–30 Hz bandpass filter to exclude slow signal drifts and high-frequency noise. Epochs were extracted in a time window from –200 to 1,000 ms relative to critical word (ADJ2) onset, and mean single-trial amplitudes were extracted for the prestimulus (–200 to 0 ms) and N400 (300–500 ms) time windows using the retrieve function from the philistine Python package (Alday, 2018).
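A schematic version of this preprocessing pipeline in MNE-Python is sketched below. The file name, event extraction and ICA component-selection step are simplified placeholders (the published analysis identified EOG-related components via create_eog_epochs); the scripts accompanying the raw data remain the authoritative reference:

```python
import mne

raw = mne.io.read_raw_fif("sub-01_raw.fif", preload=True)  # hypothetical file name
raw.set_eeg_reference("average")   # re-reference to the average reference
raw.resample(500)                  # downsample to 500 Hz

# ICA-based EOG correction: exclude components that covary with the EOG channels.
ica = mne.preprocessing.ICA(n_components=20, random_state=0)
ica.fit(raw)
eog_indices, _ = ica.find_bads_eog(raw)
ica.exclude = eog_indices
raw = ica.apply(raw)

raw.filter(l_freq=0.1, h_freq=30.0)  # 0.1-30 Hz bandpass

# Epochs from -200 to 1,000 ms around ADJ2 onset, without traditional baselining.
events = mne.find_events(raw)        # assumes a stimulus channel carrying ADJ2 triggers
epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=1.0, baseline=None, preload=True)

# Single-trial mean amplitudes for the prestimulus and N400 windows
# (the published analysis used philistine's retrieve function for this step).
prestim = epochs.copy().crop(-0.2, 0.0).get_data().mean(axis=2)  # trials x channels
n400 = epochs.copy().crop(0.3, 0.5).get_data().mean(axis=2)
```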

2.1.6.1. Resting-state EEG-based individual differences measures: Individual alpha frequency (IAF) and aperiodic (1/f) activity

IAF and aperiodic slope estimates were calculated from participants' eyes-closed resting-state recordings.

To calculate IAF, we used a Python-based implementation (Alday, 2018) of the procedure described in Corcoran et al. (2018), drawing on electrodes P1, Pz, P2, PO3, POz, PO4, O1, Oz and O2. We estimated both peak alpha frequency (PAF) and center of gravity (COG) measures (cf. Corcoran et al., 2018, for discussion) and calculated, for each participant and measure, the mean of the pre- and post-task estimates. For participants who did not have estimable IAF values for one of the two recording sessions, their IAF estimate from the other session was used as their overall IAF metric. This was the case for 3 participants in Experiment 1.
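As a simplified illustration of what the PAF and COG measures capture (the actual estimates came from the philistine implementation of the Corcoran et al., 2018, procedure, which additionally applies Savitzky–Golay smoothing and quality checks), both can be sketched from a Welch power spectrum as follows:

```python
import numpy as np
from scipy.signal import welch

def iaf_estimates(eeg, sfreq, fmin=7.0, fmax=13.0):
    """Crude peak alpha frequency (PAF) and center of gravity (COG) estimates.

    eeg: array of shape (n_channels, n_samples) from an eyes-closed recording
    over posterior electrodes. This sketch omits the smoothing and validity
    checks of the procedure actually used in the study.
    """
    freqs, psd = welch(eeg, fs=sfreq, nperseg=int(4 * sfreq))  # 4-s Welch segments
    psd = psd.mean(axis=0)                                     # average over channels
    band = (freqs >= fmin) & (freqs <= fmax)
    paf = freqs[band][np.argmax(psd[band])]                    # frequency of the alpha peak
    cog = np.sum(freqs[band] * psd[band]) / np.sum(psd[band])  # power-weighted mean frequency
    return paf, cog

# Example with simulated data: 60 s of noise plus a 10 Hz rhythm on 9 channels.
sfreq = 500
t = np.arange(0, 60, 1 / sfreq)
eeg = np.random.randn(9, t.size) + 2.0 * np.sin(2 * np.pi * 10 * t)
print(iaf_estimates(eeg, sfreq))  # both estimates should fall close to 10 Hz
```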

Aperiodic (1/f) intercept and slope estimates were calculated in Python using the YASA toolbox (Vallat and Walker, 2021). YASA implements the irregular-resampling auto-spectral analysis (IRASA) method for separating oscillatory and aperiodic activity (Wen and Liu, 2016). As for IAF, by-participant intercept and slope estimates were computed as means of pre and post resting-state recordings from electrodes F7, F5, F3, F1, Fz, F2, F4, F6, F8, FT7, FC5, FC3, FC1, FCz, FC2, FC4, FC6, FT8, T7, C5, C3, C1, Cz, C2, C4, C6, T8, TP7, CP5, CP3, CP1, CPz, CP2, CP4, CP6, TP8, P7, P5, P3, P1, Pz, P2, P4, P6, P8, PO7, PO5, PO3, POz, PO4, PO6, PO8, O1, Oz, and O2.
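The aperiodic estimates themselves came from YASA's IRASA implementation; as a rough stand-in that conveys what the intercept and slope parameters describe, one can fit a straight line to the channel-averaged power spectrum in log-log space (unlike IRASA, this simple fit does not first remove oscillatory peaks, so it will be biased where alpha power is strong):

```python
import numpy as np
from scipy.signal import welch

def aperiodic_fit(eeg, sfreq, fmin=1.0, fmax=30.0):
    """Estimate 1/f intercept and slope via a linear fit in log-log space.

    eeg: array of shape (n_channels, n_samples) from a resting-state recording.
    A steeper (more negative) slope corresponds to a faster fall-off of power
    with frequency, interpreted in the main text as lower neural noise.
    """
    freqs, psd = welch(eeg, fs=sfreq, nperseg=int(4 * sfreq))
    psd = psd.mean(axis=0)                                # average over channels
    band = (freqs >= fmin) & (freqs <= fmax)
    slope, intercept = np.polyfit(np.log10(freqs[band]), np.log10(psd[band]), deg=1)
    return intercept, slope
```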

2.1.7. Data analysis

The data analysis was undertaken using R (R Core Team, 2021) and Julia (Bezanson et al., 2017). We used R for data pre- and post-processing. For data import and manipulation, we used the tidyverse collection of packages (Wickham et al., 2019) as well as the vroom package (Hester and Wickham, 2021). Figures were created using ggplot2 (Wickham, 2016; Wickham et al., 2021) as well as the packages cowplot (Wilke, 2021) and patchwork (Pedersen, 2020). All figures with color employ the Okabe Ito color palette from colorblindr (McWhite and Wilke, 2021). Other packages used include corrr (Kuhn et al., 2020), kableExtra (Zhu, 2021) and here (Müller, 2020). For package version numbers, please see the analysis scripts provided with the raw data (see Data Availability Statement). For R, see the html outputs in the src/subdirectory; for Julia see the Manifest.toml file.

EEG data were analyzed using linear mixed effects models (LMMs) with the MixedModels.jl package in Julia (Bates et al., 2021). We used the JellyMe4 package (Alday, 2021) to move model objects from Julia to R for visualization purposes.

For the ERP data, we examined single-trial N400 amplitude as our outcome variable of interest. To this end, we analyzed mean EEG voltage 300–500 ms post onset of the critical second adjective (ADJ2) in a centro-parietal region of interest (C3, C1, Cz, C2, C4, P3, P1, Pz, P2, P4, CP3, CP1, CPz, CP2, CP4).

2.1.7.1. Linear mixed modeling approach

We adopted a parsimonious LMM selection approach (Bates et al., 2015; Matuschek et al., 2017), which seeks to identify LMMs that are supported by the data and not overparameterized. Model selection was undertaken without consideration of fixed-effects estimates (i.e., without consideration of which fixed effects reached significance).

Fixed effects initially included log-transformed unigram frequency, speaker-based surprisal, adjective order canonicity, epoch (as a proxy for how long participants had been exposed to the experimental stimuli), mean prestimulus amplitude and their interactions. Prestimulus amplitude (–200 to 0 ms) was included as a predictor in the model as an alternative to traditional EEG baselining (see Alday, 2019). The categorical factor canonicity was encoded using sum contrasts (cf. Schad et al., 2020; Brehm and Alday, 2022); thus, model intercepts represent the grand mean. All continuous predictors were z-transformed prior to being included in the models.

Although not of interest within the scope of the current paper, we modeled the main effect of prestimulus amplitude with a second-order and the main effect of speaker-based surprisal with a third-order polynomial trend. The inclusion of these higher-order trends was supported by the data, significantly improved model fit, and guarded against the interpretation of spurious interactions of their linear trends with other fixed effects (Matuschek and Kliegl, 2018). Non-significant higher-order interactions involving fixed effects were removed from the model when they were not part of the theoretical expectations and this did not lead to a significant reduction in goodness of model fit as assessed via likelihood-ratio tests (LRTs).

The random-effect (RE) structure was selected in two steps, again using LRTs to check improvement in goodness of fit and random-effects PCA (rePCA) to guard against overparameterization during model selection. The results of the first step led to an RE structure with variance components for grand means, prestimulus amplitude and prestimulus amplitude (2nd order) by subject, item and channel. In a second step, we added by-subject and by-item variance components for effects of canonicity, epoch, unigram frequency and speaker-based surprisal to the RE structure. Correlation parameters were not significant for the by-subject and by-item variance components and were therefore constrained to zero.

Using the speaker-based surprisal LMM (as described above) as a reference, we added, in turn, fixed-effect covariates for individual differences in (1) 1/f slope, (2) IAF (peak alpha frequency), and (3) ID to the model to check the extent to which they moderate/modulate adaptation to speaker-based surprisal. In each of these three additional LMMs, adding the respective individual differences covariate as a by-item variance component significantly improved the goodness of model fit.

The model selection procedure is transparently documented in Julia scripts in the Open Science Framework repository for this paper (see Data Availability Statement).

2.1.7.2. Reporting and visualization of results

As our primary research question was how listeners adapt their predictive models to the experimental context, we focus on interactions of speaker-based surprisal and epoch in the interpretation of our results. Thus, for each LMM, we focus on the highest order interaction(s) including these predictors and the current individual-differences predictor of interest where relevant. These are reported, visualized and interpreted in the main text. Model summaries are included in the Supplementary materials, with only significant effects reported in the model summary tables to increase readability. For full model summaries including all effects, see the repository for the paper. For the visualization of effects, we used the broom.mixed package (Bolker and Robinson, 2021) to extract fitted values and the remef package (Hohenstein and Kliegl, 2021) to extract partial effects. By visualizing partial effects, we focus on the effects of interest while adjusting for additional model parameters that are not of primary interest here where appropriate.

2.2. Results

2.2.1. Individual differences measures

Distributions of the (z-transformed) individual differences measures are shown in Supplementary Figure S1.

2.2.2. EEG data

2.2.2.1. Sanity check analysis

In a first step, we ran a “sanity check” analysis to determine whether the current data showed expected modulations of N400 amplitude by unigram frequency and global (corpus-based) surprisal defined at the level of adjective clusters (see section on language models above). For this, we followed the general modeling strategy outlined in the Data analysis section above, but including global surprisal rather than speaker-based surprisal.

The sanity check analysis confirmed the expected effects of word frequency and surprisal on N400 amplitude. At the position of the critical second adjective, N400 amplitudes were higher for words with a lower frequency of occurrence and for words with higher corpus-based surprisal values. These effects are visualized in Figure 2 (see Supplementary Table S1 for the model summary). As is apparent from the model summary, there was a significant interaction of Unigram Frequency x Global Surprisal x Prestimulus amplitude (Estimate = 0.0497, Std. Error = 0.0203, z = 2.45, p = 0.01). However, as we were only interested in general trends for word frequency and global surprisal for the purposes of our sanity check, we visualize the partial effects of these two predictors adjusted for the other predictors.

Figure 2. Sanity check analysis for Experiment 1. Panel A shows the relationship between N400 amplitude and (log-transformed) unigram frequency, while Panel B shows the relationship between N400 amplitude and global (corpus-based) surprisal, as defined using bigrams at the adjective cluster level. Both unigram frequency and surprisal values were z-transformed. Shaded areas indicate 95% confidence intervals.

2.2.2.2. N400 amplitude attunes to speaker-based surprisal over the course of the experiment

The speaker-based surprisal model (see Supplementary Table S2 for the model summary) revealed an interaction of Speaker-based Surprisal x Epoch x Canonicity x Prestimulus Amplitude (Estimate = −0.0693, Std. Error = 0.0173, z = −4.01, p < 0.0001). Figure 3 visualizes the partial effect of Speaker-based Surprisal x Epoch x Canonicity, adjusted for Prestimulus Amplitude. As is apparent from the figure, the effect of speaker-based surprisal becomes stronger over the course of the experiment, i.e., the longer participants are exposed to the peculiarities of each speaker, the stronger the effect of speaker-based surprisal on N400 amplitude. This supports our assumption that listeners attune their internal predictive models to the current context. Strikingly, the effect of speaker-based surprisal overrides the effect of adjective order canonicity by the end of the experiment [cf. Alday et al. (2017) for the finding that language-related EEG responses adapt to the local context within a story].


Figure 3. Changes in the relationship between speaker-based surprisal (z-transformed) and N400 amplitude over the course of Experiment 1 for canonical (C) and non-canonical (N) adjective orders. The figure visualizes partial effects as calculated using the remef package, adjusted for prestimulus amplitude. Note that position in the experiment (operationalised via epoch in the statistical model) is trichotomised into beginning, middle and end for visualization purposes only; epoch was included in the model as a continuous predictor.
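The following short Python sketch illustrates this convention under simplified assumptions (invented data and variable names): the predictor enters the model as a continuous variable, and tertile bins are created only when plotting.

```python
import numpy as np
import pandas as pd

# Illustrative trial-level data; names are invented for this sketch.
rng = np.random.default_rng(7)
trials = pd.DataFrame({
    "epoch": np.arange(1, 301),   # continuous position in the experiment
    "n400": rng.normal(size=300),
})

# For the statistical model, 'epoch' would be entered as a continuous
# (here z-transformed) predictor:
trials["epoch_z"] = (trials["epoch"] - trials["epoch"].mean()) / trials["epoch"].std()

# For visualization only, split the continuous predictor into tertiles.
trials["epoch_bin"] = pd.qcut(trials["epoch"], q=3,
                              labels=["beginning", "middle", "end"])

print(trials.groupby("epoch_bin", observed=True)["n400"].mean())
```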

2.2.2.3. Inter-individual differences in predictive model adaptation

Having determined that effects of speaker-based surprisal (z-transformed) on N400 amplitude became stronger over the course of the experiment, we next sought to examine how individuals differed with regard to this adaptation process and which of our metrics best predicted these assumed individual differences. To this end, we in turn added each of our individual differences metrics of interest—individual alpha frequency (IAF), aperiodic (1/f) slope and idea density (ID)—to the speaker-based surprisal model without individual differences. As revealed by likelihood ratio tests and goodness-of-fit metrics, all of these models showed an improved fit to the data over the base model without an individual differences predictor. Table 2 provides an overview of the goodness-of-fit metrics, demonstrating that all models including individual differences covariates outperform the model without individual differences in terms of AIC. With the exception of the IAF model, this also holds for BIC.
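For reference, the goodness-of-fit criteria and likelihood-ratio test can be expressed in a few lines of Python (a generic sketch with invented log-likelihood values, not output from the mixed-model analyses themselves):

```python
import math
from scipy.stats import chi2

def aic(log_lik, k):
    """Akaike information criterion for a model with k parameters."""
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    """Bayesian information criterion for k parameters and n observations."""
    return k * math.log(n) - 2 * log_lik

def lrt_p(log_lik_base, log_lik_full, df_diff):
    """p-value of a likelihood-ratio test between nested models."""
    stat = 2 * (log_lik_full - log_lik_base)
    return chi2.sf(stat, df_diff)

# Invented example values: a base model vs. a model adding an individual
# differences covariate (and its interactions), here 8 extra parameters.
ll_base, k_base = -15234.0, 40
ll_full, k_full = -15210.0, 48
n_obs = 12000

print(aic(ll_base, k_base), aic(ll_full, k_full))
print(bic(ll_base, k_base, n_obs), bic(ll_full, k_full, n_obs))
print(lrt_p(ll_base, ll_full, k_full - k_base))
```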


Table 2. Model comparison for the models including speaker-based surprisal in Experiment 1.

In line with our primary research question, for the interpretation of the individual differences results, we focus on the top-level interaction(s) involving Speaker-based Surprisal, Epoch and the individual differences predictor of interest (cf. the discussion of our LMM modeling approach in the Data Analysis section).

For the model including aperiodic slope, the top-level interaction was Prestimulus Amplitude x Speaker-based Surprisal x Epoch x Canonicity x Frequency x Slope (Estimate = 0.1531, Std. Error = 0.0780, z = 1.96, p < 0.05). For the model including IAF, it was Speaker-based Surprisal x Epoch x Canonicity x Frequency x IAF (Estimate = –0.0505, Std. Error = 0.0172, z = –2.94, p < 0.01). The ID model showed an interaction of Prestimulus Amplitude x Speaker-based Surprisal x Epoch x Canonicity x ID (Estimate = 0.0821, Std. Error = 0.0163, z = 5.03, p < 0.0001). In view of the complexity of these models and the fact that our primary interest for the purposes of the present paper lies in examining how adaptation to speaker-based surprisal is modulated by these individual differences metrics, we visualize partial effects of Speaker-based Surprisal x Epoch x Individual Differences Covariate of Interest for each model in turn in the following, adjusting for any additional moderating effects. For model summaries, see Supplementary Tables S3–S5.

Figure 4 visualizes how the intra-experimental adaptation to speaker-based surprisal is modulated by aperiodic slope. It demonstrates that, though N400 responses had attuned to speaker-based surprisal for all participants by the end of the experiment (mirroring the effects observed in Figure 3), individuals with a steep aperiodic slope adapted most rapidly to intra-experimental contingencies (cf. the pattern of N400 responses in the middle portion of the experiment).


Figure 4. Effect of aperiodic (1/f) slope on changes in the relationship between speaker-based surprisal (z-transformed) and N400 amplitude over the course of Experiment 1. The figure visualizes partial effects as calculated using the remef package, adjusted for prestimulus amplitude, canonicity and frequency. Note that position in the experiment (operationalised via epoch in the statistical model) is trichotomised (into beginning, middle, end) for visualization purposes only; epoch was included in the model as a continuous predictor. The same holds for 1/f slope, which is trichotomised into steep, medium and shallow for visualization purposes but was entered into the statistical model as a continuous predictor. Shaded areas indicate 95% confidence intervals.

Figures 5, 6 show the adaptation to speaker-based surprisal as moderated by IAF and ID, respectively. For IAF, it is apparent that adaptation is quickest for individuals with a low IAF. At first glance, the pattern is similar for ID, i.e., low-ID individuals show a more rapid adaptation to speaker-based surprisal. However, it is notable that individuals with a high ID show the most pronounced change in the pattern of speaker-based surprisal N400 effects over the course of the experiment, demonstrating a slight “anti-surprisal” effect at the beginning of the experiment but adapting to show the expected attunement to speaker-based surprisal by the end.


Figure 5. Effect of individual alpha frequency (IAF) on changes in the relationship between speaker-based surprisal (z-transformed) and N400 amplitude over the course of Experiment 1. The figure visualizes partial effects as calculated using the remef package, adjusted for canonicity and frequency. Note that position in the experiment (operationalised via epoch in the statistical model) is trichotomised into beginning, middle and end for visualization purposes only; epoch was included in the model as a continuous predictor. The same holds for IAF, which is trichotomised into low, medium and high for visualization purposes but was entered into the statistical model as a continuous predictor. Shaded areas indicate 95% confidence intervals.


Figure 6. Effect of idea density (ID) on changes in the relationship between speaker-based surprisal (z-transformed) and N400 amplitude over the course of Experiment 1. The figure visualizes partial effects as calculated using the remef package, adjusted for prestimulus amplitude and canonicity. Note that position in the experiment (operationalised via epoch in the statistical model) is trichotomised into thirds (beginning, middle, end) for visualization purposes only; epoch was included in the model as a continuous predictor. The same holds for ID, which is trichotomised into low, medium and high for visualization purposes but was entered into the statistical model as a continuous predictor. Shaded areas indicate 95% confidence intervals.

2.3. Discussion

Experiment 1 examined N400 ERP responses to investigate how, during naturalistic language processing, individuals update their internal predictive models to reflect current contextual or environmental information. While listening to short passages recorded by two speakers of Australian English, participants showed an adaptation to experiment- and speaker-specific adjective order patterns with increasing exposure to these patterns over the course of the experiment. By the end of the experiment, N400 responses at the position of the critical second adjective (ADJ2) in two-adjective noun phrases embedded in the passages had attuned to speaker-based surprisal. In other words: N400 amplitude reflected the (information-theoretic) surprisal for encountering an adjective of type ADJ2 following an adjective of the type encountered at the ADJ1 position, given the speaker reading the passage. Adjective type was defined using a word-vector-based clustering procedure and speaker-based surprisal was defined incrementally via the participant's prior exposure to two-adjective noun phrases for a particular speaker at each point over the course of the experiment. N400 attunement to speaker-based surprisal led to an alignment of N400 amplitudes for canonical and non-canonical adjective orders by the end of the experiment. It is important to keep in mind, however, that these measures (i.e., adjective clusters and surprisal) were correlational in nature rather than experimentally manipulated.
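The incremental nature of the speaker-based surprisal measure can be sketched as follows. The Python code below is an illustrative simplification only: it keeps running adjective-cluster bigram counts separately per speaker and evaluates the surprisal of each new ADJ2 cluster against the counts accumulated so far, using add-one smoothing so that the measure is defined on first exposure. The cluster labels, the smoothing choice and the class structure are assumptions of this sketch rather than the study's actual implementation.

```python
import math
from collections import defaultdict

class SpeakerSurprisal:
    """Running speaker-specific surprisal over adjective-cluster bigrams.

    Illustrative sketch only: add-one smoothing over a fixed cluster
    inventory is assumed; the study's exact implementation may differ.
    """

    def __init__(self, clusters):
        self.clusters = list(clusters)
        # Bigram and ADJ1 counts, kept separately per speaker.
        self.bigram = defaultdict(lambda: defaultdict(int))
        self.first = defaultdict(lambda: defaultdict(int))

    def surprisal(self, speaker, c1, c2):
        """Surprisal (bits) of ADJ2 cluster c2 given ADJ1 cluster c1, per speaker."""
        num = self.bigram[speaker][(c1, c2)] + 1
        den = self.first[speaker][c1] + len(self.clusters)
        return -math.log2(num / den)

    def update(self, speaker, c1, c2):
        """Record one two-adjective NP produced by this speaker."""
        self.bigram[speaker][(c1, c2)] += 1
        self.first[speaker][c1] += 1

# Toy trial sequence (speaker, ADJ1 cluster, ADJ2 cluster); values invented.
model = SpeakerSurprisal(clusters=["size", "age", "colour"])
trials = [("A", "size", "colour"), ("A", "size", "colour"), ("A", "size", "age")]

for speaker, c1, c2 in trials:
    s = model.surprisal(speaker, c1, c2)   # surprisal BEFORE updating counts
    model.update(speaker, c1, c2)
    print(speaker, c1, c2, round(s, 2))
```

Repeated continuations thus become less surprising for a given speaker as the experiment unfolds, which is exactly the trial-by-trial quantity that the N400 is argued to track.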

In addition, we observed inter-individual differences in regard to the strength of N400-attunement to speaker-based surprisal. All three individual differences predictors examined—aperiodic (1/f) slope, Individual Alpha Frequency (IAF) and Idea Density (ID)—led to improvement of mixed model fit over the best model not including individual differences predictors. Individuals with a steep aperiodic slope, which is thought to reflect low neural noise, showed the most pronounced and earliest attunement to speaker-based surprisal. A similar pattern was observed for individuals with a low IAF. For ID, the pattern was somewhat more mixed: while low-ID individuals appeared to show an earlier attunement to speaker-based surprisal, high-ID individuals showed a more substantial change of speaker-surprisal-related response from the beginning to the end of the experiment. These findings were examined further in Experiment 2.

3. Experiment 2

3.1. Methods

In view of the exploratory nature of the current study and the novel results of Experiment 1, we ran a second experiment to determine whether these results could be replicated. Experiment 2 employed a very similar design to Experiment 1 with a new sample of young adults as participants.

3.1.1. Participants

Forty young adults (mean age: 23.8 years, sd: 6.3, range: 18–39) participated in Experiment 2, with 30 identifying as female, 9 identifying as male and 1 identifying as other. Inclusion and exclusion criteria were as for Experiment 1 and the experiment was approved under the same protocol by the University of South Australia's Human Research Ethics Committee. None of the participants for Experiment 2 had taken part in Experiment 1.

3.1.2. Materials

Participants again listened to 150 short passages in Experiment 2, which were adapted from those used in Experiment 1. In contrast to Experiment 1, in which only 90 of the 150 passages contained two critical two-adjective NPs, in Experiment 2, all 150 passages contained two critical NPs. This change was incorporated in order to increase the number of critical items per participant and thus improve our ability to track changes in N400 activity across the course of the experiment. In addition, we made minor modifications to some of the critical NPs from Experiment 1. As for Experiment 1, the full experimental materials are available on the study repository (see Data Availability statement).

The passages were again recorded by two male speakers of Australian English, one of whom had already been one of the speakers for Experiment 1. As for Experiment 1, one of the speakers (the “canonical speaker”) had a higher probability of producing canonical vs. non-canonical two-adjective orders (approximately 70%:30%), while the other (the “non-canonical speaker”) had a lower probability of producing canonical vs. non-canonical orders (approximately 30%:70%). The assignment of speaker to the canonical or non-canonical role was counterbalanced across participants. In order to further accentuate the speaker-specific adjective order characteristics, presentation of the two speakers was alternated in a block-based manner in this experiment. The experiment commenced with one block of the canonical speaker, followed by two blocks of the non-canonical speaker and two further blocks of the canonical speaker.
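As a rough sketch of how such speaker-specific order probabilities can be assigned when constructing the stimulus lists (the item counts, proportions and function names below are illustrative assumptions, not the study's actual stimulus-construction code):

```python
import random

def assign_orders(n_items, p_canonical, seed=0):
    """Return 'canonical'/'non-canonical' labels with ~p_canonical canonical orders."""
    rng = random.Random(seed)
    n_canon = round(n_items * p_canonical)
    labels = ["canonical"] * n_canon + ["non-canonical"] * (n_items - n_canon)
    rng.shuffle(labels)
    return labels

# Invented item counts: one list per speaker role.
speaker_a_items = assign_orders(150, 0.70)   # "canonical speaker"
speaker_b_items = assign_orders(150, 0.30)   # "non-canonical speaker"
print(speaker_a_items.count("canonical"), speaker_b_items.count("canonical"))
```

Counterbalancing across participants then amounts to swapping which recorded speaker receives the predominantly canonical list and which receives the predominantly non-canonical list.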

Comprehension questions were again presented after approximately 1/3 of all passages.

3.1.3. Language models

Adjective clusters and speaker-based surprisal were calculated following the same procedure as for Experiment 1. The adjective clusters for Experiment 2 are listed in the Supplementary materials.

3.1.4. Behavioral individual differences measures

3.1.4.1. Idea density

Participants were given 10 min to produce a written text sample of approximately 300 words in response to the prompt “Describe an unexpected event in your life.” ID was calculated as in Experiment 1.
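Propositional idea density is essentially the number of proposition-bearing words (verbs, adjectives, adverbs, prepositions, conjunctions) per word of text. The following Python sketch gives a rough approximation of this logic using an off-the-shelf part-of-speech tagger; it is a simplified illustration, not necessarily the procedure actually used to score the writing samples.

```python
import nltk

# One-time downloads for the tokenizer and tagger (resource names may vary
# slightly across NLTK versions).
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

# Penn Treebank tag prefixes treated here as proposition-bearing: verbs,
# adjectives, adverbs, prepositions/subordinators and conjunctions. This is a
# rough approximation of propositional-density counting, not the full rule set.
PROPOSITION_TAGS = ("VB", "JJ", "RB", "IN", "CC")

def idea_density(text):
    tokens = nltk.word_tokenize(text)
    words = [t for t in tokens if t.isalpha()]
    tagged = nltk.pos_tag(words)
    propositions = sum(tag.startswith(PROPOSITION_TAGS) for _, tag in tagged)
    return propositions / len(words) if words else 0.0

sample = "The tired old dog slept quietly under the wooden table."
print(round(idea_density(sample), 2))
```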

3.1.4.2. Cognitive tests

Participants completed an additional battery of cognitive tests. These included:

• The four-subtest version of the Wechsler Abbreviated Scale of Intelligence—Second Edition (WASI-II; Pearson Clinical), comprising Block design, Vocabulary, Matrix reasoning and Similarities tasks

• Three subtests from the Test of Adolescent and Adult Language-Fourth Edition (TOAL-4), namely Word opposites, Derivations and Spoken analogies

• Semantic and phonological verbal fluency tasks

• A computer-based hearing test to measure pure-tone hearing thresholds (pure-tone audiometry)

As for Experiment 1, we focus on ID and the resting state EEG-based individual differences metrics (1/f slope and Individual Alpha Frequency, IAF; see below) as our primary measures of individual differences.

3.1.5. Procedure

The two in-lab testing sessions (behavioral and EEG) for Experiment 2 were comparable to those in Experiment 1. The procedure for the EEG testing session was also identical to that for Experiment 1 with two exceptions. Firstly, participants completed a short (approximately 3.5 min) passive auditory oddball paradigm prior to the main language processing task. This task was included as part of a larger lifespan study and will not be considered here. Secondly, a subset of participants completed two (rather than one) eyes-closed resting state EEG recording sessions both before and after the experiment: one in which they were instructed to relax and one in which they were asked to try to keep their mind blank. For the purposes of calculating resting-state individual difference metrics (IAF and 1/f slope), we used the eyes-closed recordings with the “relax” instructions, as these were comparable to the eyes-closed resting-state recordings with only a single session.

3.1.6. EEG recording and preprocessing

The EEG was recorded from 64 electrodes mounted inside an elastic cap (actiCAP) using a Brain Products actiCHamp amplifier (Brain Products GmbH, Gilching, Germany). The electrooculogram (EOG) was recorded via electrodes placed at the outer canthi of both eyes as well as above and below the left eye. The EEG recording was sampled at 500 Hz and referenced to FCz.

Data preprocessing was undertaken as for Experiment 1 with the exception that, as a first step in the preprocessing procedure for Experiment 2, the data were converted to the brain imaging data structure for electroencephalography (EEG-BIDS; Pernet et al., 2019) using the MNE-BIDS Python package (Appelhoff et al., 2019).
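A minimal sketch of such a conversion using the MNE-BIDS package is shown below; the file name, subject label and task name are placeholders rather than the study's actual values.

```python
import mne
from mne_bids import BIDSPath, write_raw_bids

# Illustrative paths and identifiers; these are assumptions, not the
# study's actual naming scheme.
raw = mne.io.read_raw_brainvision("sub01_language_task.vhdr", preload=False)
raw.info["line_freq"] = 50  # power line frequency, required by the BIDS standard

bids_path = BIDSPath(subject="01", task="language", root="bids_dataset")
write_raw_bids(raw, bids_path=bids_path, overwrite=True)
```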

3.1.6.1. Resting-state EEG-based individual differences measures: Individual alpha frequency and aperiodic (1/f) activity

IAF and aperiodic slope estimates were calculated as for Experiment 1. Due to slightly differing electrode configurations, there were minor differences in the electrodes used for the IAF and aperiodic activity analyses in this experiment. The electrodes used for IAF (peak alpha frequency) estimation were: P1, Pz, P2, PO3, POz, PO4, O1, O2. The electrodes used for aperiodic slope estimation were: F7, F3, Fz, F4, F8, FC5, FC1, FC2, FC6, T7, C3, Cz, C4, T8, CP5, CP1, CP2, CP6, P7, P3, Pz, P4, P8, PO9, O1, O2, PO10, AF7, AF8, F5, F1, F2, F6, FT7, FC3, FC4, FT8, C5, C1, C2, C6, TP7, CP3, CPz, CP4, TP8, P5, P1, P2, P6, PO7, PO3, POz, PO4, PO8.
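Conceptually, the two resting-state measures can be approximated as follows: the aperiodic slope as the slope of a straight-line fit to the power spectrum in log–log space, and IAF as the frequency of maximal power within the alpha band. The Python sketch below illustrates this on a synthetic signal; it is a deliberately simplified illustration rather than the estimation procedure used in the study.

```python
import numpy as np
from scipy.signal import welch

def aperiodic_slope_and_iaf(signal, sfreq, fit_range=(1.0, 40.0), alpha_range=(7.0, 13.0)):
    """Rough estimates of the 1/f slope and individual alpha frequency (IAF).

    Simplified illustration: a linear fit to the log-log power spectrum for the
    slope, and the frequency of maximal alpha-band power for IAF.
    """
    freqs, psd = welch(signal, fs=sfreq, nperseg=int(4 * sfreq))

    fit_mask = (freqs >= fit_range[0]) & (freqs <= fit_range[1])
    slope, _ = np.polyfit(np.log10(freqs[fit_mask]), np.log10(psd[fit_mask]), deg=1)

    alpha_mask = (freqs >= alpha_range[0]) & (freqs <= alpha_range[1])
    iaf = freqs[alpha_mask][np.argmax(psd[alpha_mask])]
    return slope, iaf

# Synthetic 1/f-like signal with an added 10 Hz component, for illustration only.
rng = np.random.default_rng(0)
sfreq, dur = 250.0, 120.0
t = np.arange(int(sfreq * dur)) / sfreq
noise = np.cumsum(rng.normal(size=t.size))          # Brownian noise (steep 1/f)
signal = noise / noise.std() + 0.5 * np.sin(2 * np.pi * 10 * t)

print(aperiodic_slope_and_iaf(signal, sfreq))
```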

3.1.7. Data analysis

The data analysis was undertaken as for Experiment 1.

As our primary research question for Experiment 2 was whether it is possible to replicate the inter-individual difference effects observed in Experiment 1, we focus on the mixed model analyses examining 1/f slope, IAF and ID and how these modulate the effect of speaker-based surprisal across the course of the experiment.

3.2. Results

3.2.1. Individual differences measures

Distributions of the (z-transformed) individual differences measures are shown in Supplementary Figure S2.

3.2.2. EEG data

For the model including aperiodic slope, the top-level interactions involving Speaker-based Surprisal, Epoch and Slope were Prestimulus Amplitude x Speaker-based Surprisal x Epoch x Canonicity x Slope (Estimate = –0.0421, Std. Error = 0.0173, z = –2.44, p < 0.02) and Frequency x Speaker-based Surprisal x Epoch x Canonicity x Slope (Estimate = –0.0472, Std. Error = 0.0210, z = –2.24, p < 0.03).

For the model including IAF, the top-level interaction was Prestimulus Amplitude x Frequency x Speaker-based Surprisal x Epoch x IAF (Estimate = 0.0415, Std. Error = 0.0148, z = 2.80, p < 0.01); for the ID model, it was Prestimulus Amplitude x Frequency x Speaker-based Surprisal x Epoch x Canonicity x ID (Estimate = –0.0534, Std. Error = 0.0181, z = –2.95, p < 0.01). Model summaries are presented in Supplementary Tables S6–S8.

The effects of interest are visualized in Figures 7–9. As for Experiment 1, we visualize partial effects of Speaker-based Surprisal x Epoch x Individual Differences Covariate of Interest for each model in turn in the following, adjusting for any additional moderating effects.


Figure 7. Effects of aperiodic (1/f) slope on changes in the relationship between speaker-based surprisal (z-transformed) and N400 amplitude over the course of Experiment 2. The figure visualizes partial effects as calculated using the remef package, adjusted for Prestimulus Amplitude and Canonicity. Note that position in the experiment (operationalised via epoch in the statistical model) is trichotomised into beginning, middle and end for visualization purposes only; epoch was included in the model as a continuous predictor. The same holds for the individual differences variables, which are trichotomised for visualization purposes but were entered into the statistical models as continuous predictors. Shaded areas indicate 95% confidence intervals.


Figure 8. Effects of IAF on changes in the relationship between speaker-based surprisal (z-transformed) and N400 amplitude over the course of Experiment 2. The figure visualizes partial effects as calculated using the remef package, adjusted for Prestimulus Amplitude and Frequency. Note that position in the experiment (operationalised via epoch in the statistical model) is trichotomised into beginning, middle and end for visualization purposes only; epoch was included in the model as a continuous predictor. The same holds for the individual differences variables, which are trichotomised for visualization purposes but were entered into the statistical models as continuous predictors. Shaded areas indicate 95% confidence intervals.


Figure 9. Effects of ID on changes in the relationship between speaker-based surprisal (z-transformed) and N400 amplitude over the course of Experiment 2. The figure visualizes partial effects as calculated using the remef package, adjusted for Prestimulus Amplitude, Frequency and Canonicity. Note that position in the experiment (operationalised via epoch in the statistical model) is trichotomised into beginning, middle and end for visualization purposes only; epoch was included in the model as a continuous predictor. The same holds for the individual differences variables, which are trichotomised for visualization purposes but were entered into the statistical models as continuous predictors. Shaded areas indicate 95% confidence intervals.

Overall, the results of Experiment 2 replicate the effects observed in Experiment 1. Individuals with a steep 1/f slope or a low IAF show more pronounced adaptation to speaker-based, intra-experimental probabilistic information over the course of the experiment in comparison to their counterparts with a shallow 1/f slope or a high IAF. By contrast, the pattern for ID is less clear.

3.3. Combined analysis of Experiments 1 and 2

Finally, we conducted a combined analysis of Experiments 1 and 2 in order to examine whether the inter-individual differences of interest would also be observable with a more substantial sample size (n=85). To this end, we again computed the three individual-differences models involving 1/f slope, IAF and ID using the same modeling approach as before. The only exception was the addition of a main effect of Experiment in the fixed effects in order to capture any intrinsic differences in EEG activity between the two experiments (e.g., due to the use of different amplifiers).

For the combined model including aperiodic slope, the top-level interactions involving Speaker-based Surprisal, Epoch and Slope were Prestimulus Amplitude x Frequency x Speaker-based Surprisal x Epoch x Slope (Estimate = 0.0352, Std. Error = 0.0113, z = 3.11, p < 0.01) and Prestimulus Amplitude x Speaker-based Surprisal x Epoch x Canonicity x Slope (Estimate = –0.0613, Std. Error = 0.0110, z = -5.58, p < 0.0001).

For the model including IAF, the top-level interactions of interest were Prestimulus Amplitude x Frequency x Speaker-based Surprisal x Epoch x IAF (Estimate = 0.0424, Std. Error = 0.0109, z = 3.88, p < 0.001) and Frequency x Speaker-based Surprisal x Epoch x Canonicity x IAF (Estimate = –0.0432, Std. Error = 0.0118, z = –3.65, p < 0.001). For the ID model, it was Prestimulus Amplitude x Frequency x Speaker-based Surprisal x Epoch x Canonicity x ID (Estimate = –0.0239, Std. Error = 0.0120, z = -1.99, p < 0.05). Model summaries are presented in Supplementary Tables S9–S11.

The effects of interest are visualized in Figures 10–12. As for the individual analyses of Experiments 1 and 2, we visualize partial effects of Speaker-based Surprisal x Epoch x Individual Differences Covariate of Interest for each model in turn in the following, adjusting for any additional moderating effects.


Figure 10. Effects of aperiodic (1/f) slope on changes in the relationship between speaker-based surprisal (z-transformed) and N400 amplitude over the course of the experiment in the combined analysis of Experiments 1 and 2 (n = 85). The figure visualizes partial effects as calculated using the remef package, adjusted for Prestimulus Amplitude and Canonicity. Note that position in the experiment (operationalised via epoch in the statistical model) is trichotomised into beginning, middle and end for visualization purposes only; epoch was included in the model as a continuous predictor. The same holds for the individual differences variables, which are trichotomised for visualization purposes but were entered into the statistical models as continuous predictors. Shaded areas indicate 95% confidence intervals.


Figure 11. Effects of IAF on changes in the relationship between speaker-based surprisal (z-transformed) and N400 amplitude over the course of the experiment in the combined analysis of Experiments 1 and 2 (n = 85). The figure visualizes partial effects as calculated using the remef package, adjusted for Prestimulus Amplitude and Frequency. Note that position in the experiment (operationalised via epoch in the statistical model) is trichotomised into beginning, middle and end for visualization purposes only; epoch was included in the model as a continuous predictor. The same holds for the individual differences variables, which are trichotomised for visualization purposes but were entered into the statistical models as continuous predictors. Shaded areas indicate 95% confidence intervals.


Figure 12. Effects of ID on changes in the relationship between speaker-based surprisal (z-transformed) and N400 amplitude over the course of the experiment in the combined analysis of Experiments 1 and 2 (n = 85). The figure visualizes partial effects as calculated using the remef package, adjusted for Prestimulus Amplitude, Frequency and Canonicity. Note that position in the experiment (operationalised via epoch in the statistical model) is trichotomised into beginning, middle and end for visualization purposes only; epoch was included in the model as a continuous predictor. The same holds for the individual differences variables, which are trichotomised for visualization purposes but were entered into the statistical models as continuous predictors. Shaded areas indicate 95% confidence intervals.

3.4. Discussion

The results of Experiment 2 and the combined analysis of Experiments 1 and 2 broadly support the findings of Experiment 1. The findings for 1/f slope and IAF are highly compatible across all analyses: participants with a steep 1/f slope and those with a low IAF show a more substantial model adaptation to intra-experimental probabilistic information than those with a shallow 1/f slope or a high IAF. The findings for ID are not as clear for the individual analyses of Experiments 1 and 2; however, the combined analysis shows an emerging trend for increased model adaptation over the course of the experiment by individuals with a low ID.

4. General discussion

We have reported two ERP studies designed to investigate inter-individual differences in internal model updating during naturalistic language processing. By means of a novel measure of speaker-based surprisal for adjective orders, we examined the degree to which N400 responses track context-specific probabilistic information tied to the experimental environment. This measure, “speaker-based surprisal”, reflects the predictability of adjective type for the second adjective in a two-adjective sequence given the type of the first adjective for a particular speaker. Adjective type was determined in a data-driven manner using a cluster-based analysis of semantic (word-vector-based) similarity between adjectives, and speaker-based probabilities were manipulated by having one speaker utter a higher percentage of expected orders and a second speaker utter a higher percentage of unexpected orders.

4.1. Individuals incrementally adapt their predictive language models to reflect current contextual information

The current findings present compelling evidence to suggest that individuals incrementally adapt their predictive language models to reflect current contextual information. In spite of only being exposed to new, intra-experimental adjective order regularities for a relatively short period of time, participants' N400 responses had attuned to this new information by the end of the experimental session. Strikingly, this rapid attunement occurred in spite of the wealth of linguistic experience that participants bring to the laboratory from their lifelong exposure to their native language. The importance of intra-experimental information vis-à-vis prior linguistic experience is further underscored by the observation that intra-experimental surprisal effects were aligned for canonical and non-canonical adjective orders by the end of the experiment. This suggests that experiment-specific adjective order probabilities eventually took on a higher weighting in shaping individuals' predictive models than their prior language experience.

Further attesting to the extremely fine-grained nature of the model adaptation process is the observation that N400 amplitude increasingly reflected intra-experimental adjective order surprisal, as calculated incrementally (i.e., on a trial-by-trial basis) for the experimental materials to which a participant had been exposed at each point in the experiment. Moreover, the adaptation took speaker-specific information into account (“speaker-based surprisal”). Previous studies have already demonstrated an adaptation of language comprehension processes to intra-experimental probabilities (Fine et al., 2013), including speaker-specific information (e.g., Kroczek and Gunter, 2017, 2021; Brothers et al., 2019). However, the present study is, to the best of our knowledge, the first to demonstrate a gradual attunement to incremental, trial-by-trial fluctuations of intra-experimental, speaker-based surprisal over the course of an experiment.

When intra-experimental probabilities do not align with prior probabilities acquired through experience outside the laboratory, the precision of an individual's global language model is reduced. Model adaptation must thus take place to accommodate speaker-based, intra-experimental contingencies. These are increasingly incorporated into the listener's internal predictive model with increasing exposure to the experimental materials. The attunement of N400 amplitudes to speaker-based surprisal over the course of the experiment thus provides converging support for the proposal that N400 effects reflect precision-weighted prediction error signals (Bornkessel-Schlesewsky and Schlesewsky, 2019). As hypothesized by Bornkessel-Schlesewsky and Schlesewsky (2019), N400 effects thereby functionally mirror MMN effects as observed in auditory oddball paradigms designed to modulate predictive model precision (Todd et al., 2011, 2013, 2014). In these studies, the identity of standard and deviant tones within an auditory oddball paradigm was periodically changed, thus requiring an adaptation of the predictive model. Todd and colleagues observed increased MMN amplitudes within tone sequences that were presented for longer periods of time, i.e., when predictive models had sufficient time to stabilize and increase in precision. However, they also found a primacy effect such that MMN effects were larger for deviations from the tone that was initially established as the standard (Todd et al., 2011). This is indicative of an advantage for the first predictive model to be established and thus attests to the integration of new information with prior knowledge during the course of predictive model adaptation. We suggest that our results show a similar pattern: the observation of speaker-based surprisal effects at the level of adjective clusters demonstrates that intra-experimental contingencies were integrated with prior linguistic knowledge, since the clusters were derived using corpus-based word vectors. Participants were thus clearly still drawing on their prior knowledge of which adjectives tend to behave similarly, while at the same time adjusting their expectations based on the occurrence of these adjectives within the experiment.

4.2. Individual differences in predictive model adaptation

The fine-grained predictive model adaptation observed in the current study differed between individuals. In this regard, we had hypothesized that individuals with steeper 1/f slopes and individuals with higher ID would show a similar adaptation pattern on account of their strong predictive language models, and that this pattern would contrast with that observed for individuals with a higher IAF. Our results provided some converging support for these assumptions but also yielded some previously unexpected insights. Firstly, for 1/f slope and IAF, the directionality of the effects was the opposite of what we had expected: our results suggest a more pronounced adaptation for individuals with a steeper 1/f slope vs. less pronounced adaptation for individuals with a higher IAF. Secondly, the results for ID were less clear in the individual analyses of Experiments 1 and 2, but the combined analysis of both experiments revealed a trend for lower-ID individuals to show more rapid model adaptation, in line with our original hypothesis.

In the following, we discuss 1/f slope, IAF and ID in turn.

4.2.1. Individuals with a steeper aperiodic (1/f) slope show more pronounced effects of model adaptation than those with a shallower aperiodic slope

Participants with a steeper aperiodic (1/f) slope showed a more substantial N400 attunement to speaker-based surprisal over the course of the experiment than their counterparts with a shallower aperiodic slope. This result supports and extends the findings by Dave et al. (2018) that individuals with a steep 1/f slope showed more pronounced prediction-related N400 effects than individuals with a shallow 1/f slope. Dave and colleagues proposed that individuals with low neural noise, as reflected in a steeper 1/f slope, show enhanced prediction (i.e., their study showed a relationship between 1/f slope and N400 effects marking successful vs. unsuccessful lexical prediction). While we had originally hypothesized that this might correlate with a reduced degree of adaptation to intra-experimental contingencies, our findings suggest that, to the contrary, enhanced prediction may in fact be related to an individual's ability to flexibly adapt their neural predictive coding infrastructure to current environmental and task conditions.2

This assumption can be linked to the notion that steeper 1/f slopes are indicative of lower levels of neural noise. It is proposed that steeper 1/f slopes in both intracranial and scalp EEG reflect more synchronous neural firing and concomitantly lower rates of aberrant firing or random background activity (for a review of the physiological mechanisms and modeling work that supports this claim, see Voytek and Knight, 2015). The higher signal-to-noise ratio associated with this more synchronous activity can be viewed as reflecting lower neural noise (Hong and Rebec, 2012)3. An increase of random neural background activity in aging (increased neural noise) goes hand in hand with increased variability and slowing of neural and behavioral responses to external stimuli (Hong and Rebec, 2012) as well as with a flattening of 1/f slope (Voytek et al., 2015). For example, Tran et al. (2020) observed that increased resting-state neural noise, as reflected in a flatter 1/f slope, in older adults correlated with increased variability of stimulus-related neurophysiological responses (peak alpha inter-trial coherence, ITC) in a visual discrimination task. In relation to predictive coding, lower neural noise possibly allows for a more dynamic and efficient adaptation of task- and context-related neural networks in accordance with current task demands, thus facilitating accurate and context-appropriate predictions.

Pertermann et al. (2019b) recently suggested that there is a relationship between neural noise as indexed by 1/f and neural gain control via the noradrenergic system. Release of noradrenaline from the brainstem locus coeruleus leads to increased excitatory and decreased inhibitory responses to a stimulus of interest, thus resulting in stronger stimulus discriminability and a more binary response function (i.e., stronger neural gain, Aston-Jones and Cohen, 2005). In their study, Pertermann et al. (2019b) observed a correlation between 1/f slope and pupil dilation—an index of noradrenergic system activation—in a go/no-go task and specifically for no-go trials requiring response inhibition.

The potential link between lower neural noise and higher neural gain suggests that individuals with a steeper aperiodic slope may be more effective in discriminating between relevant and irrelevant information for the flexible adaptation of their predictive models to the current context. This aligns with an active inference perspective on attention, according to which attention is preferentially allocated toward sensory evidence with a high precision (Parr and Friston, 2017). By optimizing the allocation of attention toward salient/task-relevant information, this could lead to a more rapid establishment of higher-precision models by individuals with a steeper 1/f slope—or, perhaps more precisely, models in which precision is appropriately weighted in light of prior evidence.

4.2.2. Stronger model adaptation for individuals with lower individual alpha frequency

Turning now to IAF, it initially appears somewhat counterintuitive that individuals with a higher IAF show less predictive model adaptation than individuals with a lower IAF. After all, higher IAF correlates with faster processing cycles (Cecere et al., 2015; Samaha and Postle, 2015) and previous findings suggest that older adults with a high IAF show a higher propensity to reanalyze ambiguous (“garden path”) sentences when it becomes apparent that the reading initially adopted was incorrect (Kurthen et al., 2020). On the basis of these previous observations, we had thus hypothesized that high-IAF individuals would show a higher propensity for predictive model adaptation than low-IAF individuals. Upon closer consideration, however, the present study differed from the above-cited studies in several important respects. Firstly, in the study by Kurthen et al. (2020), reanalysis did not require an adaptation of the predictive model but rather the correction of a previous processing decision within the bounds of the current model's strategy space. By contrast, the adaptive demands of the present study required participants to learn new, intra-experimental probabilities associated with each speaker and adapt their predictive models to these new contingencies. Secondly, the time frames relevant for these adaptive learning processes were substantially longer than the perceptual windows of interest in the studies by Cecere et al. (2015) and Samaha and Postle (2015), as participants were required to learn two-adjective sequencing regularities over the course of an experimental session. Previous work on the localization of targets moving in space revealed an advantage for individuals with a lower IAF (Howard et al., 2017), with the authors suggesting that this result could be due to the longer timescales involved in the task (movement was between 2 and 4 s in length) in comparison to the transient stimuli used, for example, by Samaha and Postle (2015). In the language domain, Nalaye et al. (2022) recently found that lower-IAF individuals outperformed their higher-IAF counterparts when learning a modified miniature language based on Mandarin Chinese. Akin to the study by Howard et al. (2017), this paradigm involved learning regularities on timescales of multiple seconds. In the present study, lower-IAF individuals may have likewise been better able to adapt their predictive models to the intra-experimental probabilities that unfolded over multiple seconds (intra-stimulus) and minutes (inter-stimulus). However, this explanation remains tentative at present and requires more systematic examination in future research.

4.2.3. A more complex relationship between model adaptation and idea density

As ID measures the efficiency of linguistic encoding (Cheung and Kemper, 1992; Kemper et al., 2001b; Iacono et al., 2009; Engelman et al., 2010; Farias et al., 2012), we examined it as a proxy for the quality of an individual's language model. We thus hypothesized that individuals with lower ID and, hence, a lower quality language model, would show a faster adaptation to new linguistic information. While the results of Experiments 1 and 2 both showed a less clear pattern for ID in comparison to 1/f slope and IAF, the combined analysis of the two experiments does provide some converging evidence for the hypothesis that lower-ID individuals adapted their language models more substantially to the intra-experimental contingencies presented to them.

Low ID in young adulthood is a risk factor for cognitive decline and dementia in old age (Snowdon et al., 1996; Kemper et al., 2001a) and has been suggested to reflect “suboptimal neurocognitive development” (Kemper et al., 2001a, p. 602). The notion that lower-ID individuals show a more flexible adaptation of their internal predictive models to the current environment may thus, at first glance, appear somewhat counterintuitive. Note, however, that faster adaptation in the present study should not necessarily be considered a superior processing strategy. After all, high adaptability means that individuals adjusted expectations accrued through a lifetime of language experience to speaker-specific patterns encountered within a brief experimental session. This could, at least under certain circumstances, lead to the type of “overfitting” of internal predictive models that may be problematic for cognitive performance in older adulthood (Moran et al., 2014).

To better examine the utility of a rapid adaptation strategy, future research could consider model adaptation in different reward contexts, i.e., comparing circumstances where high model malleability is useful to those where it is detrimental to optimal performance. This could yield further insights on calibrated model adaptation, in which the strong prior evidence provided by a high-quality language model is weighed against the increasing quantity of incoming evidence which contradicts the prior model. In addition, the role of domain specificity requires further consideration: of our three individual differences measures of interest, only ID was directly related to the domain under consideration (language), while the other two can be considered to reflect more general characteristics of neural information processing. Future research will need to examine the role of such purported domain-specific vs. domain-general influences in more detail.

Such considerations also reflect a limitation of the current study, namely that possible interactions between individual differences measures were not considered. These are, in our view, outside of the scope of what is already a highly complex pattern of results in a new domain of investigation. However, if our interpretation of the present findings is correct, future studies should be able to further illuminate the mechanics of individuals' model adaptation by taking into account the interplay of the various individual differences metrics examined here.

4.3. Implications for predictive coding in language and beyond

Our results demonstrate that predictive processing during language comprehension adapts flexibly to current contextual and environmental demands, involving both intrinsic linguistic properties (adjective type) as well as communicative aspects (identity of the speaker). They thus extend previous work linking N400 responses to surprisal (e.g., Frank et al., 2015; Frank and Willems, 2017) by demonstrating that corpus-based surprisal may need to be complemented by surprisal metrics that are more closely aligned to the experimental context. To further understand the implications of our findings for predictive coding in language, future research should examine the persistence of predictive model adaptations. It appears unlikely that a single session of exposure to new grammatical or communicative regularities would lead to a permanent adaptation of linguistic models. The application of adapted models to future situations could, however, be governed by cognitive control processes such as those proposed in hierarchical models of cognitive control (e.g., Koechlin and Summerfield, 2007). Here, contextual or episodic information provides control cues to override prepotent stimulus-response mappings and instantiate new mappings for the duration of the appropriate context's or episode's presence. Within the context of the present study, speaker identity could have functioned as one such control cue—in addition to the broader contextual cue of undertaking a language processing task in a laboratory. Participants with a steeper 1/f slope and lower neural noise may be more adept at using such control cues to flexibly switch between alternate predictive models (cf. the association between 1/f neural noise and cognitive control in non-neurotypical populations such as children with ADHD; Pertermann et al., 2019a; Robertson et al., 2019; Ostlund et al., 2021).

A more comprehensive understanding of language processing in contextually rich, naturalistic settings could thus be facilitated by a closer examination of the interplay between predictive coding and cognitive control. Alternatively, cognitive control mechanisms could even be couched within a predictive coding architecture, as proposed by the Hierarchical Error Representation (HER) framework. The HER, which is able to account for a wide range of cognitive control-related findings including hierarchical aspects of cognitive control, posits that “a major function of prefrontal cortex is learning to predict likely prediction errors” (Alexander and Brown, 2018, p.2).

Such an approach could have far-reaching implications for language, including in helping to link linguistic phenomena across different timescales: from processing mechanisms at the scale of tens or hundreds of milliseconds to language change. We have previously suggested that precision-weighted prediction error signals could provide an “early warning signal” for impending language change (Bornkessel-Schlesewsky et al., 2020). Specifically, based on findings from Icelandic, we proposed that reduced N400 effects to a construction that is incompatible with the current prescriptive grammar signal lower predictive precision and, hence, a possible propensity for change. The present findings provide converging support for the very early stages of this proposed process by showing how a loss of precision for a prior linguistic model can lead to rapid model adaptation in accordance with current environmental contingencies. They further suggest that the temporal trajectories for model adaptation differ between individuals, with early adopters being characterized by lower neural noise (steeper aperiodic slope), lower Individual Alpha Frequency and, possibly, lower Idea Density.

Data availability statement

The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://osf.io/32amz/.

Ethics statement

The studies involving human participants were reviewed and approved by the University of South Australia's Human Research Ethics Committee. The participants provided their written informed consent to participate in this study.

Author contributions

IB-S, IS, CH, and EW prepared the experiments. IS, CH, and EW collected the data. IB-S and RK performed the data analysis. IB-S wrote the first draft of the manuscript. All authors contributed to conception and design of the study, manuscript revision, and approved the submitted version.

Funding

The research reported here was funded by an Australian Research Council Future Fellowship awarded to IB-S (FT160100437). AC acknowledges the support of the Three Springs Foundation.

Acknowledgments

The authors would like to thank John Ellett for help with preparing the experimental materials, Alin Grecu, Tim Harrison, and Casey Tonkin for recording the experimental stimuli and Nicole Vass for help with data collection.

Conflict of interest

Author PA was employed by Beacon Biosignals, Boston.

The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyg.2022.817516/full#supplementary-material

Footnotes

1. ^For an alternative proposal, see, for example, Spratling (2008).

2. ^It is worth noting in this context that Dave et al. (2018) examined on-task 1/f activity during their sentence comprehension tasks, while we examined resting-state 1/f activity in the present study. Some recent research suggests that 1/f slope can be linked to global states of consciousness and arousal (Lendner et al., 2020), which could affect predictive model updating through improved attentional regulation and, hence, increased sensitivity to both prediction errors and contextual states. Future research will need to further examine the relationship between resting and on-task 1/f.

3. ^A complementary perspective on the physiological underpinnings of the 1/f slope is that it indexes the balance between excitatory and inhibitory activity: a flatter slope correlates with more stochastic excitatory firing, which is consistent with reduced inhibitory firing in aging (Gao et al., 2017).

References

Adams, R. A., Stephan, K. E., Brown, H. R., Frith, C. D., and Friston, K. J. (2013). The computational anatomy of psychosis. Front. Psychiatry 4, 47. doi: 10.3389/fpsyt.2013.00047

Alday, P. (2021). Palday/JellyMe4.jl: V0.2.6 (v0.2.6). Zenodo. doi: 10.5281/zenodo.5621582

Alday, P. M. (2018). Philistine (v0.1) [Source code]. Available online at: https://github.com/palday/philistine/

Alday, P. M. (2019). How much baseline correction do we need in ERP research? Extended GLM model can replace baseline correction while lifting its limits. Psychophysiology 56, e13451. doi: 10.1111/psyp.13451

Alday, P. M., Schlesewsky, M., and Bornkessel-Schlesewsky, I. (2017). Electrophysiology reveals the neural dynamics of naturalistic auditory language processing: event-related potentials reflect continuous model updates. eNeuro 4, ENEURO.0311-16.2017. doi: 10.1523/ENEURO.0311-16.2017

Alexander, W. H., and Brown, J. W. (2018). Frontal cortex function as derived from hierarchical predictive coding. Sci. Rep. 8, 3843. doi: 10.1038/s41598-018-21407-9

Appelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., et al. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. J. Open Source Softw. 4, 1896. doi: 10.21105/joss.01896

Aston-Jones, G., and Cohen, J. D. (2005). Adaptive gain and the role of the locus coeruleus-norepinephrine system in optimal performance. J. Comp. Neurol. 493, 99–110. doi: 10.1002/cne.20723

Bastos, A. M., Usrey, W. M., Adams, R. A., Mangun, G. R., Fries, P., and Friston, K. J. (2012). Canonical microcircuits for predictive coding. Neuron 76, 695–711. doi: 10.1016/j.neuron.2012.10.038

Bates, D., Alday, P., Kleinschmidt, D., José Bayoán Santiago Calderón, P., Zhan, L., Noack, A., et al. (2021). JuliaStats/MixedModels.jl: V4.4.0. Zenodo. doi: 10.5281/zenodo.5542701

Bates, D., Kliegl, R., Vasishth, S., and Baayen, H. (2015). Parsimonious mixed models. arXiv:1506.04967 [stat]. doi: 10.48550/arXiv.1506.04967

Bates, E., Devescovi, A., and Wulfeck, B. (2001). Psycholinguistics: a cross-language perspective. Annu. Rev. Psychol. 52, 369–396. doi: 10.1146/annurev.psych.52.1.369

Bates, E., McNew, S., MacWhinney, B., Devescovi, A., and Smith, S. (1982). Functional constraints on sentence processing: a cross-linguistic study. Cognition 11, 245–299. doi: 10.1016/0010-0277(82)90017-8

Bezanson, J., Edelman, A., Karpinski, S., and Shah, V. B. (2017). Julia: a fresh approach to numerical computing. SIAM Rev. 59, 65–98. doi: 10.1137/141000671

Bolker, B., and Robinson, D. (2021). broom.mixed: Tidying Methods for Mixed Models. R package version 0.2.7.

Bornkessel, I. D., Fiebach, C. J., Friederici, A. D., and Schlesewsky, M. (2004). "Capacity" reconsidered: interindividual differences in language comprehension and individual alpha frequency. Exp. Psychol. 51, 279–289. doi: 10.1027/1618-3169.51.4.279

Bornkessel-Schlesewsky, I., Philipp, M., Alday, P. M., Kretzschmar, F., Grewe, T., Gumpert, M., et al. (2015a). Age-related changes in predictive capacity versus internal model adaptability: electrophysiological evidence that individual differences outweigh effects of age. Front. Aging Neurosci. 7, 217. doi: 10.3389/fnagi.2015.00217

Bornkessel-Schlesewsky, I., Roehm, D., Mailhammer, R., and Schlesewsky, M. (2020). Language processing as a precursor to language change: evidence from icelandic. Front. Psychol. 10, 3013. doi: 10.3389/fpsyg.2019.03013

Bornkessel-Schlesewsky, I., and Schlesewsky, M. (2013). Reconciling time, space and function: a new dorsal-ventral stream model of sentence comprehension. Brain Lang. 125, 60–76. doi: 10.1016/j.bandl.2013.01.010

Bornkessel-Schlesewsky, I., and Schlesewsky, M. (2019). Towards a neurobiologically plausible model of language-related, negative event-related potentials. Front. Psychol. 10, 298. doi: 10.3389/fpsyg.2019.00298

Bornkessel-Schlesewsky, I., and Schlesewsky, M. (2020). “Cross-linguistic neuroscience of language,” in The Cognitive Neurosciences, 6th Edn, eds M. S. Gazzaniga, G. R. Mangun, and D. Poeppel (Cambridge, MA: MIT Press), 841–848.

Bornkessel-Schlesewsky, I., Schlesewsky, M., Small, S. L., and Rauschecker, J. P. (2015b). Neurobiological roots of language in primate audition: common computational properties. Trends Cogn Sci. 19, 142–150. doi: 10.1016/j.tics.2014.12.008

Brehm, L., and Alday, P. M. (2022). Contrast coding choices in a decade of mixed models. J. Mem. Lang. 125, 104334. doi: 10.1016/j.jml.2022.104334

Brothers, T., Dave, S., Hoversten, L. J., Traxler, M. J., and Swaab, T. Y. (2019). Flexible predictions during listening comprehension: speaker reliability affects anticipatory processes. Neuropsychologia 135, 107225. doi: 10.1016/j.neuropsychologia.2019.107225

Brown, C., Snodgrass, T., Kemper, S. J., Herman, R., and Covington, M. A. (2008). Automatic measurement of propositional idea density from part-of-speech tagging. Behav. Res. Methods 40, 540–545. doi: 10.3758/BRM.40.2.540

Buzsáki, G., Anastassiou, C. A., and Koch, C. (2012). The origin of extracellular fields and currents – EEG, ECoG, LFP and spikes. Nat. Rev. Neurosci. 13, 407–420. doi: 10.1038/nrn3241

Buzsáki, G., and Draguhn, A. (2004). Neuronal oscillations in cortical networks. Science 304, 1926–1929. doi: 10.1126/science.1099745

Cecere, R., Rees, G., and Romei, V. (2015). Individual differences in alpha frequency drive crossmodal illusory perception. Curr. Biol. 25, 231–235. doi: 10.1016/j.cub.2014.11.034

Chater, N., and Manning, C. D. (2006). Probabilistic models of language processing and acquisition. Trends Cogn. Sci. 10, 335–344. doi: 10.1016/j.tics.2006.05.006

Cheung, H., and Kemper, S. (1992). Competing complexity metrics and adults' production of complex sentences. Appl. Psycholinguist. 13, 53. doi: 10.1017/S0142716400005427

Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behav. Brain Sci. 36, 181–204. doi: 10.1017/S0140525X12000477

Clark, A. (2015). Surfing Uncertainty: Prediction, Action, and the Embodied Mind. New York, NY: Oxford University Press.

Corcoran, A. W., Alday, P. M., Schlesewsky, M., and Bornkessel-Schlesewsky, I. (2018). Toward a reliable, automated method of individual alpha frequency (IAF) quantification. Psychophysiology 55, e13064. doi: 10.1111/psyp.13064

Cross, Z. R., Corcoran, A. W., Schlesewsky, M., Kohler, M. J., and Bornkessel-Schlesewsky, I. (2022). Oscillatory and aperiodic neural activity jointly predict language learning. J. Cogn. Neurosci. 24, 1–20. doi: 10.1162/jocn_a_01878

Daneman, M., and Carpenter, P. A. (1980). Individual differences in working memory and reading. J. Verbal Learn. Verbal Behav. 19, 450–466. doi: 10.1016/S0022-5371(80)90312-6

Dave, S., Brothers, T., and Swaab, T. (2018). 1/f neural noise and electrophysiological indices of contextual prediction in aging. Brain Res. 1691, 34–43. doi: 10.1016/j.brainres.2018.04.007

Dikker, S., and Pylkkänen, L. (2011). Before the N400: Effects of lexical-semantic violations in visual cortex. Brain Lang. 118, 23–28. doi: 10.1016/j.bandl.2011.02.006

Dikker, S., Rabagliati, H., Farmer, T. A., and Pylkkänen, L. (2010). Early occipital sensitivity to syntactic category is based on form typicality. Psychol. Sci. 21, 629–634. doi: 10.1177/0956797610367751

Donoghue, T., Haller, M., Peterson, E. J., Varma, P., Sebastian, P., Gao, R., et al. (2020). Parameterizing neural power spectra into periodic and aperiodic components. Nat. Neurosci. 23, 1655–1665. doi: 10.1038/s41593-020-00744-x

Engelman, M., Agree, E. M., Meoni, L. A., and Klag, M. J. (2010). Propositional density and cognitive function in later life: Findings from the Precursors Study. J. Gerontol. B Psychol. Sci. Soc. Sci. 65, 706–711. doi: 10.1093/geronb/gbq064

Farias, S. T., Chand, V., Bonnici, L., Baynes, K., Harvey, D., Mungas, D., et al. (2012). Idea density measured in late life predicts subsequent cognitive trajectories: implications for the measurement of cognitive reserve. J. Gerontol. B Psychol. Sci. Soc. Sci. 67, 677–686. doi: 10.1093/geronb/gbr162

Federmeier, K. D., and Kutas, M. (1999). A rose by any other name: long-term memory structure and sentence processing. J. Mem. Lang. 41, 469–495. doi: 10.1006/jmla.1999.2660

Feldman, H., and Friston, K. J. (2010). Attention, uncertainty, and free-energy. Front. Hum. Neurosci. 4, 215. doi: 10.3389/fnhum.2010.00215

Ferguson, A., Spencer, E., Craig, H., and Colyvas, K. (2014). Propositional idea density in women's written language over the lifespan: computerized analysis. Cortex 55, 107–121. doi: 10.1016/j.cortex.2013.05.012

Fine, A. B., Jaeger, T. F., Farmer, T. A., and Qian, T. (2013). Rapid expectation adaptation during syntactic comprehension. PLoS ONE 8, e77661. doi: 10.1371/journal.pone.0077661

Fletcher, P. C., and Frith, C. D. (2009). Perceiving is believing: A Bayesian approach to explaining the positive symptoms of schizophrenia. Nat. Rev. Neurosci. 10, 48–58. doi: 10.1038/nrn2536

Frank, S. L., Otten, L. J., Galli, G., and Vigliocco, G. (2015). The ERP response to the amount of information conveyed by words in sentences. Brain Lang. 140, 1–11. doi: 10.1016/j.bandl.2014.10.006

Frank, S. L., and Willems, R. M. (2017). Word predictability and semantic similarity show distinct patterns of brain activity during language comprehension. Lang. Cogn. Neurosci. 32, 1192–1203. doi: 10.1080/23273798.2017.1323109

Freeman, W. J., and Zhai, J. (2009). Simulated power spectral density (PSD) of background electrocorticogram (ECoG). Cogn. Neurodyn. 3, 97–103. doi: 10.1007/s11571-008-9064-y

Fries, P. (2005). A mechanism for cognitive dynamics: neuronal communication through neuronal coherence. Trends Cogn. Sci. 9, 474–480. doi: 10.1016/j.tics.2005.08.011

Friston, K. J. (2005). A theory of cortical responses. Philos. Trans. R. Soc. B Biol. Sci. 360, 815–836. doi: 10.1098/rstb.2005.1622

Friston, K. J. (2009). The free-energy principle: a rough guide to the brain? Trends Cogn. Sci. 13, 293–301.

Friston, K. J., Sajid, N., Quiroga-Martinez, D. R., Parr, T., Price, C. J., and Holmes, E. (2021). Active listening. Hear. Res. 399, 107998. doi: 10.1016/j.heares.2020.107998

Frith, C. (2007). Making up the Mind: How the Brain Creates Our Mental World. Oxford: Blackwell.

Gao, R., Peterson, E. J., and Voytek, B. (2017). Inferring synaptic excitation/inhibition balance from field potentials. Neuroimage 158, 70–78. doi: 10.1016/j.neuroimage.2017.06.078

Garrido, M. I., Kilner, J. M., Stephan, K. E., and Friston, K. J. (2009). The mismatch negativity: a review of underlying mechanisms. Clin. Neurophysiol. 120, 453–463. doi: 10.1016/j.clinph.2008.11.029

Gasser, T., Bächer, P., and Steinberg, H. (1985). Test-retest reliability of spectral parameters of the EEG. Electroencephalogr. Clin. Neurophysiol. 60, 312–319. doi: 10.1016/0013-4694(85)90005-7

Gramfort, A., Luessi, M., Larson, E., Engemann, D. A., Strohmeier, D., Brodbeck, C., et al. (2013). MEG and EEG data analysis with MNE-Python. Front. Neurosci. 7, 267. doi: 10.3389/fnins.2013.00267

Gramfort, A., Luessi, M., Larson, E., Engemann, D. A., Strohmeier, D., Brodbeck, C., et al. (2014). MNE software for processing MEG and EEG data. Neuroimage 86, 446–460. doi: 10.1016/j.neuroimage.2013.10.027

Grandy, T. H., Werkle-Bergner, M., Chicherio, C., Lövdén, M., Schmiedek, F., and Lindenberger, U. (2013a). Individual alpha peak frequency is related to latent factors of general cognitive abilities. Neuroimage 79, 10–18. doi: 10.1016/j.neuroimage.2013.04.059

Grandy, T. H., Werkle-Bergner, M., Chicherio, C., and Schmiedek, F. (2013b). Peak individual alpha frequency qualifies as a stable neurophysiological trait marker in healthy younger and older adults. Psychophysiology 50, 570–582. doi: 10.1111/psyp.12043

Hale, J. (2006). Uncertainty about the rest of the sentence. Cogn. Sci. 30, 643–672. doi: 10.1207/s15516709cog0000_64

Hale, J. (2016). Information-theoretical complexity metrics. Lang. Linguist. Compass 10, 397–412. doi: 10.1111/lnc3.12196

He, B. J. (2014). Scale-free brain activity: past, present, and future. Trends Cogn. Sci. 18, 480–487. doi: 10.1016/j.tics.2014.04.003

Hedden, T., and Gabrieli, J. D. E. (2004). Insights into the ageing mind: a view from cognitive neuroscience. Nat. Rev. Neurosci. 5, 87–96. doi: 10.1038/nrn1323

Hester, J., and Wickham, H. (2021). vroom: Read and Write Rectangular Text Data Quickly. R package version 1.5.3.

Hohenstein, S., and Kliegl, R. (2021). remef: Remove Partial Effects. R package version 1.0.7.

Hohwy, J. (2013). The Predictive Mind. New York, NY: Oxford University Press.

Hong, S. L., and Rebec, G. V. (2012). A new perspective on behavioral inconsistency and neural noise in aging: Compensatory speeding of neural communication. Front. Aging Neurosci. 4, 27. doi: 10.3389/fnagi.2012.00027

Howard, C. J., Arnold, C. P., and Belmonte, M. K. (2017). Slower resting alpha frequency is associated with superior localisation of moving targets. Brain Cogn. 117, 97–107. doi: 10.1016/j.bandc.2017.06.008

Iacono, D., Markesbery, W. R., Gross, M., Pletnikova, O., Rudow, G., Zandi, P., et al. (2009). The Nun Study. Clinically silent AD, neuronal hypertrophy, and linguistic skills in early life. Neurology 73, 665–673. doi: 10.1212/WNL.0b013e3181b01077

Jurafsky, D. (2003). “Probabilistic modeling in psycholinguistics: linguistic comprehension and production,” in Probabilistic Linguistics, volume 21, eds R. Bod, J. Hay, and S. Jannedy (Cambridge, MA; London: MIT Press), 39–95.

Kemmerer, D., Weber-Fox, C., Price, K., Zdanczyk, C., and Way, H. (2007). Big brown dog or brown big dog? An electrophysiological study of semantic constraints on prenominal adjective order. Brain Lang. 100, 238–256. doi: 10.1016/j.bandl.2005.12.002

Kemper, S., Greiner, L. H., Marquis, J. G., Prenovost, K., and Mitzner, T. L. (2001a). Language decline across the life span: Findings from the Nun Study. Psychol. Aging 16, 227–239. doi: 10.1037/0882-7974.16.2.227

Kemper, S., Thompson, M., and Marquis, J. G. (2001b). Longitudinal change in language production: effects of aging and dementia on grammatical complexity and propositional content. Psychol. Aging 16, 600–614. doi: 10.1037/0882-7974.16.4.600

Kintsch, W., and Keenan, J. (1973). Reading rate and retention as a function of the number of propositions in the base structure of sentences. Cogn. Psychol. 5, 257–274. doi: 10.1016/0010-0285(73)90036-4

Klimesch, W. (1999). EEG alpha and theta oscillations reflect cognitive and memory performance: a review and analysis. Brain Res. Rev. 29, 169–195. doi: 10.1016/S0165-0173(98)00056-3

Knill, D. C., and Pouget, A. (2004). The Bayesian brain: The role of uncertainty in neural coding and computation. Trends Neurosci. 27, 712–719. doi: 10.1016/j.tins.2004.10.007

Koechlin, E., and Summerfield, C. (2007). An information theoretical approach to prefrontal executive function. Trends Cogn. Sci. 11, 229–235. doi: 10.1016/j.tics.2007.04.005

Kondacs, A., and Szabó, M. (1999). Long-term intra-individual variability of the background EEG in normals. Clin. Neurophysiol. 110, 1708–1716. doi: 10.1016/S1388-2457(99)00122-4

Köpruner, V., Pfurtscheller, G., and Auer, L. (1984). “Quantitative EEG in normals and in patients with cerebral ischemia,” in Brain Ischemia: Quantitative EEG and Imaging Techniques, volume 62 of Progress in Brain Research, eds G. Pfurtscheller, E. Jonkman, and F. Lopes da Silva (Amsterdam: Elsevier), 29–50.

Kroczek, L. O., and Gunter, T. C. (2021). The time course of speaker-specific language processing. Cortex 141, 311–321. doi: 10.1016/j.cortex.2021.04.017

Kroczek, L. O. H., and Gunter, T. C. (2017). Communicative predictions can overrule linguistic priors. Sci. Rep. 7, 17581. doi: 10.1038/s41598-017-17907-9

Kuhn, M., Jackson, S., and Cimentada, J. (2020). corrr: Correlations in R. R package version 0.4.3.

Kuhn, M., and Wickham, H. (2020). tidymodels: A Collection of Packages for Modeling and Machine Learning Using Tidyverse Principles.

Kuperberg, G. R. (2016). Separate streams or probabilistic inference? What the N400 can tell us about the comprehension of events. Lang. Cogn. Neurosci. 31, 602–616. doi: 10.1080/23273798.2015.1130233

Kuperberg, G. R., and Jaeger, T. F. (2016). What do we mean by prediction in language comprehension? Lang. Cogn. Neurosci. 31, 32–59. doi: 10.1080/23273798.2015.1102299

Kurthen, I., Meyer, M., Schlesewsky, M., and Bornkessel-Schlesewsky, I. (2020). Individual differences in peripheral hearing and cognition reveal sentence processing differences in healthy older adults. Front. Neurosci. 14, 573513. doi: 10.3389/fnins.2020.573513

Laszlo, S., and Federmeier, K. D. (2009). A beautiful day in the neighborhood: an event-related potential study of lexical relationships and prediction in context. J. Mem. Lang. 61, 326–338. doi: 10.1016/j.jml.2009.06.004

Laszlo, S., and Federmeier, K. D. (2011). The N400 as a snapshot of interactive processing: evidence from regression analyses of orthographic neighbor and lexical associate effects. Psychophysiology 48, 176–186. doi: 10.1111/j.1469-8986.2010.01058.x

Lendner, J. D., Helfrich, R. F., Mander, B. A., Romundstad, L., Lin, J. J., Walker, M. P., et al. (2020). An electrophysiological marker of arousal level in humans. Elife 9, e55092. doi: 10.7554/eLife.55092

Levy, R. (2008). Expectation-based syntactic comprehension. Cognition 106, 1126–1177. doi: 10.1016/j.cognition.2007.05.006

MacWhinney, B., Bates, E., and Kliegl, R. (1984). Cue validity and sentence interpretation in English, German, and Italian. J. Verbal Learn. Verbal Behav. 23, 127–150. doi: 10.1016/S0022-5371(84)90093-8

Matuschek, H., and Kliegl, R. (2018). On the ambiguity of interaction and nonlinear main effects in a regime of dependent covariates. Behav. Res. Methods 50, 1882–1894. doi: 10.3758/s13428-017-0956-9

Matuschek, H., Kliegl, R., Vasishth, S., Baayen, H., and Bates, D. (2017). Balancing Type I error and power in linear mixed models. J. Mem. Lang. 94, 305–315. doi: 10.1016/j.jml.2017.01.001

McWhite, C. D., and Wilke, C. O. (2021). colorblindr: Simulate colorblindness in R figures. R package version 0.1.0.

Moran, R. J., Symmonds, M., Dolan, R. J., and Friston, K. J. (2014). The brain ages optimally to model its environment: evidence from sensory learning over the adult lifespan. PLoS Comput. Biol. 10, e1003422. doi: 10.1371/journal.pcbi.1003422

Müller, K. (2020). here: A Simpler Way to Find Your Files. R package version 1.0.1.

Nalaye, H., Cross, Z. R., Schlesewsky, M., and Bornkessel-Schlesewsky, I. (2022). Electrophysiological indices of individual differences in adult language learning. bioRxiv. doi: 10.1101/2022.06.07.495229

Ociepka, M., Kałamała, P., and Chuderski, A. (2022). High individual alpha frequency brains run fast, but it does not make them smart. Intelligence 92, 101644. doi: 10.1016/j.intell.2022.101644

Oldfield, R. C. (1971). The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia 9, 97–113. doi: 10.1016/0028-3932(71)90067-4

Ostlund, B. D., Alperin, B. R., Drew, T., and Karalunas, S. L. (2021). Behavioral and cognitive correlates of the aperiodic (1/f-like) exponent of the EEG power spectrum in adolescents with and without ADHD. Dev. Cogn. Neurosci. 48, 100931. doi: 10.1016/j.dcn.2021.100931

Ouyang, G., Hildebrandt, A., Schmitz, F., and Herrmann, C. S. (2020). Decomposing alpha and 1/f brain activities reveals their differential associations with cognitive processing speed. Neuroimage 205, 116304. doi: 10.1016/j.neuroimage.2019.116304

Parr, T., and Friston, K. J. (2017). Working memory, attention, and salience in active inference. Sci. Rep. 7, 14678. doi: 10.1038/s41598-017-15249-0

Pedersen, T. L. (2020). patchwork: The Composer of Plots. R package version 1.1.1.

Pereira, F., Lou, B., Pritchett, B., Ritter, S., Gershman, S. J., Kanwisher, N., et al. (2018). Toward a universal decoder of linguistic meaning from brain activation. Nat. Commun. 9, 963. doi: 10.1038/s41467-018-03068-4

Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., et al. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Sci. Data 6, 103. doi: 10.1038/s41597-019-0104-8

Pertermann, M., Bluschke, A., Roessner, V., and Beste, C. (2019a). The modulation of neural noise underlies the effectiveness of methylphenidate treatment in attention-deficit/hyperactivity disorder. Biol. Psychiatry Cogn. Neurosci. Neuroimaging 4, 743–750. doi: 10.1016/j.bpsc.2019.03.011

Pertermann, M., Mückschel, M., Adelhöfer, N., Ziemssen, T., and Beste, C. (2019b). On the interrelation of 1/f neural noise and norepinephrine system activity during motor response inhibition. J. Neurophysiol. 121, 1633–1643. doi: 10.1152/jn.00701.2018

Pickering, M. J., and Garrod, S. (2007). Do people use language production to make predictions during comprehension? Trends Cogn. Sci. 11, 105–110. doi: 10.1016/j.tics.2006.12.002

Pickering, M. J., and Garrod, S. (2013). An integrated theory of language production and comprehension. Behav. Brain Sci. 36, 329–347. doi: 10.1017/S0140525X12001495

Poeppel, D., Idsardi, W. J., and van Wassenhove, V. (2008). Speech perception at the interface of neurobiology and linguistics. Philos. Trans. R. Soc. B Biol. Sci. 363, 1071–1086. doi: 10.1098/rstb.2007.2160

Posthuma, D., Neale, M. C., Boomsma, D. I., and De Geus, E. J. C. (2001). Are smarter brains running faster? Heritability of alpha peak frequency, IQ, and their interrelation. Behav. Genet. 31, 567–579. doi: 10.1023/A:1013345411774

R Core Team (2021). R: A Language and Environment for Statistical Computing. Vienna: R Foundation for Statistical Computing.

Rao, R. P., and Ballard, D. H. (1999). Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nat. Neurosci. 2, 79–87. doi: 10.1038/4580

Rauschecker, J. P., and Scott, S. K. (2009). Maps and streams in the auditory cortex: Nonhuman primates illuminate human speech processing. Nat. Neurosci. 12, 718–724. doi: 10.1038/nn.2331

Robertson, M. M., Furlong, S., Voytek, B., Donoghue, T., Boettiger, C. A., and Sheridan, M. A. (2019). EEG power spectral slope differs by ADHD status and stimulant medication exposure in early childhood. J. Neurophysiol. 122, 2427–2437. doi: 10.1152/jn.00388.2019

Robinson, D. (2021). widyr: Widen, Process, then Re-Tidy Data. R package version 0.1.4.

Salthouse, T. A. (2011). Neuroanatomical substrates of age-related cognitive decline. Psychol. Bull. 137, 753–784. doi: 10.1037/a0023262

Samaha, J., and Postle, B. R. (2015). The speed of alpha-band oscillations predicts the temporal resolution of visual perception. Curr. Biol. 25, 2985–2990. doi: 10.1016/j.cub.2015.10.007

Sanborn, A. N., and Chater, N. (2016). Bayesian brains without probabilities. Trends Cogn. Sci. 20, 883–893. doi: 10.1016/j.tics.2016.10.003

Schad, D. J., Vasishth, S., Hohenstein, S., and Kliegl, R. (2020). How to capitalize on a priori contrasts in linear (mixed) models: a tutorial. J. Mem. Lang. 110, 104038. doi: 10.1016/j.jml.2019.104038

Silge, J., and Robinson, D. (2016). tidytext: Text mining and analysis using tidy data principles in R. J. Open Source Softw. 1, 37. doi: 10.21105/joss.00037

Skipper, J. I., van Wassenhove, V., Nusbaum, H. C., and Small, S. L. (2007). Hearing lips and seeing voices: how cortical areas supporting speech production mediate audiovisual speech perception. Cereb. Cortex 17, 2387–2399. doi: 10.1093/cercor/bhl147

Smit, C. M., Wright, M. J., Hansell, N. K., Geffen, G. M., and Martin, N. G. (2006). Genetic variation of individual alpha frequency (IAF) and alpha power in a large adolescent twin sample. Int. J. Psychophysiol. 61, 235–243. doi: 10.1016/j.ijpsycho.2005.10.004

Snowdon, D. A., Kemper, S. J., Mortimer, J. A., Greiner, L. H., Wekstein, D. R., and Markesbery, W. R. (1996). Linguistic ability in early life and cognitive function and Alzheimer's disease in late life. Findings from the Nun Study. JAMA 275, 528–532. doi: 10.1001/jama.1996.03530310034029

Spencer, E., Ferguson, A., Craig, H., Colyvas, K., Hankey, G. J., and Flicker, L. (2015). Propositional idea density in older men's written language: findings from the HIMS study using computerised analysis. Clin. Linguist. Phonet. 29, 85–101. doi: 10.3109/02699206.2014.956263

Spratling, M. (2008). Predictive coding as a model of biased competition in visual attention. Vis. Res. 48, 1391–1408. doi: 10.1016/j.visres.2008.03.009

Surwillo, W. W. (1961). Frequency of the alpha rhythm, reaction time and age. Nature 191, 823–824. doi: 10.1038/191823a0

Surwillo, W. W. (1963). The relation of simple response time to brain-wave frequency and the effects of age. Electroencephalogr. Clin. Neurophysiol. 15, 105–114. doi: 10.1016/0013-4694(63)90043-9

Todd, J., Heathcote, A., Mullens, D., Whitson, L. R., Provost, A., and Winkler, I. (2014). What controls gain in gain control? Mismatch negativity (MMN), priors and system biases. Brain Topogr. 27, 578–589. doi: 10.1007/s10548-013-0344-4

Todd, J., Provost, A., and Cooper, G. (2011). Lasting first impressions: a conservative bias in automatic filters of the acoustic environment. Neuropsychologia 49, 3399–3405. doi: 10.1016/j.neuropsychologia.2011.08.016

Todd, J., Provost, A., Whitson, L. R., Cooper, G., and Heathcote, A. (2013). Not so primitive: context-sensitive meta-learning about unattended sound sequences. J. Neurophysiol. 109, 99–105. doi: 10.1152/jn.00581.2012

Tran, T. T., Rolle, C. E., Gazzaley, A., and Voytek, B. (2020). Linked sources of neural noise contribute to age-related cognitive decline. J. Cogn. Neurosci. 32, 1813–1822. doi: 10.1162/jocn_a_01584

Vallat, R., and Walker, M. P. (2021). A universal, open-source, high-performance tool for automated sleep staging. bioRxiv [Preprint]. doi: 10.1101/2021.05.28.446165

van Paridon, J., and Thompson, B. (2021). Subs2vec: Word embeddings from subtitles in 55 languages. Behav. Res. Methods 53, 629–655. doi: 10.3758/s13428-020-01406-3

Van Petten, C., and Luka, B. (2012). Prediction during language comprehension: benefits, costs, and ERP components. Int. J. Psychophysiol. 83, 176–190. doi: 10.1016/j.ijpsycho.2011.09.015

VanRullen, R. (2016). Perceptual cycles. Trends Cogn. Sci. 20, 723–735. doi: 10.1016/j.tics.2016.07.006

Vilares, I., and Kording, K. (2011). Bayesian models: The structure of the world, uncertainty, behavior, and the brain. Ann. N. Y. Acad. Sci. 1224, 22–39. doi: 10.1111/j.1749-6632.2011.05965.x

Voytek, B., and Knight, R. T. (2015). Dynamic network communication as a unifying neural basis for cognition, development, aging, and disease. Biol. Psychiatry 77, 1089–1097. doi: 10.1016/j.biopsych.2015.04.016

Voytek, B., Kramer, M. A., Case, J., Lepage, K. Q., Tempesta, Z. R., Knight, R. T., et al. (2015). Age-related changes in 1/f neural electrophysiological noise. J. Neurosci. 35, 13257–13265. doi: 10.1523/JNEUROSCI.2332-14.2015

Wang, L., Bastiaansen, M., Yang, Y., and Hagoort, P. (2011). The influence of information structure on the depth of semantic processing: how focus and pitch accent determine the size of the N400 effect. Neuropsychologia 49, 813–820. doi: 10.1016/j.neuropsychologia.2010.12.035

Wen, H., and Liu, Z. (2016). Separating fractal and oscillatory components in the power spectrum of neurophysiological signal. Brain Topogr. 29, 13–26. doi: 10.1007/s10548-015-0448-0

Wickham, H. (2016). ggplot2: Elegant Graphics for Data Analysis. New York, NY: Springer. https://ggplot2.tidyverse.org

Wickham, H., Averick, M., Bryan, J., Chang, W., McGowan, L. D., François, R., et al. (2019). Welcome to the tidyverse. J. Open Source Softw. 4, 1686. doi: 10.21105/joss.01686

Wickham, H., Chang, W., Henry, L., Pedersen, T. L., Takahashi, K., Wilke, C., et al. (2021). ggplot2: Create Elegant Data Visualisations Using the Grammar of Graphics [Manual]. Available online at: https://CRAN.R-project.org/package=ggplot2

Wilke, C. O. (2021). cowplot: Streamlined Plot Theme and Plot Annotations for ggplot2 [Manual]. Available online at: https://wilkelab.org/cowplot/

Zhu, H. (2021). kableExtra: Construct Complex Table with kable and Pipe Syntax [Manual]. Available online at: https://CRAN.R-project.org/package=kableExtra

Keywords: language comprehension, predictive coding, precision, EEG, N400, aperiodic slope, idea density, individual alpha frequency (IAF)

Citation: Bornkessel-Schlesewsky I, Sharrad I, Howlett CA, Alday PM, Corcoran AW, Bellan V, Wilkinson E, Kliegl R, Lewis RL, Small SL and Schlesewsky M (2022) Rapid adaptation of predictive models during language comprehension: Aperiodic EEG slope, individual alpha frequency and idea density modulate individual differences in real-time model updating. Front. Psychol. 13:817516. doi: 10.3389/fpsyg.2022.817516

Received: 18 November 2021; Accepted: 22 July 2022;
Published: 26 August 2022.

Edited by:

Chia-Ying Lee, Academia Sinica, Taiwan

Reviewed by:

Chun-Hsien Hsu, National Central University, Taiwan
Matthew Euler, The University of Utah, United States

Copyright © 2022 Bornkessel-Schlesewsky, Sharrad, Howlett, Alday, Corcoran, Bellan, Wilkinson, Kliegl, Lewis, Small and Schlesewsky. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Ina Bornkessel-Schlesewsky, ina.bornkessel-schlesewsky@unisa.edu.au

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.