For lexical access in spoken word comprehension, the listener must extract information from the auditory input signal and match it to stored mental representations. To explain this process, it is crucial to characterize what is mentally represented, what information is extracted from the auditory signal, and how the extracted information is compared to stored representations.
A widely accepted assumption is that words consist of phonemes and that phonemes are composed of distinctive features. However, models differ as to whether phonemes are encoded by abstract, symbolic features or by representations that capture probabilistic aspects of the corresponding physical signal.
Within symbolic approaches, further issues are debated, such as whether features carry binary values or privative (unary) values. For example, a binary feature model represents vocal fold vibration as [±voice], so that /d/ is differentiated from /t/ by being specified as [+voice] versus [-voice], respectively. In contrast, privative or monovalent feature models assume that only the presence of a feature counts and that its opposite does not exist: /d/ carries the feature [voice], while /t/ has no feature referring to voicing at all (or vice versa, depending on the language).
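To make the distinction concrete, the following minimal Python sketch contrasts the two symbolic encodings; the feature names and phoneme choices are purely illustrative and are not tied to any specific model.

```python
# Binary (bivalent) encoding: every phoneme carries an explicit value for [voice].
binary_features = {
    "d": {"voice": "+"},   # [+voice]
    "t": {"voice": "-"},   # [-voice]
}

# Privative (unary) encoding: only the presence of [voice] is recorded;
# /t/ simply has no specification for voicing at all.
privative_features = {
    "d": {"voice"},
    "t": set(),
}
```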
Within theoretical phonology, underspecification has been motivated by asymmetries in assimilation rules: a phoneme that is underspecified for a particular feature more readily absorbs feature values from adjacent segments. With respect to lexical access, underspecification predicts that words containing underspecified segments will be matched more liberally against the acoustic input signal, and hence be more ambiguous, than words with fully specified representations.
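The lexical-access prediction can likewise be illustrated with a small, hypothetical matching sketch: a stored segment that is underspecified for place tolerates more surface variants than a fully specified one. The feature labels, conflict table, and matching rule below are illustrative assumptions, not an implementation of any published model.

```python
# Hypothetical conflict table: which place features clash with one another.
CONFLICTS = {"coronal": {"labial", "dorsal"},
             "labial": {"coronal", "dorsal"},
             "dorsal": {"coronal", "labial"}}

def compatible(stored: set, extracted: set) -> bool:
    """Input conflicts with a stored segment only if the stored form explicitly
    specifies a feature that clashes with one extracted from the signal."""
    return not any(CONFLICTS.get(f, set()) & extracted for f in stored)

stored_n = {"nasal"}              # place underspecified
stored_m = {"nasal", "labial"}    # place fully specified

print(compatible(stored_n, {"nasal", "labial"}))   # True: no mismatch
print(compatible(stored_m, {"nasal", "coronal"}))  # False: explicit mismatch
```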
Cognitive neuroscience provides an additional source of evidence on these issues, as first demonstrated in the work of Aditi Lahiri (one of the Topic Editors of this Research Topic) and her colleagues. Using the Mismatch Negativity (MMN), an automatic change-detection response of the brain, they observed asymmetric modulations of MMN amplitude for phoneme contrasts that could be explained by underspecification theory.
These predictions have since been supported by several studies of phoneme contrasts across languages. Nevertheless, questions remain about the true nature of MMN asymmetries in phonology.
We invite authors to submit research articles to the present Research Topic addressing, but not limited to, the following themes:
- Are asymmetric MMNs caused by non-linguistic rather than linguistic factors?
- Are asymmetric MMN effects driven by basic neuronal response differences between contrasting pairs of sounds as opposed to abstract grammatical differences?
- Do asymmetries depend on how the MMN is calculated (identity MMN vs. other paradigms)?
- Is the asymmetry related to differences in recovery from refractoriness of the N1 component?
- Could asymmetries arise from differences in the standard response as a result of habituation?
- Could asymmetries arise from differences in phoneme or word usage frequency?
- Could asymmetries arise from differences in phonotactic probability?
- What is the interpretation of differences in MMN latency vs. MMN amplitude?
- Could MMN asymmetries arise from spatial differences in the MMN to each phoneme?
- Do other linguistic models better account for asymmetries?