Language encompasses both auditory and visual cues that are highly relevant to language learning. Human communication and interaction rely on the acoustic speech stream, as well as on language-related visual information, most prominently the hands and the mouth and eye regions of the face. Infants and toddlers are able to take advantage of these different sensory-perceptual cues and to integrate auditory and visual information while processing spoken language. COVID-19 has affected human communication through the pervasive use of masks. Masks degrade the quality of the speech signal while also rendering facial cues to language inaccessible, particularly those pertaining to the mouth region. Therefore, children born since the beginning of 2020 have been exposed to sets of auditory and/or visual cues to language that differ from those commonly available in the input. The effects of the ubiquitous use of masks on language development are presently unknown.
The current pandemic offers an unprecedented opportunity to study how auditory and visual cues, as well as their interplay and integration, shape language development. Recent advances have shown that audiovisual speech processing supports language acquisition, and that changes in selective audiovisual attention are linked to language development, language (un)familiarity, speaker characteristics, and increased processing effort. Frequency-based and prosodic cues are the dominant auditory cues infants use to parse language. The integration of these cues with facial information, in typical and atypical development, is a thriving research field. For adults, masking poses challenges for signers and for listeners with impaired as well as normal hearing, and its consequences are modulated by speaking style. This Research Topic aims to promote innovative research on the effects of face-masked input on language development. Given that masks alter the acoustics of speech and block or obscure facial information, they may deprive the young learner of cues necessary for language processing at different linguistic levels (phonetic, prosodic, lexical, syntactic, semantic), as well as at the social, interactional, and emotional levels. Knowledge of the impacts of face-masked input, and of the adjustments or strategies used to produce and perceive it, will further our understanding of the mechanisms underlying language development.
We welcome contributions that address face-masked input and language development in order to tackle fundamental issues in the phonetics of speech cues, in auditory speech processing, in visual and audiovisual language processing, and in language learning. A wide range of participants may be targeted, including individuals with cognitive or sensory impairments and typically or atypically developing children, in diverse language contexts comprising typologically different spoken and sign languages. We encourage authors from different perspectives and disciplines, such as linguistics, speech acoustics, hearing sciences, perception science, clinical linguistics and health, psychology, cognitive science, and the neurosciences, to submit Original Research articles, Brief Research Reports, and Case Reports. Submissions combining theoretical questions with experimental approaches are particularly welcome.