It is now commonly accepted that the human language faculty is created, processed, and stored in the brain, but can be expressed through various channels (e.g., the vocal tract for spoken languages, the upper limbs and face for signed languages) and perceived through different channels as well (e.g., the ears as a main medium for spoken languages, the eyes as a main medium for signed languages, and the hands as a main medium for tactile sign languages). The overwhelming evidence shows that despite differences in their modalities of expression and perception, such languages are similar in underlying structure in that they are organized at the sub-lexical, lexical, and syntactic levels, are acquired along similar timelines, are processed in the same brain regions, and are impaired in similar ways when the language centers of the brain are damaged.
Despite crucial similarities between signed and spoken languages, the role of the language modalities in acquisition and processing is not yet fully understood. In recent years, research has probed the ways in which learners may exploit the multimodal properties of spoken-language discourse; how, for example, does gesture contribute to the acquisition of spoken languages? The visual-gestural modality affords abundant iconic resources to the linguistic and gestural forms that occupy it: What is the role of iconicity in the acquisition of signed languages? Does the iconicity available within the acoustic signal play a role in the acquisition of spoken languages? Work on atypically developing children has shown, for example, that the underlying deficits associated with autism spectrum disorder yield different linguistic markers in sign and speech. Other research has addressed differences in the variability of the speech and sign signals and consequent differences in the problems that learners—whether children, adults, or computers—face in identifying lexical items. Still other research has explored motoric differences in speech and sign articulation, differences in imitation in sign and speech, and evolutionary perspectives on the modalities of language. Although we, the editors, have focused on the signed languages of the Deaf throughout our careers, we seek papers that explore the role of modalities in the acquisition of spoken languages, of visual signed languages, and of tactile signed languages.
This Research Topic seeks papers from scholars who are interested in the role that modalities play in language acquisition and in language processing, specifically focusing on studies that can illuminate differences in acquisition and processing that may be the result of modality differences. Papers can focus on the constraints that learners confront in the different language modalities, or on the resources that the different modalities afford learners. Submissions may include works on spoken, signed, tactile, or written languages, in children or adults, first- or second-language learners, and learners with typical or atypical development.
Possible topics may include, but are not limited to:
• Perspective-taking in language
• Embodiment of language
• Iconicity in signed and spoken languages
• Evolutionary perspectives on the modalities of language
• Variability in the linguistic signal in sign and speech
• Developmental perspectives on modality/modalities/multimodality
• Language processing and multimodality
• Representations of, and use of, space in language
• Imitation strategies in sign and speech
• Impact of gesture and multimodality on language acquisition and learning
• Computer recognition of sign and speech
• Experimental approaches in language acquisition and processing studies
• Atypical development of language in sign and speech