Human cognition is inherently situated and embodied: cognitive processes emerge in talk-in-interaction as they are shaped in the context of a real-world environment, where the body serves as a medium for perception, production, and action. Similarly, talk-in-interaction is quintessentially multimodal: meaning is built across modalities (auditory and visual) that speakers mobilize in the course of their interactive practices. In particular, a growing number of studies have highlighted the central role of manual gestures in speech and interaction, where gestures may serve both cognitive and interactive functions: they highlight motor information, disambiguate spatial object descriptions, influence listeners’ comprehension, and enhance learning, among others. Gesturing thus plays a central role in how humans interact, think, and learn.
The topic of gesture and multimodality is becoming increasingly popular in the cognitive sciences, with researchers examining the complex relationship between gesture and cognition; however, cognitivist approaches tend to revolve around experiments or computer simulations, without necessarily grounding cognition in an interactive framework of natural face-to-face interaction. This Research Topic aims to bring together contributions investigating cognition, as manifested in gesture, at the heart of human interaction, across various settings and situations (e.g., parent-child interaction, peer relationships, medical communication, institutional encounters, task-oriented dialogues, and classroom interactions) from the perspective of production and/or comprehension, with a focus on the central role of multimodality.
The Research Topic welcomes contributions on cognition in multimodal interaction from a wide range of fields (e.g., linguistics, psychology, sociology, cognitive sciences, gesture studies, social interaction, and education). Topics of particular interest include, but are not limited to, the following:
• embodiment and embodied cognition
• gesture in first and second language acquisition
• spatial cognition
• visual perception of gesture
• cognition-for-interaction
• language learning
• developmental language disorders
• new methodological approaches to gesture in cognition and interaction.
Contributions from postgraduate students are also welcome.
Keywords:
cognition, gesture, interaction, learning, embodiment
Important Note:
All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.