The rapid development of digital technologies has enabled the construction of complex learning environments (e.g., online learning, extended reality, game-based learning) in which interactions between learners and technologies are richly characterized by multimodal stimuli, experiences, and learning analytics. By engaging multiple senses, including the visual, auditory, olfactory, tactile, and kinesthetic, multimodality in learning is known to enhance learners' motivation, engagement, and performance. Moreover, multimodal data from multiple sources (e.g., eye-tracking, wearable cameras, gesture recognition systems, infrared imaging, and biosensors) can reveal hidden patterns in behavior, cognition, emotion, and interaction, and thus inform the design and assessment of learning innovations. To capture the unique benefits of multimodal learning environments, research attention needs to be paid to their instructional design, technological affordances, and analytical evaluation.
Despite the great potential of multimodality in learning, it remains unclear how pedagogies, technologies, and analytics should be integrated to create effective multimodal learning environments. For instance, how to activate and synthesize multiple senses to maximize the fidelity and vividness of learning experiences remains largely unknown, especially in novel educational settings such as virtual reality or the metaverse. It is also challenging to fully harness the power of multimodal data without innovative approaches to data mining, analysis, and interpretation. The scarcity of multimodal learning interventions has, in turn, limited impact studies on their effectiveness and influencing factors. Consequently, the following questions warrant systematic investigation: How can effective multimodal learning environments be designed? How is multisensory information fused to create multimodal learning experiences? Which pedagogies can enhance multimodal learning outcomes? How can multimodal data be used to inform learning diagnosis, design, and assessment?
In this Research Topic, we aim to solicit a range of original research articles, systematic reviews, and meta-analyses to advance our understanding of multimodal learning in terms of instructional design, technological innovation, and multimodality assessment. In particular, we are interested in interdisciplinary research that draws on fields such as psychology, education, neuroscience, data science, computer science, and engineering.
Topics of interest include but are not limited to:
• Multimodal learning design and assessment
• Cognition, emotion, and behavior in multimodal learning environments
• Multimodal data mining and utilization
• Multimodal learning theories
• Multimodal learning technologies and applications
• Extended reality (VR/AR/MR)