In everyday life, we integrate inputs from multiple sensory modalities to plan and control actions aimed at exploring the surrounding environment. Through repeated interactions with the physical world, the Central Nervous System builds motor-sensory couplings that underpin the ability to predict the sensory consequences of our actions and to monitor task execution. These processes allow us, for instance, to accurately reach for an elevator button, to transfer a cellphone from one hand to the other without dropping it, or to learn how to use a new tool. Inputs from visual, somatosensory, auditory, and vestibular receptors inform us about intrinsic (e.g., size) and extrinsic (e.g., location) object features, as well as about the state of our body (e.g., limb position). Motor control, in turn, allows the organization of effective patterns of action aimed at achieving the goal while compensating for external perturbations.
Over the last twenty years, there has been growing interest in how this multisensory-motor integration process unfolds to enable the planning and execution of reaching and grasping. However, while psychophysical studies have provided evidence on how multisensory inputs are optimally integrated to create a coherent percept (a standard formulation is sketched after the question list below), it remains unclear how multiple sources of sensory information are coupled with motor commands to shape motor execution in dexterous tasks. From a scientific standpoint, filling this gap would yield a deeper understanding of how humans cope with the different signal-to-noise ratios arising from the combination of sensory inputs and still perform accurate actions. From a more applied perspective, this line of research would benefit rehabilitation efforts to restore or improve dexterity in individuals with sensorimotor impairments, e.g., stroke survivors. Similarly, this framework would provide insights for the design of robotic devices aimed at enhancing or performing human-like interactive actions. Within this framework, the following questions in particular remain to be investigated:
1) How is multisensory information about the to-be-reached/grasped object integrated with motor commands?
2) How is multisensory information about the position of the body and limbs integrated and used to plan and execute reach/grasp movements?
3) How are inputs from different sensory modalities integrated during object manipulation?
4) What are the neural representations of multisensory reaching, grasping, and manipulation?
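For readers less familiar with the psychophysical notion of "optimal" integration referenced above, it is usually formalized as reliability-weighted (maximum-likelihood) cue combination, in which each cue's contribution is proportional to its inverse variance. The snippet below is only a minimal illustrative sketch of that standard model, not a method proposed by this Research Topic; the cue names and numerical values are hypothetical.

```python
import numpy as np

def combine_cues(estimates, variances):
    """Reliability-weighted (maximum-likelihood) fusion of independent
    Gaussian cue estimates: weights are normalized inverse variances."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = (1.0 / variances) / np.sum(1.0 / variances)
    fused_estimate = np.sum(weights * estimates)
    # The fused variance is never larger than the most reliable single cue's variance.
    fused_variance = 1.0 / np.sum(1.0 / variances)
    return fused_estimate, fused_variance

# Hypothetical example: visual and proprioceptive estimates of target distance (cm).
# Vision is the more reliable cue here, so it dominates the fused estimate.
pos, var = combine_cues(estimates=[30.0, 33.0], variances=[1.0, 4.0])
print(f"fused estimate = {pos:.2f} cm, fused variance = {var:.2f} cm^2")
```

In this toy case the fused estimate (30.6 cm) lies closer to the visual cue, and its variance (0.8 cm^2) is lower than that of either cue alone, which is the behavioral signature typically tested in cue-combination experiments.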
This Research Topic on multisensory-motor integration aims to provide novel insights into the processes underlying reaching and grasping behaviors under multisensory conditions of object manipulation. This framework is meant to reflect the properties of naturalistic environments, where sensory cues are not isolated but optimally combined and intertwined with action. Authors are encouraged to submit papers reporting behavioral and neurophysiological investigations (from EMG to EEG and fMRI) aimed at understanding the contribution of multisensory information, and its coupling with motor commands, to reaching, grasping, and object manipulation in humans.