
ORIGINAL RESEARCH article
Front. Neurosci.
Sec. Auditory Cognitive Neuroscience
Volume 19 - 2025 | doi: 10.3389/fnins.2025.1482828
The final, formatted version of the article will be published soon.
Understanding speech in background noise is a challenging task, especially if the signal is also distorted. In a series of previous studies, we have shown that comprehension can improve if, simultaneously with the auditory speech, the person receives speech-extracted low-frequency signals on the fingertips. The effect increases after short audio-tactile speech training. Here we use resting-state functional magnetic resonance imaging, which measures spontaneous low-frequency oscillations in the brain at rest, to assess training-induced changes in functional connectivity. We show enhanced functional connectivity within a right-hemisphere cluster encompassing the middle temporal motion area (MT), the extrastriate body area (EBA), and the lateral occipital cortex (LOC); before training, this cluster was more strongly connected to the bilateral dorsal anterior insula. Furthermore, early visual areas switch from increased connectivity with the auditory cortex before training to increased connectivity after training with a sensory/multisensory association parietal hub, contralateral to the palm receiving vibrotactile input. In addition, the right sensorimotor cortex, including the finger representations, shows stronger internal connectivity after training. Altogether, the results can be interpreted within two main complementary frameworks. The first, speech-specific, framework relates to pre-existing brain connectivity for audio-visual speech processing, including early visual, motion, and body regions engaged in lip-reading and gesture analysis under difficult acoustic conditions, upon which the new audio-tactile speech network might be built. The second framework refers to spatial/body awareness and audio-tactile integration, both necessary for performing the task, and involves the revealed parietal and insular regions. An extended training period may be necessary to directly strengthen functional connections between the auditory and sensorimotor brain regions for this entirely novel multisensory task.
The results contribute to a better understanding of the largely unknown neuronal mechanisms underlying the benefits of tactile input for speech comprehension, and may be relevant for rehabilitation in the hearing-impaired population.
Keywords: speech comprehension, tactile aid, multisensory training, fMRI, resting-state functional MRI, cochlear implants
Received: 18 Aug 2024; Accepted: 04 Apr 2025.
Copyright: © 2025 Ciesla, Wolak and Amedi. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Katarzyna Ciesla, Reichman University, Herzliya, Israel
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.