AUTHOR=O’Sullivan Aisling E., Crosse Michael J., Di Liberto Giovanni M., Lalor Edmund C.
TITLE=Visual Cortical Entrainment to Motion and Categorical Speech Features during Silent Lipreading
JOURNAL=Frontiers in Human Neuroscience
VOLUME=10
YEAR=2017
URL=https://www.frontiersin.org/journals/human-neuroscience/articles/10.3389/fnhum.2016.00679
DOI=10.3389/fnhum.2016.00679
ISSN=1662-5161
ABSTRACT=
Speech is a multisensory percept comprising auditory and visual components. While the content and processing pathways of auditory speech have been well characterized, the visual component is less well understood. In this work, we expand current system identification methodologies to introduce a framework that facilitates the study of visual speech in its natural, continuous form. Specifically, we use models based on the unheard acoustic envelope (E), the motion signal (M), and categorical visual speech features (V) to predict EEG activity during silent lipreading. Our results show that each of these models performs similarly at predicting EEG in visual regions, and that combinations of the individual models (EV, MV, EM, and EMV) predict the neural activity better than their constituent models. In comparing these combinations, we find that the model incorporating all three feature types (EMV) outperforms the individual models as well as the EV and MV models, while performing similarly to the EM model. Importantly, EM does not outperform EV and MV, which, given the higher dimensionality of the V model, suggests that more data are needed to clarify this finding. Nevertheless, the performance of EMV, together with comparisons of per-subject performance for the three individual models, provides further evidence that visual regions are involved both in low-level processing of stimulus dynamics and in categorical speech perception. This framework may prove useful for investigating modality-specific processing of visual speech under naturalistic conditions.
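The model-comparison approach described above can be illustrated with a minimal sketch: fit a forward (encoding) model that predicts a neural signal from time-lagged stimulus features via ridge regression, then compare prediction accuracy (correlation on held-out data) between a single-feature model and a combined model. All data here are synthetic and the feature names (E for envelope, M for motion) simply mirror the abstract; the lag range, regularization value, and simulated signal are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def lagged_design(stim, lags):
    """Build a time-lagged design matrix from a (T, F) stimulus array."""
    T, F = stim.shape
    X = np.zeros((T, F * len(lags)))
    for i, lag in enumerate(lags):
        shifted = np.roll(stim, lag, axis=0)
        if lag > 0:
            shifted[:lag] = 0  # zero out samples wrapped from the end
        X[:, i * F:(i + 1) * F] = shifted
    return X

def fit_predict(stim_train, resp_train, stim_test, lags, lam=1.0):
    """Ridge regression: w = (X'X + lam*I)^-1 X'y, then predict on test data."""
    Xtr = lagged_design(stim_train, lags)
    Xte = lagged_design(stim_test, lags)
    w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(Xtr.shape[1]),
                        Xtr.T @ resp_train)
    return Xte @ w

rng = np.random.default_rng(0)
T = 2000
# Synthetic "envelope" (E) and "motion" (M) features, plus a neural signal
# that depends on both, so a combined EM model should predict it better.
E = rng.standard_normal((T, 1))
M = rng.standard_normal((T, 1))
eeg = 0.8 * E[:, 0] + 0.5 * M[:, 0] + 0.3 * rng.standard_normal(T)

lags = range(0, 5)   # assumed lag window, in samples
split = 1500         # train/test split point
EM = np.hstack([E, M])

pred_E = fit_predict(E[:split], eeg[:split], E[split:], lags)
pred_EM = fit_predict(EM[:split], eeg[:split], EM[split:], lags)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

r_E = corr(pred_E, eeg[split:])
r_EM = corr(pred_EM, eeg[split:])
print(f"r(E)={r_E:.3f}  r(EM)={r_EM:.3f}")
```

In this toy setup the combined model's test-set correlation exceeds the single-feature model's, which is the same kind of evidence the abstract uses to argue that the EMV model captures variance the individual models miss.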