Objectives: Surface electromyography (sEMG) is a standard tool in clinical routine and in clinical or psychosocial experiments, including speech research and orthodontics, for measuring the activity of selected facial muscles in order to objectify facial movements during specific facial exercises or experiments with emotional expressions. Such muscle-specific approaches neglect that facial muscles act more as an interconnected network than as single muscles for specific movements. What is missing is an optimal sEMG setting that allows synchronous measurement of the activity of all facial muscles as a whole.
Methods: A total of 36 healthy adult participants (53% women, aged 18–67 years) were included. Electromyograms were recorded from both sides of the face using an arrangement of electrodes oriented to the underlying topography of the facial muscles (Fridlund scheme) and, simultaneously, a geometric and symmetrical arrangement on the face (Kuramoto scheme). The participants performed a standard set of facial movement tasks. Linear mixed-effects models with adjustment for multiple comparisons were used to evaluate differences between the facial movement tasks, separately for each scheme. Data analysis used sEMG amplitudes as well as their maximum-normalized values to account for amplitude differences between the facial movements.
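The maximum-normalization step described above can be sketched in a few lines of Python. This is an illustrative stand-in, not the study's actual pipeline: the array shapes and amplitude values are invented, and the only assumption encoded is that each channel is scaled by its own maximum so that channels become comparable across movements with very different absolute amplitudes.

```python
import numpy as np

def max_normalize(amplitudes):
    """Scale each channel's sEMG amplitudes by that channel's maximum,
    so every channel peaks at exactly 1.0 (hypothetical per-channel scheme)."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    channel_max = amplitudes.max(axis=1, keepdims=True)
    return amplitudes / channel_max

# toy data: 2 electrode channels x 4 facial movement tasks (arbitrary units)
raw = [[12.0, 30.0, 6.0, 24.0],
       [5.0, 10.0, 2.5, 20.0]]
norm = max_normalize(raw)   # each row now has a maximum of 1.0
```

Task comparisons could then be run on such normalized values, e.g. with a linear mixed-effects model treating participant as a random effect, as the abstract describes.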
Results: Surface electromyography activation characteristics showed systematic regional distribution patterns of facial muscle activation for both schemes, with very low interindividual variability. Discrimination between the different sEMG patterns was good for both schemes (significant comparisons for sEMG amplitudes: 87.3% for both schemes; for normalized values: 90.9% for the Fridlund scheme and 94.5% for the Kuramoto scheme), with the Kuramoto scheme performing considerably better.
Conclusion: Facial movement tasks evoke specific patterns in the complex network of facial muscles rather than activating single muscles. A geometric and symmetrical sEMG recording covering the entire face seems to allow more specific detection of facial muscle activity patterns during facial movement tasks. Such sEMG patterns should be explored in further clinical and psychological experiments.
Spatial cognition is related to academic achievement in science, technology, engineering, and mathematics (STEM) domains. Neuroimaging studies suggest that activation of brain regions might be related to general cognitive effort while solving mental rotation tasks (MRT). In this study, we evaluated the mental effort of children performing MRT by measuring brain activation and pupil dilation. We used functional near-infrared spectroscopy (fNIRS) to collect brain hemodynamic responses from children’s prefrontal cortex (PFC) and, concurrently, an eye-tracking system to measure pupil dilation during MRT. Thirty-two healthy students aged 9–11 participated in the experiment. Behavioral measures such as performance on geometry problem-solving tests and MRT scores were also collected. The results showed significant positive correlations between the children’s MRT and geometry problem-solving test scores. There were also significant positive correlations between dorsolateral PFC (dlPFC) hemodynamic signals and visuospatial task performance (MRT and geometry problem-solving scores). Moreover, we found significant activation in the amplitude of deoxy-Hb variation in the dlPFC, and pupil diameter increased during the MRT, suggesting that both physiological responses are related to mental effort during the visuospatial task. Our findings indicate that children who exerted more mental effort during the task performed better. A multimodal approach to monitoring students’ mental effort can be of great interest for providing objective feedback on cognitive resource conditions and for advancing our comprehension of the neural mechanisms that underlie cognitive effort. Hence, the ability to detect two distinct mental states (rest or activation) of children during the MRT could eventually lead to applications investigating the visuospatial skills of young students in naturalistic educational paradigms.
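The score correlations reported above are standard Pearson correlations. As a minimal sketch of the computation, with invented scores for six hypothetical children (not the study's data):

```python
import numpy as np

# hypothetical scores for 6 children (illustrative only, not study data)
mrt = np.array([10, 14, 9, 17, 12, 19], dtype=float)        # MRT scores
geometry = np.array([55, 62, 50, 78, 60, 85], dtype=float)  # geometry test scores

# Pearson correlation between the two visuospatial measures
r = np.corrcoef(mrt, geometry)[0, 1]
```

A strong positive `r` (close to 1) would correspond to the pattern reported in the abstract, where children scoring higher on MRT also solved more geometry problems.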
The present study uses EEG time-frequency representations (TFRs) with a Flanker task to investigate whether and how individual differences in bilingual language experience modulate neurocognitive outcomes (oscillatory dynamics) in two bilingual group types: late bilinguals (L2 learners) and early bilinguals (heritage speakers, HSs). TFRs were computed for both incongruent and congruent trials. The difference between the two (the Flanker effect vis-à-vis cognitive interference) was then (1) compared between the HSs and the L2 learners, (2) modeled as a function of individual differences in bilingual experience within each group separately, and (3) probed for its potential (a)symmetry between brain and behavioral data. We found no differences at the behavioral and neural levels for the between-groups comparisons. However, oscillatory dynamics (mainly theta increase and alpha suppression) of inhibition and cognitive control were found to be modulated by individual differences in bilingual language experience, albeit distinctly within each bilingual group. While the results indicate adaptations toward differential brain recruitment in line with bilingual language experience variation overall, this does not manifest uniformly. Rather, earlier versus later onset of bilingualism (the bilingual type) seems to constitute an independent qualifier of how individual differences play out.
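Oscillatory measures like the theta increase and alpha suppression above are commonly extracted by convolving the EEG with complex Morlet wavelets. The sketch below is a minimal stand-in under invented parameters (sampling rate, cycle count, simulated signal); it is not the study's analysis pipeline, only an illustration of why a 6 Hz oscillation yields high theta-band power and low alpha-band power.

```python
import numpy as np

def morlet_power(signal, fs, freq, n_cycles=7):
    """Instantaneous power at one frequency via convolution with a
    complex Morlet wavelet (Gaussian-windowed complex exponential)."""
    sigma = n_cycles / (2 * np.pi * freq)            # Gaussian width in seconds
    t = np.arange(-3 * sigma, 3 * sigma, 1.0 / fs)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma**2))
    wavelet /= np.abs(wavelet).sum()                 # unit-gain normalization
    analytic = np.convolve(signal, wavelet, mode="same")
    return np.abs(analytic) ** 2

fs = 250.0
t = np.arange(0, 2.0, 1.0 / fs)
sig = np.sin(2 * np.pi * 6 * t)                      # simulated 6 Hz "theta" rhythm
theta = morlet_power(sig, fs, 6.0).mean()            # power at 6 Hz (theta band)
alpha = morlet_power(sig, fs, 10.0).mean()           # power at 10 Hz (alpha band)
```

Computing such power estimates across a grid of frequencies and time points yields the TFRs that the congruent/incongruent contrast is built from.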
Virtual reality environments offer great opportunities to study the performance of brain-computer interfaces (BCIs) in real-world contexts. As real-world stimuli are typically multimodal, their neuronal integration elicits complex response patterns. To investigate the effect of additional auditory cues on the processing of visual information, we used virtual reality to mimic safety-related events in an industrial environment while we concomitantly recorded electroencephalography (EEG) signals. We simulated a box traveling on a conveyor belt system where two types of stimuli – an exploding box and a burning box – interrupt regular operation. The recordings from 16 subjects were divided into two subsets, a visual-only and an audio-visual experiment. In the visual-only experiment, both stimuli elicited a similar response pattern – a visual evoked potential (VEP) followed by an event-related potential (ERP) over the occipital-parietal lobe. Moreover, we found the perceived severity of the event to be reflected in the signal amplitude. Interestingly, the additional auditory cues had a twofold effect on these findings: the P1 component was significantly suppressed for the exploding box stimulus, whereas the N2c component was enhanced for the burning box stimulus. This result highlights the impact of multisensory integration on the performance of realistic BCI applications. Indeed, we observed alterations in the offline classification accuracy for a detection task based on mixed feature extraction (variance, power spectral density, and discrete wavelet transform) and a support vector machine classifier. For the explosion, the accuracy slightly decreased by 1.64 percentage points in the audio-visual experiment compared to the visual-only experiment. Conversely, the classification accuracy for the burning box increased by 5.58 percentage points when additional auditory cues were present.
Hence, we conclude that, especially in challenging detection tasks, it is favorable to consider the potential of multisensory integration when BCIs are supposed to operate under (multimodal) real-world conditions.
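The mixed feature extraction mentioned above can be illustrated in simplified form. The sketch below computes per-epoch variance and FFT-based band powers with numpy only; the study additionally used discrete-wavelet-transform features and fed everything into an SVM classifier, both of which are omitted here, and all parameters (sampling rate, bands, simulated epoch) are invented for illustration.

```python
import numpy as np

def mixed_features(epoch, fs=250.0, bands=((4, 8), (8, 13), (13, 30))):
    """Toy per-channel feature vector: signal variance plus band powers
    from an FFT periodogram (a simplified stand-in for a
    variance / power-spectral-density feature mix)."""
    epoch = np.asarray(epoch, dtype=float)
    feats = [epoch.var()]
    freqs = np.fft.rfftfreq(epoch.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(epoch)) ** 2 / epoch.size
    for lo, hi in bands:
        feats.append(power[(freqs >= lo) & (freqs < hi)].sum())
    return np.array(feats)

# simulated one-second EEG epoch dominated by a 10 Hz (alpha) oscillation
fs = 250.0
t = np.arange(0, 1.0, 1.0 / fs)
epoch = np.sin(2 * np.pi * 10 * t)
f = mixed_features(epoch, fs)   # [variance, theta, alpha, beta power]
```

Feature vectors of this kind, stacked across epochs and channels, are what a support vector machine would be trained on for the detection task.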
Movies and narratives are increasingly utilized as stimuli in functional magnetic resonance imaging (fMRI), magnetoencephalography (MEG), and electroencephalography (EEG) studies. Subjects’ emotional reactions, what they pay attention to, what they memorize, and their cognitive interpretations are all examples of inner experiences that can differ between subjects while they watch movies and listen to narratives inside the scanner. Here, we review literature indicating that behavioral measures of inner experiences play an integral role in this new research paradigm by guiding neuroimaging analysis. We review behavioral methods that have been developed to sample inner experiences while subjects watch movies and listen to narratives. We also review approaches that allow joint analyses of the behaviorally sampled inner experiences and neuroimaging data. We suggest that building neurophenomenological frameworks holds potential for resolving the interrelationships between inner experiences and their neural underpinnings. Finally, we tentatively suggest that recent developments in machine learning may pave the way for inferring different classes of inner experiences directly from the neuroimaging data, thus potentially complementing behavioral self-reports.
In many situations, decisions are made by teams (i.e., groups of more than two persons) rather than by individuals. However, brain-based models of how team interactions contribute to and impact team collaborative decision-making (TCDM) behavior are lacking. To examine the neural substrates activated during TCDM in realistic, interpersonal interaction contexts, dyads were asked to make collaborative decisions toward their opponent in a multi-person prisoner’s dilemma game, while neural activity was measured using functional near-infrared spectroscopy. These experiments yielded two main findings. First, TCDM and individual separate decision-making (ISDM) engaged different neural substrates, which were modulated by social environmental cues: a low incentive reward yielded higher activation within the left inferior frontal gyrus (IFG) during the ISDM stage, whereas the dorsolateral prefrontal cortex (DLPFC) and the middle frontopolar area were activated during the TCDM stage, and a high incentive reward evoked a higher interbrain synchrony (IBS) value in the right IFG during the TCDM stage. Second, males showed higher activation in the DLPFC and the middle frontopolar area during ISDM, while females showed higher IBS in the right IFG during TCDM. These sex effects suggest that in individual social dilemma situations, males and females may depend on non-social and social cognitive abilities, respectively, to make decisions, whereas in the social interaction situations of TCDM, females may depend on both social and non-social cognitive abilities. This study provides a compelling basis and an interesting perspective for future neuroscience work on TCDM behaviors.
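The abstract does not specify how the interbrain synchrony (IBS) values were computed. As a crude illustrative proxy only, the sketch below measures the mean windowed Pearson correlation between two simulated participants' hemodynamic time courses; the signals, window length, and coupling model are all invented for illustration.

```python
import numpy as np

def windowed_sync(sig_a, sig_b, win=50):
    """Crude interbrain-synchrony proxy: mean absolute Pearson
    correlation over non-overlapping sliding windows (an illustrative
    stand-in, not the study's IBS metric)."""
    rs = []
    for start in range(0, len(sig_a) - win + 1, win):
        a = sig_a[start:start + win]
        b = sig_b[start:start + win]
        rs.append(abs(np.corrcoef(a, b)[0, 1]))
    return float(np.mean(rs))

rng = np.random.default_rng(0)
shared = rng.standard_normal(500)                  # common task-driven component
brain_a = shared + 0.3 * rng.standard_normal(500)  # participant A + noise
brain_b = shared + 0.3 * rng.standard_normal(500)  # participant B + noise
unrelated = rng.standard_normal(500)               # signal from an uncoupled pair

coupled = windowed_sync(brain_a, brain_b)
baseline = windowed_sync(brain_a, unrelated)
```

A coupled dyad yields a markedly higher synchrony value than an uncoupled pair, which is the kind of contrast the IBS results above rest on.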