- 1Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- 2Defitech Chair in Clinical Neuroengineering, Center for Neuroprosthetics and Brain Mind Institute, EPFL, Geneva, Switzerland
- 3Grenoble Institut Neurosciences, Inserm U1216, CHU Grenoble, Grenoble, France
- 4Université Clermont Auvergne, CNRS, LaPSCo, CHU Clermont-Ferrand, Clermont-Ferrand, France
- 5Hôpital Fondation Rothschild, I3N, Paris, France
- 6Université de Paris, INCC UMR 8002, CNRS, Paris, France
- 7UMR 1253 iBrain, Université de Tours, Inserm, Tours, France
Visual processing is thought to function in a coarse-to-fine manner. Low spatial frequencies (LSF), conveying coarse information, would be processed early to generate predictions. These LSF-based predictions would facilitate the subsequent integration of high spatial frequencies (HSF), conveying fine details. The predictive role of LSF might be crucial in automatic face processing, where high performance could be explained by an accurate selection of cues in early processing. In the present study, we used a visual Mismatch Negativity (vMMN) paradigm presenting an unfiltered face as standard stimulus, and the same face filtered in LSF or HSF as deviant, to investigate the predictive role of LSF vs. HSF during automatic face processing. If LSF are critical for predictions, LSF deviants should elicit less prediction error (i.e., reduced mismatch responses) than HSF deviants. Results show that both LSF and HSF deviants elicited a mismatch response compared with their equivalents in an equiprobable sequence. However, in line with our hypothesis, LSF deviants evoked significantly reduced mismatch responses compared to HSF deviants, particularly at later stages. The difference in mismatch between the HSF and LSF conditions involved posterior areas and the right fusiform gyrus. Overall, our findings suggest a predictive role of LSF during automatic face processing and a critical involvement of HSF in the fusiform gyrus during the conscious detection of changes in faces.
1. Introduction
1.1. Spatial Frequency in Face Processing
Visual processing of faces is a complex mechanism, relying on high- and low-level cognitive functions. Among the latter, from the first steps of visual perception, face recognition involves spatial frequency (SF) processing (e.g., Vuilleumier et al., 2003; Goffaux and Rossion, 2006; Goffaux et al., 2011). SF refer to the spectrum of spatial information in an image, expressed as a number of cycles per degree (cpd) of visual angle and derived from the Fourier transform (Morrison and Schyns, 2001; Park et al., 2012; Bachmann, 2016). While low spatial frequencies (LSF) convey coarse information mainly through the dorsal stream, high spatial frequencies (HSF) convey fine details through the ventral stream (see Skottun, 2015).
How are LSF and HSF involved in face processing? This question has been extensively studied, but results are mixed and not conclusive (for reviews see Ruiz-Soler and Beltran, 2006; Jeantet et al., 2018). Indeed, SF preference depends on the task (Schyns and Oliva, 1999; Ruiz-Soler and Beltran, 2006; Smith and Merlusca, 2014), at least in behavioral studies. For example, emotion detection and gender categorization would not be carried by the same SF (Schyns and Oliva, 1999). Emotion categorization would rely more on LSF, particularly at the early stages (e.g., Schyns and Oliva, 1999; Mermillod et al., 2005, 2010; Wang et al., 2021; but see Deruelle et al., 2004; Jennings et al., 2017). Nevertheless, this pattern can be reversed by additional task constraints, such as an interference effect (Lacroix et al., 2021; Shankland et al., 2021; but see Beffara et al., 2015) or the complexity of the emotion (Cassidy et al., 2021), which lead observers to rely more on HSF. The type of emotional content (Kumar and Srinivasan, 2011; Wang et al., 2015), the awareness of the stimulus (De Gardelle and Kouider, 2010), and individual differences (Dube et al., 2014; Langner et al., 2015) would also influence the preference in SF processing. However, a large body of neuroimaging evidence indicates that faces are usually processed in a coarse-to-fine manner, with LSF being processed faster than HSF (e.g., Halit et al., 2006; Hegdé, 2008; Nakashima et al., 2008; Vlamings et al., 2009; Goffaux et al., 2011; Tian et al., 2018; Petras et al., 2019, 2021). The efficiency of coarse-to-fine processing has also been demonstrated in computer vision (e.g., Zhou et al., 2013; Zhang et al., 2014).
Within the predictive coding framework (Rao and Ballard, 1999), Bar et al. (2006) proposed a neurocognitive model of coarse-to-fine processing. LSF would be quickly extracted and transmitted to the orbitofrontal cortex, where predictions would be formed (Bar et al., 2006; Bar, 2007). These predictions would then be sent back to infero-temporal areas (Kveraga et al., 2007), guiding the processing of details extracted from HSF information in a top-down manner and facilitating fast recognition (Bar et al., 2006; Bar, 2007). This predictive brain model of visual perception is supported by a magnetoencephalography study showing activation of the orbitofrontal cortex beginning at 80 ms and synchronizing with the fusiform gyrus around 130 ms, driven by LSF, during object recognition (Bar et al., 2006). Regarding face processing, recent findings showing that informative LSF modulate the processing of HSF during passive viewing of faces (Petras et al., 2019, 2021) are also in accordance with this model. However, to our knowledge, no neuroimaging study has specifically investigated Bar's model, that is, the predictive role of LSF, during face processing. As visual stimuli such as faces are processed automatically, at the pre-attentive level (Palermo and Rhodes, 2010; Kovarski et al., 2019), such a task should involve neither explicit instructions nor explicit recognition.
1.2. Visual Mismatch Negativity
Pre-attentive visual processing can be investigated with oddball paradigms, during which rare deviant stimuli are presented within a stream of frequent standard stimuli. With this type of paradigm, automatic change detection is measured with the Mismatch Negativity (MMN), a component initially recorded in the auditory modality (Näätänen et al., 1978) that can also be elicited in the visual and somatosensory modalities. The visual MMN (vMMN) is a differential negative event-related potential (ERP) reflecting the pre-attentive neural mechanism involved in the automatic detection of unpredicted visual changes within a learned regularity (Czigler et al., 2002, 2006; Stefanics et al., 2015). In line with the predictive coding framework (Rao and Ballard, 1999; Friston, 2005, 2010), the vMMN is often considered a neural correlate of prediction error (Friston, 2005; Garrido et al., 2009; Stefanics et al., 2014; but see May, 2021; O'Reilly and O'Reilly, 2021), i.e., the difference between sensory input and predictions based on an internal model constructed upon the regularity of standard stimuli. It thus appears particularly suitable for investigating predictive processes. The vMMN is usually observed in a wide time window between 100 and 500 ms depending on the study, and includes one (e.g., Tales et al., 1999) or two deflections (e.g., Heslenfeld, 2003; Czigler et al., 2006). While posterior activity is systematically observed (Kimura et al., 2010; Urakawa et al., 2010; Cléry et al., 2013), the vMMN can also be found later in temporal regions (Heslenfeld, 2003; Kuldkepp et al., 2013). Additionally, a central positivity is sometimes observed (e.g., Czigler et al., 2006; Cleary et al., 2013; File et al., 2017).
An fMRI study investigating the brain correlates of automatic visual change detection for shapes found greater activation in response to deviant than to standard stimuli in a wide network including the left posterior parietal, anterior pre-motor, and superior occipital cortices, the left medial frontal and orbitofrontal gyri, and the visual dorsal and ventral streams (Cléry et al., 2013). These results show the involvement of both areas dedicated to visual perception and areas related to pre-attentive change detection.
vMMN has been observed in a broad range of tasks, at different levels of visual processing. Thus, vMMN is elicited during change detection of color (Liu and Shi, 2008; Urakawa et al., 2010), line orientation (Yan et al., 2017), shape (Cléry et al., 2013), motion (Kuldkepp et al., 2013; Schmitt et al., 2018; Rowe et al., 2020), and spatial frequency (Heslenfeld, 2003; Sulykos and Czigler, 2011; Cleary et al., 2013; Susac et al., 2014). However, vMMN studies on spatial frequency changes have not directly contrasted responses to HSF vs. LSF deviants. For instance, in Cleary et al. (2013), standards were always HSF gratings and deviants LSF gratings. Sulykos and Czigler (2011) used gratings but did not compare responses to HSF vs. LSF gratings, as the authors were interested in the additive effect of two deviant features (orientation and spatial frequency) and in visual field effects. Heslenfeld (2003) did compare deviance responses to HSF vs. LSF gratings but did not find any interaction between deviance and spatial frequency, whereas Susac et al. (2014) found opposite polarities for the vMMN response to HSF compared to LSF, with opposite orientations of the sources as well.
vMMN has also been observed for socially relevant changes such as facial emotion (Astikainen and Hietanen, 2009; Astikainen et al., 2013; Kreegipuu et al., 2013; Kovarski et al., 2017; Chen et al., 2020), gender (Kecskés-Kovács et al., 2013), attractiveness (Zhang et al., 2018), and identity (Rossion et al., 2020). However, vMMN in response to different spatial frequencies has never been studied with complex stimuli such as scenes, objects, or faces. Yet, studying the vMMN elicited by spatially filtered faces could help clarify which spatial frequency band is mainly involved in face processing at a pre-attentive level, and more specifically the predictive role of LSF information.
1.3. Aim and Hypotheses
The aim of the current study was to determine to what extent LSF or HSF generate predictions in an intrinsically predictive task (i.e., an oddball task) involving automatic face processing. To do so, participants performed a concurrent task that maintained their attention on the stimuli while allowing their implicit processing (Flynn et al., 2016; Kovarski et al., 2017; Male et al., 2020). This task should favor neither global nor local perception, so as not to bias processing toward HSF or LSF. Here, we designed an oddball task involving a gray-scale unfiltered neutral face as standard stimulus, the same face in color as target (as color involves both ventral and dorsal streams and thus does not orient processing toward LSF or HSF; Claeys et al., 2004), and the same gray-scale face filtered in HSF or LSF as deviant stimuli.
Additionally, we used an equiprobable sequence as a control condition to deal with adaptation/refractoriness and differences in physical features (Grill-Spector et al., 2006; Li et al., 2012; Stefanics et al., 2014; Kovarski et al., 2017; Male et al., 2020). Indeed, contrary to typical vMMN paradigms, which usually compare physically different stimuli (deviant vs. standard) and therefore cannot disentangle the response to regularity violation from the response to physical differences between stimuli, this control paradigm enables the comparison of identical stimuli (Garrido et al., 2009; Stefanics et al., 2014; Fitzgerald and Todd, 2020).
We capitalized on the fact that less predictable (i.e., more surprising) deviants would elicit more negative amplitudes (see Stefanics et al., 2014), in accordance with the notion of prediction error signaling. Based on this account and on Bar's model (Bar et al., 2006), we hypothesized that HSF deviants would elicit a larger vMMN response than LSF deviants, as the latter are supposed to be at the root of the predictive process in visual perception. In other words, predictions derived from the LSF content of the unfiltered standard would match the LSF deviant but not the HSF deviant, eliciting a larger prediction error in the latter case.
2. Materials and Methods
2.1. Participants
Thirty-four healthy adults (18 females; mean age ± SD [range] = 29.4 ± 7.5 [19.5–46.0]) with no psychiatric or neurological disorder participated in this study. Visual acuity was tested using the Landolt C task of the Freiburg Vision Test (FrACT3), version 3.10.5 (Bach, 1996). All participants had a logMAR <0.10. Participants gave their written informed consent after being provided with information on the study's objectives and procedures. The study was approved by the Ethics Committee (Comité de Protection des Personnes Île de France 1—IRB/IORG: IORG0009918) under agreement number 2019-A01145-52. Participants received monetary compensation for their participation. EEG acquisitions were performed at the IRMaGe neurophysiology facility (Grenoble, France).
2.2. Stimuli and Procedure
The procedure and stimuli were previously used and behaviorally validated (see Kovarski et al., 2017), but emotional deviants were replaced by spatially filtered deviants. Stimuli were photographs of two neutral faces of the same actress (Figure 1A) presented in an oddball and an equiprobable sequence (Figure 1B). In the oddball sequence, the standard stimulus was a gray-scale unfiltered face presented with a probability of occurrence of p = 0.80. The deviant stimuli were the same photograph either filtered in LSF (dLSF; p = 0.10) or filtered in HSF (dHSF; p = 0.10). LSF images contained only SF below 1.5 cycles per degree (cpd; 8.7 cycles per face) and HSF images contained only SF above 6 cpd (34.2 cycles per face). These cutoffs were chosen because the SF preferentially used in face processing range from 4.5 to 37 cycles per face (for a review see Jeantet et al., 2018) and previous studies of SF in face processing used similar cutoffs (e.g., Goffaux and Rossion, 2006; Goffaux et al., 2011; Beffara et al., 2015). Filtered images were obtained by fast Fourier transform, multiplying the Fourier energy with Gaussian filters. Images were normalized to obtain a mean luminance of 0.5 (for luminance values between 0 and 1) with a standard deviation of 0.075 (root mean square contrast). SF filtering and normalization were implemented in MATLAB (MathWorks Inc., Sherborn, MA, USA). The target stimuli (p = 0.05 among standard stimuli) were the same photograph but colored, so that they did not favor HSF or LSF processing. Saturation of the colored image was lowered to reduce its salience among the stimuli and maintain attention. Color images were filtered based on a luminance (L)-chrominance (Chr) decomposition (L = (R+G+B)/3 and Chr = [R−L, G−L, B−L]). Only the luminance L was filtered, either low-pass or high-pass, and the chrominance was added back to the filtered luminance with a multiplication factor of 3/5 to decrease its variance.
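The filtering and normalization steps described above can be sketched as follows. The authors implemented them in MATLAB; this is an illustrative Python/NumPy equivalent, in which the placement of the Gaussian envelope (its standard deviation set to the cutoff frequency) is our assumption, since the exact filter parameters are not reported.

```python
import numpy as np

def gaussian_sf_filter(img, cutoff_cpd, deg_per_image, lowpass=True):
    """Filter a gray-scale image in the Fourier domain with a Gaussian envelope.

    cutoff_cpd   : cutoff in cycles per degree (Gaussian sigma; an assumption)
    deg_per_image: visual angle subtended by the image, in degrees
    """
    h, w = img.shape
    # frequency axes converted from cycles/image to cycles/degree
    fy = np.fft.fftfreq(h)[:, None] * h / deg_per_image
    fx = np.fft.fftfreq(w)[None, :] * w / deg_per_image
    radius = np.sqrt(fx**2 + fy**2)
    gauss = np.exp(-(radius**2) / (2 * cutoff_cpd**2))  # low-pass envelope
    if not lowpass:
        gauss = 1.0 - gauss                             # high-pass complement
    return np.real(np.fft.ifft2(np.fft.fft2(img) * gauss))

def normalize_luminance(img, mean_lum=0.5, rms=0.075):
    """Set mean luminance and RMS contrast (luminance values in [0, 1])."""
    z = (img - img.mean()) / img.std()
    return z * rms + mean_lum
```

For the LSF deviant, for instance, one would call `gaussian_sf_filter(img, 1.5, 5.8, lowpass=True)` followed by `normalize_luminance`.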
Figure 1. Stimuli and procedure. (A) The first line represents the gray-scale stimuli used in the oddball sequence (std, standard; dLSF, deviant Low Spatial Frequency; dHSF, deviant High Spatial Frequency) and in the equiprobable sequence (eBSF, equiprobable Broad Spatial Frequency; eLSF, equiprobable Low Spatial Frequency; eHSF, equiprobable High Spatial Frequency). The second line represents the colored target stimuli for the oddball sequence (first face) and for the equiprobable sequence (all faces). (B) Illustration of the oddball and equiprobable sequences. (C) Task schematic of the oddball sequence.
The oddball sequence (Figure 1B) comprised 1,575 stimuli, presented in two sessions of 10 min each. In the equiprobable sequence (Figure 1B), the three stimuli of the oddball sequence (renamed eBSF—for Broad Spatial Frequencies/unfiltered stimuli—, eHSF, and eLSF) as well as three additional stimuli (eBSF2, eLSF2, and eHSF2) derived from another neutral expression of the same actress (with the mouth slightly opened—Figure 1A) were presented pseudo-randomly, avoiding immediate repetition, each with a probability of occurrence of ~0.16. In this sequence, target stimuli were the same stimuli but colored (p = 0.05 overall; p ≈ 0.01 each). This sequence comprised 835 stimuli presented in one session of 10 min. The order of the oddball and equiprobable sequences was counterbalanced, as was the order of the two oddball sessions.
While participants sat comfortably in an armchair, stimuli were displayed centrally on a CRT screen (37 × 29.6 cm; refresh rate = 75 Hz; resolution = 1,280 × 1,024 pixels) at a viewing distance of 87 cm so that the faces corresponded to 5.8° of visual angle. Stimuli were presented using Presentation® software (Neurobehavioral Systems, Inc., Berkeley, CA, http://www.neurobs.com) for 150 ms with a 550 ms inter-stimulus interval (Figure 1C). Participants were instructed to look at the fixation cross and to press a button as quickly as possible when they saw a colored face. All subjects were monitored with a camera during the recording session.
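The relation between physical stimulus size, viewing distance, and the 5.8° visual angle reported above follows the standard trigonometric formula; the helper names below are ours, and the ~8.8 cm face size is back-computed for illustration, not reported by the authors.

```python
import math

def visual_angle_deg(size_cm, distance_cm):
    """Visual angle (degrees) subtended by a stimulus of physical size
    size_cm viewed at distance_cm."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

def size_for_angle_cm(angle_deg, distance_cm):
    """Inverse: physical size needed to subtend angle_deg at distance_cm."""
    return 2 * distance_cm * math.tan(math.radians(angle_deg) / 2)
```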
2.3. Behavioral Data Collection and Analysis
Hits, false alarms, misses, and correct rejections in the target detection task were recorded during the experiment. The sensitivity index d′ = (z-score of the hit rate) − (z-score of the false alarm rate) was calculated with the psycho package (Makowski, 2018) in R version 4.0.3 (R Core Team, 2020) and RStudio version 1.3.1075 (RStudio Team, 2019) to evaluate the involvement of the participants in the task.
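The authors computed d′ with the R psycho package; a minimal Python sketch of the same index is shown below. The log-linear correction for extreme rates is our assumption (it guards against hit or false-alarm rates of exactly 0 or 1) and may differ from psycho's internal correction.

```python
from statistics import NormalDist

def d_prime(hits, misses, fas, crs):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    Adds 0.5 to each cell (log-linear correction, an assumption) so that
    the inverse normal CDF is always defined.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (fas + 0.5) / (fas + crs + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)
```

A participant detecting nearly all targets with almost no false alarms yields a d′ well above 3, in line with the mean of 4.52 reported below.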
2.4. EEG Data Collection and Analysis
2.4.1. EEG Recording
EEG data were recorded using a system of 96 active electrodes (BrainAmp amplifiers and EasyCaps, Brain Products GmbH, Germany) following the 10-5 standard system. Electrooculographic (EOG) activity was recorded using two electrodes on the left and right outer canthi of the eyes and two above and below the left eye, to detect horizontal and vertical eye movements, respectively (hEOG and vEOG). The ground electrode for the EOG was placed on the left base of the neck. Impedances were adjusted and kept below 25 kΩ before and during the recording. The signal was recorded at a sampling rate of 1,000 Hz, using an anti-aliasing filter at 500 Hz. FPz and FCz were defined as the ground and reference electrodes, respectively.
2.4.2. EEG Pre-processing
EEG pre-processing and analysis were performed using the Brainstorm software (Tadel et al., 2011) and custom scripts developed in MATLAB (The MathWorks Inc.). First, bad channels were visually inspected and removed during the recording and during pre-processing for each participant, based on both temporal (deviant dynamics, flat signals) and frequency (deviant Welch's power spectral density) characteristics. Time periods contaminated by high-frequency muscular artifacts were discarded manually. We then re-referenced the signal to the average reference. Horizontal and vertical eye movement artifacts were identified by analyzing the corresponding EOG recordings and corrected by applying a specific signal-space projection (SSP, a spatial decomposition method comparable to independent component analysis; Uusitalo and Ilmoniemi, 1997). To do so, hEOG and vEOG signals were band-pass filtered between 1.5 and 20 Hz or between 1.5 and 40 Hz, respectively, and then normalized using z-scores. Any time period containing data above two standard deviations was considered an ocular movement artifact. SSP was then computed on the −200 to 200 ms time window relative to the artifact onset. The resulting SSP component corresponding to eye movements was finally identified and removed from the signal. The clean signal was band-pass filtered using cutoffs of 0.1 and 40 Hz. Time series of the rejected channels were interpolated using their neighboring channels. Finally, trials were epoched over a 700 ms analysis period, from 100 ms pre-stimulus to 600 ms post-stimulus. After pre-processing, a total of 0.2% of the trials were discarded.
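The z-score-based detection of ocular artifacts on the EOG channels can be illustrated with a small sketch. This is a simplified stand-in for the Brainstorm pipeline (the function name and edge handling are ours); the detected onsets would then anchor the −200 to 200 ms SSP windows.

```python
import numpy as np

def detect_eog_artifacts(eog, z_thresh=2.0):
    """Return sample indices where the z-scored EOG signal first crosses
    the threshold, i.e., onsets of putative eye-movement artifacts."""
    z = (eog - eog.mean()) / eog.std()
    mask = np.abs(z) > z_thresh
    # rising edges of the supra-threshold mask mark artifact onsets
    onsets = np.flatnonzero(np.diff(mask.astype(int)) == 1) + 1
    if mask[0]:
        onsets = np.insert(onsets, 0, 0)
    return onsets
```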
2.4.3. Event-Related Potentials
The first three trials of each sequence as well as trials occurring after target or deviant stimuli were excluded from ERP processing and analyses. Each ERP was computed by averaging all the trials of each stimulus of interest from the oddball sequence (standard, dHSF, dLSF) and from the equiprobable sequence (eBSF, eHSF, eLSF), then standardized using z-scores against baseline (taken prior to stimulus onset, from −100 to −1 ms) for each subject. Non-standardized and standardized ERPs of each participant in each condition of interest were visually inspected. Again, any remaining deviant electrode was discarded and interpolated using its neighboring channels. In the end, a mean of five channels out of 96 (range = 0–18) was interpolated per participant. vMMNs for HSF and LSF were calculated as the arithmetic difference between ERPs to deviant and to equiprobable stimuli, taken from the oddball and equiprobable sequences (dHSF-eHSF and dLSF-eLSF, respectively). Grand average difference waveforms were finally computed across participants.
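The ERP averaging, baseline z-scoring, and difference-wave computation can be sketched as follows (illustrative Python; the trials x channels x time array layout and function names are assumptions):

```python
import numpy as np

def erp_z(epochs, baseline):
    """Average epochs (trials x channels x time), then z-score each
    channel against its own pre-stimulus baseline samples."""
    avg = epochs.mean(axis=0)
    mu = avg[:, baseline].mean(axis=1, keepdims=True)
    sd = avg[:, baseline].std(axis=1, keepdims=True)
    return (avg - mu) / sd

def vmmn(deviant_epochs, equiprobable_epochs, baseline):
    """Mismatch response: ERP(deviant) - ERP(equiprobable), computed on
    physically identical stimuli from the two sequences."""
    return erp_z(deviant_epochs, baseline) - erp_z(equiprobable_epochs, baseline)
```

With a −100 to −1 ms baseline, `baseline` would be the slice of samples preceding stimulus onset.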
2.4.4. Source Level Analysis
Source reconstruction was performed to estimate the anatomical location of the electric sources that could explain the activities recorded on the scalp. It was performed with the sLORETA method (standardized low-resolution brain electromagnetic tomography) on the ICBM152 brain template using a volumetric head model. The model was computed using the symmetric boundary element method implemented in the OpenMEEG open-source software, with the default values for conductivity and layer thickness (Gramfort et al., 2010). For each participant, we calculated the noise covariance matrices from the concatenation of all the baseline periods (i.e., −100 to −1 ms before stimulus onset). Source activities were reconstructed on each of the 15,000 cortical vertices using sLORETA. Individual source maps were normalized against baseline (z-score) and averaged across subjects to obtain the final group maps. These maps were used to show the potential sources of significant clusters, by averaging activities in the corresponding time windows.
2.5. Statistics
2.5.1. ERPs Analyses
In order to assess the sensory response to filtered and unfiltered equiprobable stimuli (eHSF, eLSF, and eBSF), we investigated ERP components. We extracted the P100 peak amplitude for each participant using MATLAB scripts (based on the findpeaks function) and visual inspection, over the 60–140 ms latency range on O1 and O2 and on PO7 and PO8. However, as a negative peak was observed for the P100 in the HSF condition (Figure 2A), we performed the P100 analysis on PO7 and PO8 only. For the N170, a P100–N170 peak-to-peak difference was computed by measuring peaks in the 60–140 and 130–200 ms latency ranges on PO7 and PO8. Data were then analyzed with repeated measures analyses of variance (ANOVA) in RStudio (RStudio Team, 2019) using the afex package (Singmann et al., 2021), with Huynh-Feldt correction in case of departure from sphericity (tested with Mauchly tests). Analyses included SF (eBSF, eLSF, eHSF) and channel/hemisphere (PO7 vs. PO8) as within-subject factors. Post-hoc tests were performed with the emmeans package (Lenth, 2021), applying a Bonferroni correction. In case of strong violation of assumptions (normality and sphericity; which was the case for most analyses), we ran non-parametric tests, i.e., the Friedman rank sum test for SF (with the Durbin-Conover test for pairwise comparisons, Holm corrected) and the Wilcoxon signed-rank test for channels.
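Peak extraction over fixed latency windows (the authors used MATLAB's findpeaks plus visual inspection) can be approximated as below. This is a simplified sketch: it takes the windowed extremum rather than reproducing findpeaks' prominence logic, and the function names are ours.

```python
import numpy as np

def peak(erp, times, tmin, tmax, polarity=1):
    """Extremum of the given polarity (+1 positive, -1 negative) within
    [tmin, tmax] ms; returns (amplitude, latency)."""
    win = (times >= tmin) & (times <= tmax)
    i = np.argmax(erp[win] * polarity)
    return erp[win][i], times[win][i]

def p100_n170_peak_to_peak(erp, times):
    """P100 (positive, 60-140 ms) minus N170 (negative, 130-200 ms)."""
    p100, _ = peak(erp, times, 60, 140, polarity=1)
    n170, _ = peak(erp, times, 130, 200, polarity=-1)
    return p100 - n170
```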
Figure 2. Sensory responses. (A) Grand average ERPs for each equiprobable (eBSF in purple, eLSF in green, and eHSF in yellow) and deviant (dLSF in blue and dHSF in orange) condition over selected occipital (O1 and O2) and parieto-occipital (PO7 and PO8) electrodes. Dotted lines on O1 and O2 represent the latencies of the P100 scalp topographies; dotted lines on PO7 and PO8 represent the latencies of the N170 topographies (latest line) and the latencies of the P100 used in statistical analyses (earliest line). (B) Scalp topographies showing activity at the peak used for the P100 (on O1 and O2) and the N170 (on PO7 and PO8).
2.5.2. Cluster Based Statistics
To investigate the vMMN, cluster-based permutation tests (using ft_timelockstatistics with "Monte-Carlo" and cluster-based correction as parameters) were used to assess differences between conditions (dHSF vs. eHSF, dLSF vs. eLSF, and dHSF-eHSF vs. dLSF-eLSF) on the scalp EEG data. Samples were selected for clustering at a significance threshold of α = 0.05 using two-tailed paired t-tests over the 0–600 ms time window after stimulus onset on all electrodes. Significant samples were included in the clustering algorithm with the requirement of a minimum of two neighboring channels. Cluster-level statistics were then calculated by summing the t values within each cluster, and a Monte-Carlo procedure (1,000 permutations) was used for correction. The significance threshold for clusters was set to pcluster < 0.05.
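A simplified, single-channel version of this procedure (time-only clustering with sign-flip permutations, in the spirit of the FieldTrip implementation but without the spatial neighbor constraint) might look like the sketch below. The fixed t threshold stands in for the exact parametric threshold at α = 0.05.

```python
import numpy as np

def paired_t(diff):
    """Paired t statistic across subjects (axis 0) at each time point."""
    n = diff.shape[0]
    return diff.mean(axis=0) / (diff.std(axis=0, ddof=1) / np.sqrt(n))

def cluster_sums(tvals, thresh):
    """Sum t values over contiguous runs where |t| exceeds the threshold."""
    mask = np.abs(tvals) > thresh
    sums, start = [], None
    for i, m in enumerate(mask):
        if m and start is None:
            start = i
        elif not m and start is not None:
            sums.append(tvals[start:i].sum())
            start = None
    if start is not None:
        sums.append(tvals[start:].sum())
    return sums

def cluster_perm_test(a, b, n_perm=1000, t_thresh=2.04, seed=0):
    """Monte-Carlo cluster permutation test for two paired conditions of
    shape (subjects, timepoints). t_thresh approximates the two-tailed
    alpha = 0.05 critical t for ~30 subjects."""
    rng = np.random.default_rng(seed)
    diff = a - b
    observed = cluster_sums(paired_t(diff), t_thresh)
    null_max = np.empty(n_perm)
    for p in range(n_perm):
        # random sign flips of each subject's difference wave
        flips = rng.choice([-1.0, 1.0], size=(diff.shape[0], 1))
        cs = cluster_sums(paired_t(diff * flips), t_thresh)
        null_max[p] = max((abs(c) for c in cs), default=0.0)
    p_values = [float((null_max >= abs(c)).mean()) for c in observed]
    return observed, p_values
```

Comparing each observed cluster mass against the permutation distribution of the maximum cluster mass is what provides the multiple-comparison correction.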
3. Results
3.1. Behavioral Results
d′ values indicated good compliance with the task (mean d′ = 4.52 ± 0.83). Nevertheless, three participants had a high miss rate (between 25 and 40%). Their recordings were visually inspected to ensure that they processed the visual stimuli. P100 components were present in their recordings, suggesting basic face processing and compliance with the task. Consequently, they were included in the analyses.
3.2. Event Related Potentials
3.2.1. P100
Figure 2A shows grand average ERPs (in μV) at O1 and O2 in each condition, whereas Table 1 shows mean amplitudes (in z-score) and latencies. While there is a large positive peak (P100) around 117 ms in the LSF and BSF conditions, in the HSF condition it is observed on parieto-occipital but not on occipital electrodes. Instead, a negative peak occurring at 131 ms is observed in the HSF condition on occipital electrodes. Figure 2B shows differences in topographies in HSF compared to LSF and BSF that could explain the difference in ERPs. Whereas there is a large positivity over the occipital areas in LSF and BSF, it appears reduced in HSF and does not involve the most posterior areas. We also observe a frontal activation in LSF and BSF which is not observed in HSF.
Table 1. Mean amplitude (in z-score) and latencies (in milliseconds), with standard deviation (sd) underneath, for P100 and N170.
Analysis of P100 amplitudes on PO7 and PO8 revealed no significant effect of SF or hemisphere.
Analyses of P100 latencies on PO7 and PO8 revealed a significant effect of SF (p = 0.008, W = 0.14), P100 latencies being shorter for eHSF than for eLSF (p = 0.006). The effect of hemisphere (PO7 vs. PO8) on P100 latency was not significant.
3.2.2. N170
Topographies of the N170 show parieto-occipital activity (in μV) in the three SF conditions (Figure 2B). Mean amplitudes (in z-score) as well as latencies are reported in Table 1. Visual inspection (Figure 2A) of the ERPs on PO7 and PO8 confirms a large decrease of the amplitude following the P100, around 170 ms, corresponding to the N170, which appears more negative in the HSF condition.
Analyses of N170 peak-to-peak amplitudes (on PO7 and PO8) revealed an effect of SF (p < 0.001, W = 0.21). eHSF elicited a larger N170 than eBSF (p < 0.001) and than eLSF (p = 0.013), but there was no significant difference between eBSF and eLSF. The effect of hemisphere on N170 amplitude was not significant. Additionally, whereas there was no effect of SF on N170 latencies, the effect of hemisphere was significant [F(1, 33) = 5.51, p = 0.025], with the N170 appearing earlier over the right (PO8) than over the left hemisphere (PO7).
3.3. Cluster Based Statistics
3.3.1. Visual Inspection of Mismatch Response
Figure 3 represents the visual mismatch response in the HSF and LSF conditions. Visual inspection of the grand average mismatch response at centro-parietal (CPz) and lateral (P9) sites allows the identification of two peaks, around 183 and 407 ms respectively, especially in the HSF condition. The peaks are negative over parietal areas, corresponding to the vMMN, and positive over centro-parietal areas. In the LSF condition, the mismatch response appears smaller and more sustained (i.e., the peaks are less identifiable). Scalp topographies of the mismatch response at the two peak latencies show occipital and parieto-occipital negativity (vMMN) as well as a centro-parietal positivity, both more pronounced in the HSF than in the LSF condition.
Figure 3. Visual mismatch response in HSF and LSF conditions. (A) Cluster analyses showing statistical significance for each condition (HSF mismatch response on the left and LSF mismatch response on the right) over the entire scalp in the 0–600 ms latency range; grand average mismatch response (in μV) at CPz and P9 channels over the 0–600 ms latency range (in orange for HSF and in blue for LSF) with scalp topographies at the two peaks (183 and 407 ms). (B) Average waveforms (in σ) over the significant clusters' channels in the 0–600 ms latency range. Significant temporal windows are represented with a black line over each waveform. For the HSF condition, the deviant is in orange and the equiprobable in yellow. For the LSF condition, the deviant is in blue and the equiprobable in green. Scalp topographies at the peak activity of each cluster are represented beside the waveforms. Black dots indicate electrodes belonging to significant clusters. (C) Source activity (in σ) averaged over the corresponding cluster's time window, with MNI coordinates.
3.3.2. HSF Mismatch Response
Visual observations were confirmed by statistical analyses showing two significant positive peaks over centro-parietal areas and two significant negative peaks over occipital areas in the HSF condition (Figure 3). More precisely, the analysis revealed a first significantly increased amplitude in dHSF relative to eHSF over centro-parietal areas from 143 to 226 ms (pcluster1 = 0.03). Source reconstruction indicated that the difference in activity was generated in the right fusiform area (BA37). There was a second significantly increased amplitude in dHSF relative to eHSF over centro-parietal areas from 295 to 600 ms (pcluster2 = 0.002). Source reconstruction indicated that this difference in activity was related to a network including the right fusiform gyrus (BA37), the right anterior cingulate cortex (BA24), and the orbitofrontal cortex (BA11), extending through the right insula (BA13).
Additionally, we observed a significantly decreased amplitude in dHSF relative to eHSF over occipital areas from 154 to 311 ms (pcluster1 = 0.02). Source reconstruction associated with this difference indicated generators in the left occipital areas (BA19; in the extrastriate cortex). A second decreased amplitude in dHSF relative to eHSF was observed over occipital and fronto-parietal areas from 320 to 499 ms (pcluster2 = 0.002), with generators of the mismatch located in the primary somatosensory cortex (BA1).
3.3.3. LSF Mismatch Response
The visual observation of a more sustained activity in LSF was also confirmed by statistical analysis. Indeed, we found a significantly increased amplitude in dLSF relative to eLSF over centro-parietal areas from 149 to 492 ms (pcluster = 0.002). The mismatch was generated in the right anterior prefrontal cortex (BA10), but no activation was found in the fusiform gyrus for the LSF mismatch condition. Additionally, we did not find any cluster where the dLSF amplitude was significantly lower than the eLSF amplitude.
3.3.4. Contrast Between HSF and LSF Conditions
Analyses of the contrast between the mismatch responses (represented in Figure 4) revealed that the amplitude of dHSF-eHSF was significantly larger than the amplitude of dLSF-eLSF over centro-parietal areas from 320 to 433 ms (pcluster = 0.004), i.e., for the second peak only. Source reconstruction indicated that this difference was associated with a larger positive activity in the right fusiform gyrus (BA37). Additionally, the amplitude of dHSF-eHSF was significantly smaller than that of dLSF-eLSF over left fronto-parietal areas from 359 to 437 ms (pcluster = 0.02), again for the second peak only. Source reconstruction indicated that this difference was associated with a larger negative activity in the left middle occipital gyrus (BA39), in the visual association area (BA18), and in the frontal cortex (BA8).
Figure 4. Contrast between HSF and LSF mismatch responses. (A) Grand average mismatch responses (in μV) at P9, P10, and CPz elicited by HSF (orange) and LSF (blue) deviants compared to their equivalents in the equiprobable condition, and scalp topographies at the two peaks of activity. (B) Cluster analyses showing statistical significance for the contrast between HSF vMMN (in orange) and LSF vMMN (in blue) over the entire scalp in the 0–600 ms latency range. (C) Average waveforms (in σ) over the significant clusters' channels in the 0–600 ms latency range. Significant temporal windows are indicated by a black line over each waveform. Scalp topographies at the peak activity of each cluster are represented beside the waveforms. Black dots indicate electrodes belonging to significant clusters. (D) Source reconstruction for the significant clusters, with activity averaged over the corresponding time window, with MNI coordinates.
4. Discussion
In the present study, we investigated the involvement of LSF and HSF in predictive processes during automatic face processing. We used a controlled vMMN paradigm with unfiltered faces as standard stimuli and LSF- and HSF-filtered faces as deviants. The results showed that the vMMN was larger for HSF faces than for LSF faces, revealing a lower prediction error for LSF than for HSF. These results suggest a critical role of LSF in visual prediction during automatic face processing, in accordance with Bar's model (Bar et al., 2006). Our investigation of the sensory response at the early stages of face processing is also in line with coarse-to-fine processing and provides additional evidence on this subject (e.g., Halit et al., 2006; Hegdé, 2008; Vlamings et al., 2009; Goffaux et al., 2011; De Moraes et al., 2016; Petras et al., 2019, 2021).
4.1. The Predictive Role of LSF Supported by vMMN
Visual inspection of the mismatch ERPs at P9 and CPz, as well as of the topographies (Figure 3), revealed a biphasic response over occipital and parieto-occipital areas, but also over centro-parietal areas. This was particularly marked in the HSF condition. The biphasic response in the LSF condition was less clear, although the difference in activity appeared more sustained, as confirmed by cluster analyses.
In the HSF condition, a significantly more negative amplitude for deviant compared to equiprobable stimuli was found in two time windows over posterior areas. This corresponds to the vMMN and would reflect the prediction error elicited by HSF deviants occurring in a stream of expected standard stimuli. This biphasic response is consistent with previous studies investigating face-related vMMN (Astikainen and Hietanen, 2009; Kimura et al., 2012; Li et al., 2012; Kovarski et al., 2017), although other studies found more sustained activity, i.e., with less identifiable peaks (Kecskés-Kovács et al., 2013; Kreegipuu et al., 2013). This difference in activity might be related to the stimuli. Kovarski et al. (2017) showed that the two-step vMMN is elicited by both neutral and emotional stimuli, but that only emotional stimuli elicited sustained activity. More generally, the experiments by File et al. (2017) suggested that the pattern of the vMMN response varies according to the type of stimulus and the level of deviance. Source reconstruction revealed that the vMMN to HSF was associated with activity in the extrastriate cortex, which is highly consistent with previous findings on vMMN to faces (Kimura et al., 2012; Kovarski et al., 2021) or to other visual stimuli (e.g., Kimura et al., 2010; Urakawa et al., 2010; Susac et al., 2014). This suggests that the MMN is modality specific (the vMMN being elicited in visual areas, while the auditory MMN is elicited in the auditory cortex; Näätänen et al., 2007) and relatively low-level (Susac et al., 2014).
Additionally, in the HSF condition, a more positive amplitude for deviant compared to equiprobable stimuli was observed in two time windows in a large cluster of electrodes over centro-parietal areas. This positive activity elicited by deviants was found in other studies (Knight, 1997; Stefanics et al., 2012; Csukly et al., 2013; Kovarski et al., 2017) and is thought to reflect the involuntary capture of attention by the deviant stimulus, namely the P3a, with activity elicited around 300–500 ms (Knight, 1997). Sources of this mismatch were found in a wide range of brain regions, from the right fusiform to the prefrontal and anterior cingulate regions, including the insula. Generators in the temporal and limbic lobes were also described in other studies (Kimura et al., 2012; Li et al., 2012; Kovarski et al., 2021), as well as frontal activation (Kimura et al., 2010, 2012). The fusiform activity is in line with the preferential processing of faces (Kimura et al., 2012; Stefanics et al., 2012), especially in the right hemisphere, consistent with previous results on face vMMN (Kimura et al., 2012; Kovarski et al., 2021). Note that positive prefrontal activation is elicited in the second time window, whereas occipito-temporal activation is elicited from the first steps of the vMMN. Thus, Kimura et al. (2012) suggested that occipito-temporal changes might be related to prediction error signaling, while later frontal activation might underlie the updating of predictive models. However, this hypothesis remains to be tested and discussed within the hierarchical predictive coding model (Friston, 2005, 2010). This model posits hierarchical loops in which prediction errors run bottom-up and update predictions at higher levels, while top-down processes reduce prediction errors at lower levels (Friston, 2005; Garrido et al., 2009; Stefanics et al., 2014).
In the LSF condition, there was no significant time window in which the amplitude of the deviant was more negative than that of the stimulus presented in the equiprobable sequence. Nevertheless, the amplitude of the deviant was more positive than the equiprobable over centro-parietal areas in a large time window. This result shows that LSF deviants, despite looking more similar to BSF than HSF deviants do, are automatically detected as being different from BSF standards. However, the fact that LSF deviants did not elicit significantly more negative occipital activity compared to equiprobable stimuli (contrary to HSF deviants) might indicate that they did not lead to prediction errors similar to those elicited by HSF. In other words, the conflict between bottom-up sensory input and top-down predictions might be reduced in the LSF deviant condition. Again, this is in line with our hypotheses and with Bar's model, which emphasizes the role of LSF information in visual processing by triggering predictions that are then used by a top-down process to facilitate recognition (Bar et al., 2006). Interestingly, source analysis for the vMMN in LSF showed generators in the right anterior prefrontal cortex only, whereas generators were widespread in the HSF condition, including temporal areas. This corroborates a different mismatch response to LSF vs. HSF, as already suggested by Susac et al. (2014), who attributed this difference to the use of different processing streams for LSF and HSF, which is also in line with Bar's model.
Cluster analysis of the difference between dHSF-eHSF and dLSF-eLSF showed a significant difference only at the later stages. dHSF-eHSF elicited a larger positive response than dLSF-eLSF over centro-parietal areas. This difference was due to the activity of the right fusiform, which plays a crucial role in face processing. Additionally, dHSF-eHSF was more negative than dLSF-eLSF over left fronto-parietal areas, a difference related to the activity of occipital and frontal areas. Again, this suggests that change detection in faces might be driven by HSF, but more specifically at the later stages of processing, in line with the coarse-to-fine model of visual perception. Thus, while LSF would be needed in the early stages so that predictions can facilitate face processing, HSF would be more involved later, when detailed processing of faces is required.
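The mismatch responses discussed in this section are simple difference waves: each deviant ERP minus its control from the equiprobable sequence, with the between-condition contrast computed on those differences. A minimal sketch of this arithmetic, using hypothetical random arrays in place of real grand-average ERPs (the channel count, time base, and data are assumptions, not the study's recordings):

```python
import numpy as np

# Hypothetical grand-average ERPs, shape (n_channels, n_times):
# 64 channels, one sample per ms from 0 to 599 ms
rng = np.random.default_rng(0)
d_hsf, e_hsf, d_lsf, e_lsf = rng.normal(size=(4, 64, 600))
times = np.arange(600)  # ms

# Mismatch response = deviant minus its equiprobable control,
# computed per condition so low-level stimulus differences cancel out
mm_hsf = d_hsf - e_hsf  # dHSF-eHSF
mm_lsf = d_lsf - e_lsf  # dLSF-eLSF

# Between-condition contrast (the quantity compared across conditions)
contrast = mm_hsf - mm_lsf

# Average the contrast over one reported significant window (320-433 ms)
win = (times >= 320) & (times <= 433)
cluster_mean = contrast[:, win].mean(axis=1)  # one value per channel
```

In the study itself, the significance of such contrasts was assessed with cluster analyses over channels and time, which this sketch does not reproduce.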
4.2. Early Sensory Response
Visual inspection of the topographies and ERPs of early sensory responses revealed different P100 activity for HSF compared to LSF and BSF. The former exhibited bilateral activation over parieto-occipital areas (no significant difference in amplitude according to SF was found over these areas), while the latter stimuli also elicited responses over occipital sites, with a large positive peak. In the HSF condition, the grand-average ERP instead showed a large negativity at ≈130 ms. Interestingly, this large posterior negative response to HSF has been previously observed in several studies, usually peaking between 70 and 115 ms, especially in response to gratings (e.g., Kenemans et al., 2000; Ellemberg et al., 2002; Heslenfeld, 2003; Boeschoten et al., 2005) and to checkerboard stimuli (Kenemans et al., 2000). It has been suggested that the negative peak for HSF would reflect parvocellular activity, while the positive peak for LSF would reflect magnocellular activity (Ellemberg et al., 2002). Larger P100 amplitude for LSF relative to HSF was found for faces in a gender categorization task (Jeantet et al., 2019), in a passive viewing task (Obayashi et al., 2009), and in a valence categorization task during rapid serial visual presentation (Tian et al., 2018), all in adults. Nevertheless, results in the literature are contradictory. Craddock et al. (2013) found larger P100 amplitude for HSF than for LSF in a gender categorization task, but with a different filtering approach (i.e., attenuation of some frequency bands) and a smaller sample. Pourtois et al. (2005), in a gender categorization task, showed reduced P100 amplitude to filtered stimuli of fearful and neutral faces compared to unfiltered stimuli, but no significant difference between HSF and LSF. Variations in methodology (e.g., type of filtering, type of stimuli, contrast variation) might be responsible for such inconsistencies.
Moreover, the current study showed an advantage of HSF over LSF and BSF faces concerning P100 latency. This effect is surprising with regard to our theoretical framework, but the literature on this topic is heterogeneous. Indeed, studies have reported either shorter P100 latencies for LSF than HSF (Vlamings et al., 2009; Peters and Kemner, 2017), no difference (Jeantet et al., 2019, for parieto-occipital channels), or, similarly to our results, shorter latencies for HSF than LSF (Obayashi et al., 2009; Jeantet et al., 2019, for occipital channels). However, similar or shorter P100 latencies for HSF compared to LSF do not rule out Bar's model. Indeed, the rapid extraction of LSF to generate predictions does not exclude a parallel extraction of HSF, as suggested by other studies (Rotshtein et al., 2007; De Gardelle and Kouider, 2010). Different patterns of extraction could be related to the conscious or unconscious perception of the stimulus (De Gardelle and Kouider, 2010), but this would require further investigation. We can also hypothesize that, after extraction, LSF might be rapidly processed by the dorsal pathway (on the basis of myelinated magnocellular layers), while HSF might be processed more slowly by the ventral pathway (see Nowak and Bullier, 1997; Chen et al., 2007). It should be noted that parietal regions are not the only ones involved in the processing of LSF during face perception, as LSF processing has also been found in several other regions such as the middle occipital gyrus (Rotshtein et al., 2007), the fusiform face area, the occipital face area, the ventral lateral occipital complex (Goffaux et al., 2011), and subcortical areas (Vuilleumier et al., 2003; McFadyen et al., 2017). The differences in topographies might also partly explain the differences in P100 latencies between HSF and LSF.
Studies that investigated the sources of the P100 indicated that HSF and LSF ERPs might be generated in different areas of the visual cortex (Kenemans et al., 2000; Boeschoten et al., 2005). Their results show that LSF ERPs would have a neural orientation predominantly perpendicular to the scalp surface (presumably extra-striate), suggesting generators in the medial calcarine cortex or in V2. The orientation for HSF would be more parallel to the scalp surface (presumably a striate source), suggesting generators in the middle occipital gyri. These effects appear robust across the two types of stimuli (gratings and checkerboards). Hence, different sources according to SF could explain differences in P100 topographies as well as differences in latencies, because the use of different pathways for HSF and LSF (Skottun, 2015) can lead to different patterns of activation.
The N170, the face-specific ERP response, was observed over parieto-occipital areas in the three conditions. Statistical analyses showed no difference in latency between conditions, contrary to other studies reporting either faster latencies for LSF in a passive viewing task involving fearful and neutral faces (Peters and Kemner, 2017) or slower latencies for LSF in a gender categorization task (Jeantet et al., 2019). However, peak-to-peak amplitudes were greater for HSF than for LSF and BSF. This latter result corroborates other recent findings from passive viewing (Obayashi et al., 2009; Mares et al., 2018), categorization (Jeantet et al., 2019), and detection tasks (Tian et al., 2018), and is in line with coarse-to-fine integration for visual recognition and with Bar's model. In this framework, HSF would be preferentially used at later stages of visual recognition, for converging on a single percept. Thus, the strong activity elicited by HSF at the N170 could reflect the analysis of face details required for precise categorization of a face. Nevertheless, these results also differ from previous studies that found no effect of spatial frequency (Holmes et al., 2005) or larger amplitude for LSF than HSF during face processing (Goffaux et al., 2003; Pourtois et al., 2005; Halit et al., 2006). Inconsistencies across studies regarding differences in P100 or N170 amplitude and latency according to SF might be related to differences in methodology, either in the task and stimuli used or in the SF filtering choices, and need to be clarified by further investigations.
In sum, the P100 appears more sensitive to LSF, while later and more face-specific processing, reflected by the N170 component, appears more sensitive to HSF. This pattern is in line with the coarse-to-fine hypothesis of visual perception and, more specifically, with Bar's model. LSF would enable a global parsing of visual information at the early stages of visual processing, supporting predictions through a top-down process, whereas fine details conveyed by HSF would be integrated later, at a face-specific stage (Bar et al., 2006; Goffaux et al., 2011; Jeantet et al., 2019). This is in accordance with the vMMN results, as HSF deviants elicited greater activity than HSF equiprobable stimuli in the fusiform area between 143 and 226 ms. Additionally, the difference between HSF and LSF mismatch responses also showed the specific involvement of HSF at later stages (around 300–400 ms), again with enhanced fusiform activity. Hence, the face vMMN appears to be related to the processing of HSF in face areas at advanced stages of visual perception. Bar et al. (2006) hypothesized that predictions are made in the orbitofrontal cortex (OFC), as LSF elicited a higher signal than HSF in this region, particularly around 115 ms. Their analyses showed synchrony between occipital areas and the OFC beginning around 80 ms, and between the OFC and the fusiform gyrus around 130 ms. Interestingly, P100 topographies in the present study also suggest different activity in response to LSF vs. HSF in frontal areas, as scalp topographies showed activation of these areas around 117 ms for LSF and BSF, but not for HSF. This could also support Bar's model, even if further investigation of sources and connectivity would be needed.
4.3. Limitations and Perspectives
The study has some limitations. First, the stimuli used were neutral face stimuli. Further experiments are necessary to investigate whether the results extend to emotional faces, but also to other types of complex stimuli (e.g., objects, scenes).
Second, we based the filtering cutoffs on those used in the literature, i.e., <1.5 cpd for LSF and >6 cpd for HSF. Nonetheless, given the difference found between HSF and LSF, it could be of interest to explore vMMN variations with different filtering (in terms of cutoffs and of the type of filter used) to refine our understanding of the results. Indeed, the spatial filtering method employed in the current study presents several limitations (Perfetto et al., 2020). Removing some low-level information leads to less ecological stimuli. Additionally, the use of Gaussian filters implies that HSF-filtered images contain some amount of LSF (Perfetto et al., 2020), which can be a pitfall, as the distinct pathways might not be clearly distinguished. Nevertheless, this did not seem to affect our results, as the sensory response analysis showed that LSF, like BSF, led to a clear P100, while HSF led to an N100. Thus, the negative activity elicited by HSF faces might reflect the activity of the parvocellular pathway, and our results are in line with a clear differentiation of the two pathways (Ellemberg et al., 2002). However, another limitation is that the HSF and LSF face stimuli presented important perceptual differences, LSF being visually more similar to BSF than HSF, which could have led to more salient deviancy (as the filtering leads to a darker background in the HSF condition; Perfetto et al., 2020). It is also worth noting that equalizing the contrast has the advantage of reducing differences in spectral energy between HSF and LSF, but results in non-natural amplitude spectra (Petras et al., 2019). Even if the biases related to differences in stimulus characteristics raised in this section are limited here thanks to the controlled paradigm using an equiprobable sequence, other stimulus manipulations, such as the normalization procedure developed by Petras et al. (2019) or the use of reverted images of the complementary SF channel (Pourtois et al., 2005), could be used in future studies to further explore these issues. Additionally, this could help to investigate whether the differences between HSF and LSF in early sensory responses (P100 and N170) are due to differences in spatial frequency spectral power. Nonetheless, we have to keep in mind that each method has advantages and weaknesses, and no method makes it possible to control for the perceptual factors associated with HSF and LSF stimuli while keeping the stimuli identical to natural ones.
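The Gaussian spatial-frequency filtering discussed above can be illustrated with a short sketch. This is not the study's actual pipeline: the FFT-based Gaussian envelope, the square image, the assumed 8° image size, the random stand-in face, and treating the cutoff as the Gaussian's standard deviation are all simplifying assumptions. The sketch does, however, show why a Gaussian high-pass retains some LSF: it attenuates low frequencies rather than removing them outright.

```python
import numpy as np

def gaussian_sf_filter(img, cutoff_cpd, deg_per_image, kind="low"):
    """Gaussian low-pass (LSF) or high-pass (HSF) filter in the Fourier domain.

    img           : square grayscale image, shape (n, n)
    cutoff_cpd    : cutoff in cycles per degree of visual angle
    deg_per_image : visual angle subtended by the image, in degrees
    """
    n = img.shape[0]
    f = np.fft.fftfreq(n) * n                    # frequencies in cycles/image
    radius = np.sqrt(f[None, :] ** 2 + f[:, None] ** 2)
    # Convert cycles/degree to cycles/image; the cutoff is used as the
    # Gaussian's standard deviation (a simplifying assumption)
    sigma = cutoff_cpd * deg_per_image
    envelope = np.exp(-(radius ** 2) / (2 * sigma ** 2))  # Gaussian low-pass
    if kind == "high":
        envelope = 1.0 - envelope                # complementary high-pass
    return np.real(np.fft.ifft2(np.fft.fft2(img) * envelope))

# Illustrative cutoffs from the paper (<1.5 cpd LSF, >6 cpd HSF) on a
# random stand-in "face" assumed to subtend 8 degrees of visual angle
face = np.random.rand(256, 256)
lsf = gaussian_sf_filter(face, 1.5, 8.0, kind="low")
hsf = gaussian_sf_filter(face, 6.0, 8.0, kind="high")
```

Because the low- and high-pass envelopes at a given cutoff sum to one, they exactly partition the image; with different cutoffs, as here, the band between 1.5 and 6 cpd is attenuated in both outputs.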
Third, we used only one stimulus duration; further studies should investigate longer durations, in particular to compare the vMMN to LSF vs. HSF. For instance, while 150 ms may be sufficient to process LSF and HSF information at a pre-attentive level, it might be insufficient for encoding HSF in a perceptual representation (Gao and Bentin, 2011). As forming predictions involves memory processes, the impact of presentation time on the results should be further investigated. This could be done by increasing the presentation time to 500 ms (Gao and Bentin, 2011), but the viability of such long presentation times in an MMN paradigm should be explored first.
Finally, O'Reilly and O'Reilly (2021) argued that an equiprobable sequence is an insufficient control because of long-term adaptation. The authors added that counterbalanced blocks would confound adaptation effects rather than eliminate them (O'Reilly and Conway, 2021; O'Reilly and O'Reilly, 2021). Accordingly, current MMN paradigms, at least in the auditory domain, would not properly allow inferences about deviance detection and predictive coding, because of state changes that can affect sensory responses. Even if our results are in line with previous findings and fit well with the predictive coding framework of the MMN as well as with the predictive brain hypothesis, we cannot totally rule out that differences in sensory processing (see "Sensory Processing Theory" in O'Reilly and O'Reilly, 2021) or another MMN framework such as the adaptation model (e.g., May, 2021) might partly explain our results. Indeed, despite being intensively studied, the MMN still poses a number of interpretive challenges (May, 2021).
5. Conclusion
This study is the first to investigate the vMMN to spatially filtered faces and contributes to a better understanding of how HSF and LSF are involved in automatic face processing. Our results suggest a predictive role of LSF, in line with the predictive coding framework of perception (Rao and Ballard, 1999; Friston, 2005, 2010; Bar et al., 2006), followed by an involvement of HSF in face-specific processing.
Data Availability Statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics Statement
The studies involving human participants were reviewed and approved by Comite de Protection des Personnes Ile de France 1—IRB/IORG:IORG0009918, under agreement number 2019-A01145-52. The patients/participants provided their written informed consent to participate in this study.
Author Contributions
AL, MG, MM, KK, DA, and FD contributed to the conception and design of the study. AL, SH, and LV collected the data and performed data analysis. AL, KK, SH, MG, and MM interpreted the data. AL, KK, SH, MG, MM, LV, and FD contributed to the manuscript. All authors contributed to the article and approved the submitted version.
Funding
This study was supported by the French Ministry of Higher Education, Research and Innovation (France) to AL. This work has been partially supported by MIAI@Grenoble Alpes, (ANR-19-P3IA-0003) to MM. Data were acquired on a platform of France Life Imaging Network partly funded by the grant ANR-11-INBS-0006.
Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's Note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Acknowledgments
We thank all participants for their involvement in the study. We are grateful to Perrine Porte and Valentine Cuisinier for their precious help with data acquisition.
References
Astikainen, P., Cong, F., Ristaniemi, T., and Hietanen, J. (2013). Event-related potentials to unattended changes in facial expressions: detection of regularity violations or encoding of emotions? Front. Hum. Neurosci. 7, 557. doi: 10.3389/fnhum.2013.00557
Astikainen, P., and Hietanen, J. K. (2009). Event-related potentials to task-irrelevant changes in facial expressions. Behav. Brain Funct. 5, 30. doi: 10.1186/1744-9081-5-30
Bach, M. (1996). The Freiburg visual acuity test-automatic measurement of visual acuity. Optomet. Vis. Sci. 73, 49–53. doi: 10.1097/00006324-199601000-00008
Bachmann, T. (2016). Perception of Pixelated Images. San Diego, CA: Academic Press. doi: 10.1016/B978-0-12-809311-5.00003-9
Bar, M. (2007). The proactive brain: using analogies and associations to generate predictions. Trends Cogn. Sci. 11, 280–289. doi: 10.1016/j.tics.2007.05.005
Bar, M., Kassam, K. S., Ghuman, A. S., Boshyan, J., Schmid, A. M., Dale, A. M., et al. (2006). Top-down facilitation of visual recognition. Proc. Natl. Acad. Sci. U.S.A. 103, 449–454. doi: 10.1073/pnas.0507062103
Beffara, B., Wicker, B., Vermeulen, N., Ouellet, M., Bret, A., Molina, M. J. F., et al. (2015). Reduction of interference effect by low spatial frequency information priming in an emotional Stroop task. J. Vis. 15, 16. doi: 10.1167/15.6.16
Boeschoten, M. A., Kemner, C., Kenemans, J. L., and van Engeland, H. (2005). Time-varying differences in evoked potentials elicited by high versus low spatial frequencies: a topographical and source analysis. Clin. Neurophysiol. 116, 1956–1966. doi: 10.1016/j.clinph.2005.03.021
Cassidy, B., Wiley, R., Sim, M., and Hugenberg, K. (2021). Spatial frequency and valence interact in complex emotion perception. Cogn. Emot. 35, 1618–1625. doi: 10.1080/02699931.2021.1979474
Chen, B., Sun, P., and Fu, S. (2020). Consciousness modulates the automatic change detection of masked emotional faces: Evidence from visual mismatch negativity. Neuropsychologia 144:107459. doi: 10.1016/j.neuropsychologia.2020.107459
Chen, C.-M., Lakatos, P., Shah, A. S., Mehta, A. D., Givre, S. J., Javitt, D. C., et al. (2007). Functional anatomy and interaction of fast and slow visual pathways in macaque monkeys. Cereb. Cortex 17, 1561–1569. doi: 10.1093/cercor/bhl067
Claeys, K. G., Dupont, P., Cornette, L., Sunaert, S., Van Hecke, P., De Schutter, E., et al. (2004). Color discrimination involves ventral and dorsal stream visual areas. Cereb. Cortex 14, 803–822. doi: 10.1093/cercor/bhh040
Cleary, K., Donkers, F., Evans, A., and Belger, A. (2013). Investigating developmental changes in sensory processing: visual mismatch response in healthy children. Front. Hum. Neurosci. 7, 922. doi: 10.3389/fnhum.2013.00922
Cléry, H., Bonnet-Brilhault, F., Lenoir, P., Barthelemy, C., Bruneau, N., and Gomot, M. (2013). Atypical visual change processing in children with autism: an electrophysiological study. Psychophysiology 50, 240–252. doi: 10.1111/psyp.12006
Cléry, H., Roux, S., Houy-Durand, E., Bonnet-Brilhault, F., Bruneau, N., and Gomot, M. (2013). Electrophysiological evidence of atypical visual change detection in adults with autism. Front. Hum. Neurosci. 7, 62. doi: 10.3389/fnhum.2013.00062
Craddock, M., Martinovic, J., and Müller, M. M. (2013). Task and spatial frequency modulations of object processing: an EEG study. PLoS ONE 8, e70293. doi: 10.1371/journal.pone.0070293
Csukly, G., Stefanics, G., Komlósi, S., Czigler, I., and Czobor, P. (2013). Emotion-related visual mismatch responses in schizophrenia: impairments and correlations with emotion recognition. PLoS ONE 8, e75444. doi: 10.1371/journal.pone.0075444
Czigler, I., Balázs, L., and Winkler, I. (2002). Memory-based detection of task-irrelevant visual changes. Psychophysiology 39, 869–873. doi: 10.1111/1469-8986.3960869
Czigler, I., Weisz, J., and Winkler, I. (2006). ERPs and deviance detection: visual mismatch negativity to repeated visual stimuli. Neurosci. Lett. 401, 178–182. doi: 10.1016/j.neulet.2006.03.018
De Gardelle, V., and Kouider, S. (2010). How spatial frequencies and visual awareness interact during face processing. Psychol. Sci. 21, 58–66. doi: 10.1177/0956797609354064
De Moraes, R., Kauffmann, L., Fukusima, S., and Faubert, J. (2016). Behavioral evidence for a predominant and nonlateralized coarse-to-fine encoding for face categorization. Psychol. Neurosci. 9, 399–410. doi: 10.1037/pne0000065
Deruelle, C., Rondan, C., Gepner, B., and Tardif, C. (2004). Spatial frequency and face processing in children with autism and asperger syndrome. J. Autism Dev. Disord. 34, 199–210. doi: 10.1023/B:JADD.0000022610.09668.4c
Dube, B., Arnell, K., and Mondloch, C. (2014). Does attention to low spatial frequencies enhance face recognition? An individual differences approach. J. Vis. 14, 544–544. doi: 10.1167/14.10.544
Ellemberg, D., Hammarrenger, B., Lepore, F., Roy, M.-S., and Guillemot, J.-P. (2002). Contrast dependency of VEPs as a function of spatial frequency: the parvocellular and magnocellular contributions to human VEPs. Spat. Vis. 15, 99–111. doi: 10.1163/15685680152692042
File, D., File, B., Bodnár, F., Sulykos, I., Kecskés-Kovács, K., and Czigler, I. (2017). Visual mismatch negativity (vMMN) for low- and high-level deviances: a control study. Attent. Percept. Psychophys. 79, 2153–2170. doi: 10.3758/s13414-017-1373-y
Fitzgerald, K., and Todd, J. (2020). Making sense of mismatch negativity. Front. Psychiatry 11, 468. doi: 10.3389/fpsyt.2020.00468
Flynn, M., Liasis, A., Gardner, M., and Towell, T. (2016). Visual mismatch negativity to masked stimuli presented at very brief presentation rates. Exp. Brain Res. 235, 555–563. doi: 10.1007/s00221-016-4807-1
Friston, K. (2005). A theory of cortical responses. Philos. Trans. R. Soc. B Biol. Sci. 360, 815-836. doi: 10.1098/rstb.2005.1622
Friston, K. (2010). The free-energy principle: a unified brain theory? Nat. Rev. Neurosci. 11, 127–138. doi: 10.1038/nrn2787
Gao, Z., and Bentin, S. (2011). Coarse-to-fine encoding of spatial frequency information into visual short-term memory for faces but impartial decay. J. Exp. Psychol. 37, 1051. doi: 10.1037/a0023091
Garrido, M. I., Kilner, J. M., Stephan, K. E., and Friston, K. J. (2009). The mismatch negativity: a review of underlying mechanisms. Clin. Neurophysiol. 120, 453–463. doi: 10.1016/j.clinph.2008.11.029
Goffaux, V., Gauthier, I., and Rossion, B. (2003). Spatial scale contribution to early visual differences between face and object processing. Cogn. Brain Res. 16, 416–424. doi: 10.1016/S0926-6410(03)00056-9
Goffaux, V., Peters, J., Haubrechts, J., Schiltz, C., Jansma, B., and Goebel, R. (2011). From coarse to fine? Spatial and temporal dynamics of cortical face processing. Cereb. Cortex 21, 467–476. doi: 10.1093/cercor/bhq112
Goffaux, V., and Rossion, B. (2006). Faces are "spatial"–holistic face perception is supported by low spatial frequencies. J. Exp. Psychol. 32, 1023–1039. doi: 10.1037/0096-1523.32.4.1023
Gramfort, A., Papadopoulo, T., Olivi, E., and Clerc, M. (2010). Openmeeg: opensource software for quasistatic bioelectromagnetics. Biomed. Eng. Online 9, 1–20. doi: 10.1186/1475-925X-9-45
Grill-Spector, K., Henson, R., and Martin, A. (2006). Repetition and the brain: Neural models of stimulus-specific effects. Trends Cogn. Sci. 10, 14–23. doi: 10.1016/j.tics.2005.11.006
Halit, H., de Haan, M., Schyns, P. G., and Johnson, M. H. (2006). Is high-spatial frequency information used in the early stages of face detection? Brain Res. 1117, 154–161. doi: 10.1016/j.brainres.2006.07.059
Hegdé, J. (2008). Time course of visual perception: coarse-to-fine processing and beyond. Prog. Neurobiol. 84, 405–439. doi: 10.1016/j.pneurobio.2007.09.001
Heslenfeld, D. J. (2003). “Visual mismatch negativity,” in Detection of Change: Event-Related Potential and fMRI Findings, ed J. Polich (Boston, MA: Springer US), 41–59. doi: 10.1007/978-1-4615-0294-4_3
Holmes, A., Green, S., and Vuilleumier, P. (2005). The involvement of distinct visual channels in rapid attention towards fearful facial expressions. Cogn. Emot. 19, 899–922. doi: 10.1080/02699930441000454
Jeantet, C., Caharel, S., Schwan, R., Lighezzolo-Alnot, J., and Laprevote, V. (2018). Factors influencing spatial frequency extraction in faces: a review. Neurosci. Biobehav. Rev. 93, 123–138. doi: 10.1016/j.neubiorev.2018.03.006
Jeantet, C., Laprevote, V., Schwan, R., Schwitzer, T., Maillard, L., Lighezzolo-Alnot, J., et al. (2019). Time course of spatial frequency integration in face perception: an ERP study. Int. J. Psychophysiol. 143, 105. doi: 10.1016/j.ijpsycho.2019.07.001
Jennings, B. J., Yu, Y., and Kingdom, F. A. A. (2017). The role of spatial frequency in emotional face classification. Attent. Percept. Psychophys. 79, 1573–1577. doi: 10.3758/s13414-017-1377-7
Kecskés-Kovács, K., Sulykos, I., and Czigler, I. (2013). Is it a face of a woman or a man? Visual mismatch negativity is sensitive to gender category. Front. Hum. Neurosci. 7, 532. doi: 10.3389/fnhum.2013.00532
Kenemans, J. L., Baas, J. M. P., Mangun, G. R., Lijffijt, M., and Verbaten, M. N. (2000). On the processing of spatial frequencies as revealed by evoked-potential source modeling. Clin. Neurophysiol. 111, 1113–1123. doi: 10.1016/S1388-2457(00)00270-4
Kimura, M., Kondo, H., Ohira, H., and Schröger, E. (2012). Unintentional temporal context-based prediction of emotional faces: an electrophysiological study. Cereb. Cortex 22, 1774–1785. doi: 10.1093/cercor/bhr244
Kimura, M., Ohira, H., and Schröger, E. (2010). Localizing sensory and cognitive systems for pre-attentive visual deviance detection: an sLORETA analysis of the data of Kimura et al. (2009). Neurosci. Lett. 485, 198–203. doi: 10.1016/j.neulet.2010.09.011
Knight, R. T. (1997). Distributed cortical network for visual attention. J. Cogn. Neurosci. 9, 75–91. doi: 10.1162/jocn.1997.9.1.75
Kovarski, K., Batty, M., and Taylor, M. J. (2019). “Visual responses to implicit emotional faces,” in Encyclopedia of Autism Spectrum Disorders, ed F. R. Volkmar (New York, NY: Springer), 1–3. doi: 10.1007/978-1-4614-6435-8_102334-1
Kovarski, K., Charpentier, J., Roux, S., Batty, M., Houy-Durand, E., and Gomot, M. (2021). Emotional visual mismatch negativity: a joint investigation of social and non-social dimensions in adults with autism. Transl. Psychiatry 11, 10. doi: 10.1038/s41398-020-01133-5
Kovarski, K., Latinus, M., Charpentier, J., Cléry, H., Roux, S., Houy-Durand, E., et al. (2017). Facial expression related vMMN: disentangling emotional from neutral change detection. Front. Hum. Neurosci. 11, 18. doi: 10.3389/fnhum.2017.00018
Kreegipuu, K., Kuldkepp, N., Sibolt, O., Toom, M., Allik, J., and Näätänen, R. (2013). vMMN for schematic faces: automatic detection of change in emotional expression. Front. Hum. Neurosci. 7, 714. doi: 10.3389/fnhum.2013.00714
Kuldkepp, N., Kreegipuu, K., Raidvee, A., Näätänen, R., and Allik, J. (2013). Unattended and attended visual change detection of motion as indexed by event-related potentials and its behavioral correlates. Front. Hum. Neurosci. 7, 476. doi: 10.3389/fnhum.2013.00476
Kumar, D., and Srinivasan, N. (2011). Emotion perception is mediated by spatial frequency content. Emotion 11, 1144–1151. doi: 10.1037/a0025453
Kveraga, K., Boshyan, J., and Bar, M. (2007). Magnocellular projections as the trigger of top-down facilitation in recognition. J. Neurosci. 27, 13232–13240. doi: 10.1523/JNEUROSCI.3481-07.2007
Lacroix, A., Nalborczyk, L., Dutheil, F., Kovarski, K., Chokron, S., Garrido, M., et al. (2021). High spatial frequency filtered primes hastens happy faces categorization in autistic adults. Brain Cogn. 155, 105811. doi: 10.1016/j.bandc.2021.105811
Langner, O., Becker, E. S., Rinck, M., and van Knippenberg, A. (2015). Socially anxious individuals discriminate better between angry and neutral faces, particularly when using low spatial frequency information. J. Behav. Ther. Exp. Psychiatry 46, 44–49. doi: 10.1016/j.jbtep.2014.06.008
Lenth, R. V. (2021). Emmeans: Estimated Marginal Means, aka Least-Squares Means. R package version 1.5.4. Available online at: https://CRAN.R-project.org/package=emmeans
Li, X., Lu, Y., Sun, G., Gao, L., and Zhao, L. (2012). Visual mismatch negativity elicited by facial expressions: new evidence from the equiprobable paradigm. Behav. Brain Funct. 8, 7. doi: 10.1186/1744-9081-8-7
Liu, T., and Shi, J. (2008). Event-related potentials during preattentional processing of color stimuli. NeuroReport 19, 1221–1225. doi: 10.1097/WNR.0b013e328309a0dd
Makowski, D. (2018). The psycho package: an efficient and publishing-oriented workflow for psychological science. J. Open Source Softw. 3, 470. doi: 10.21105/joss.00470
Male, A. G., O'Shea, R. P., Schröger, E., Müller, D., Roeber, U., and Widmann, A. (2020). The quest for the genuine visual mismatch negativity (vMMN): event-related potential indications of deviance detection for low-level visual features. Psychophysiology 57, e13576. doi: 10.1111/psyp.13576
Mares, I., Smith, M., Johnson, M., and Senju, A. (2018). Revealing the neural time-course of direct gaze processing via spatial frequency manipulation of faces. Biol. Psychol. 135, 76–83. doi: 10.1016/j.biopsycho.2018.03.001
May, P. J. (2021). The adaptation model offers a challenge for the predictive coding account of mismatch negativity. Front. Hum. Neurosci. 15, 721574. doi: 10.3389/fnhum.2021.721574
McFadyen, J., Mermillod, M., Mattingley, J. B., Halász, V., and Garrido, M. I. (2017). A rapid subcortical amygdala route for faces irrespective of spatial frequency and emotion. J. Neurosci. 37, 3864–3874. doi: 10.1523/JNEUROSCI.3525-16.2017
Mermillod, M., Bonin, P., Mondillon, L., Alleysson, D., and Vermeulen, N. (2010). Coarse scales are sufficient for efficient categorization of emotional facial expressions: evidence from neural computation. Neurocomputing 73, 2522–2531. doi: 10.1016/j.neucom.2010.06.002
Mermillod, M., Guyader, N., Vuilleumier, P., Alleysson, D., and Marendaz, C. (2005). “How diagnostic are spatial frequencies for fear recognition?,” in 4th Annual Summer Interdisciplinary Conference (ASIC 2005) (Briançon).
Morrison, D. J., and Schyns, P. G. (2001). Usage of spatial scales for the categorization of faces, objects, and scenes. Psychon. Bull. Rev. 8, 454–469. doi: 10.3758/BF03196180
Näätänen, R., Gaillard, A. W., and Mäntysalo, S. (1978). Early selective-attention effect on evoked potential reinterpreted. Acta Psychol. 42, 313–329. doi: 10.1016/0001-6918(78)90006-9
Näätänen, R., Paavilainen, P., Rinne, T., and Alho, K. (2007). The mismatch negativity (MMN) in basic research of central auditory processing: a review. Clin. Neurophysiol. 118, 2544–2590. doi: 10.1016/j.clinph.2007.04.026
Nakashima, T., Kaneko, K., Goto, Y., Abe, T., Mitsudo, T., Ogata, K., et al. (2008). Early ERP components differentially extract facial features: evidence for spatial frequency-and-contrast detectors. Neurosci. Res. 62, 225–235. doi: 10.1016/j.neures.2008.08.009
Nowak, L. G., and Bullier, J. (1997). “The timing of information transfer in the visual system,” in Extrastriate Cortex in Primates (Boston, MA: Springer), 205–241. doi: 10.1007/978-1-4757-9625-4_5
Obayashi, C., Nakashima, T., Onitsuka, T., Maekawa, T., Hirano, Y., Hirano, S., et al. (2009). Decreased spatial frequency sensitivities for processing faces in male patients with chronic schizophrenia. Clin. Neurophysiol. 120, 1525–1533. doi: 10.1016/j.clinph.2009.06.016
O'Reilly, J. A., and Conway, B. A. (2021). Classical and controlled auditory mismatch responses to multiple physical deviances in anaesthetised and conscious mice. Eur. J. Neurosci. 53, 1839–1854. doi: 10.1111/ejn.15072
O'Reilly, J. A., and O'Reilly, A. (2021). A critical review of the deviance detection theory of mismatch negativity. NeuroSci 2, 151–165. doi: 10.3390/neurosci2020011
Palermo, R., and Rhodes, G. (2010). “Is face processing automatic?,” in Tutorials in Visual Cognition, ed V. Coltheart (Psychology Press), 305–336.
Park, G., Van Bavel, J. J., Vasey, M. W., Egan, E. J., and Thayer, J. F. (2012). From the heart to the mind's eye: cardiac vagal tone is related to visual perception of fearful faces at high spatial frequency. Biol. Psychol. 90, 171–178. doi: 10.1016/j.biopsycho.2012.02.012
Perfetto, S., Wilder, J., and Walther, D. B. (2020). Effects of spatial frequency filtering choices on the perception of filtered images. Vision 4, 29. doi: 10.3390/vision4020029
Peters, J. C., and Kemner, C. (2017). Facial expressions perceived by the adolescent brain: towards the proficient use of low spatial frequency information. Biol. Psychol. 129, 1–7. doi: 10.1016/j.biopsycho.2017.07.022
Petras, K., Ten Oever, S., Dalal, S. S., and Goffaux, V. (2021). Information redundancy across spatial scales modulates early visual cortical processing. NeuroImage 244, 118613. doi: 10.1016/j.neuroimage.2021.118613
Petras, K., Ten Oever, S., Jacobs, C., and Goffaux, V. (2019). Coarse-to-fine information integration in human vision. NeuroImage 186, 103–112. doi: 10.1016/j.neuroimage.2018.10.086
Pourtois, G., Dan, E. S., Grandjean, D., Sander, D., and Vuilleumier, P. (2005). Enhanced extrastriate visual response to bandpass spatial frequency filtered fearful faces: time course and topographic evoked-potentials mapping. Hum. Brain Mapp. 26, 65–79. doi: 10.1002/hbm.20130
R Core Team (2020). R: A Language and Environment for Statistical Computing. Vienna: R Foundation for Statistical Computing.
Rao, R. P. N., and Ballard, D. H. (1999). Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nat. Neurosci. 2, 79–87. doi: 10.1038/4580
Rossion, B., Retter, T. L., and Liu-Shuang, J. (2020). Understanding human individuation of unfamiliar faces with oddball fast periodic visual stimulation and electroencephalography. Eur. J. Neurosci. 52, 4283–4344. doi: 10.1111/ejn.14865
Rotshtein, P., Vuilleumier, P., Winston, J., Driver, J., and Dolan, R. (2007). Distinct and convergent visual processing of high and low spatial frequency information in faces. Cereb. Cortex 17, 2713–2724. doi: 10.1093/cercor/bhl180
Rowe, E. G., Tsuchiya, N., and Garrido, M. I. (2020). Detecting (un)seen change: the neural underpinnings of (un)conscious prediction errors. Front. Syst. Neurosci. 14, 541670. doi: 10.3389/fnsys.2020.541670
Ruiz-Soler, M., and Beltran, F. S. (2006). Face perception: an integrative review of the role of spatial frequencies. Psychol. Res. 70, 273–292. doi: 10.1007/s00426-005-0215-z
Schmitt, C., Klingenhoefer, S., and Bremmer, F. (2018). Preattentive and predictive processing of visual motion. Sci. Rep. 8, 12399. doi: 10.1038/s41598-018-30832-9
Schyns, P. G., and Oliva, A. (1999). Dr. Angry and Mr. Smile: when categorization flexibly modifies the perception of faces in rapid visual presentations. Cognition 69, 243–265. doi: 10.1016/S0010-0277(98)00069-9
Shankland, R., Favre, P., Kotsou, I., and Mermillod, M. (2021). Mindfulness and de-automatization: effect of mindfulness-based interventions on emotional facial expressions processing. Mindfulness 12, 226–239. doi: 10.1007/s12671-020-01515-2
Singmann, H., Bolker, B., Westfall, J., Aust, F., and Ben-Shachar, M. S. (2021). afex: Analysis of Factorial Experiments. R package. Available online at: https://CRAN.R-project.org/package=afex
Skottun, B. C. (2015). On the use of spatial frequency to isolate contributions from the magnocellular and parvocellular systems and the dorsal and ventral cortical streams. Neurosci. Biobehav. Rev. 56, 266–275. doi: 10.1016/j.neubiorev.2015.07.002
Smith, M. L., and Merlusca, C. (2014). How task shapes the use of information during facial expression categorizations. Emotion 14, 478–487. doi: 10.1037/a0035588
Stefanics, G., Astikainen, P., and Czigler, I. (2015). Visual mismatch negativity (vMMN): a prediction error signal in the visual modality. Front. Hum. Neurosci. 8, 1074. doi: 10.3389/fnhum.2014.01074
Stefanics, G., Csukly, G., Komlósi, S., Czobor, P., and Czigler, I. (2012). Processing of unattended facial emotions: a visual mismatch negativity study. NeuroImage 59, 3042–3049. doi: 10.1016/j.neuroimage.2011.10.041
Stefanics, G., Kremláček, J., and Czigler, I. (2014). Visual mismatch negativity: a predictive coding view. Front. Hum. Neurosci. 8, 666. doi: 10.3389/fnhum.2014.00666
Sulykos, I., and Czigler, I. (2011). One plus one is less than two: visual features elicit non-additive mismatch-related brain activity. Brain Res. 1398, 64–71. doi: 10.1016/j.brainres.2011.05.009
Susac, A., Heslenfeld, D. J., Huonker, R., and Supek, S. (2014). Magnetic source localization of early visual mismatch response. Brain Topogr. 27, 648–651. doi: 10.1007/s10548-013-0340-8
Tadel, F., Baillet, S., Mosher, J. C., Pantazis, D., and Leahy, R. M. (2011). Brainstorm: a user-friendly application for MEG/EEG analysis. Comput. Intell. Neurosci. 2011, 879716. doi: 10.1155/2011/879716
Tales, A., Newton, P., Troscianko, T., and Butler, S. (1999). Mismatch negativity in the visual modality. NeuroReport 10, 3363–3367. doi: 10.1097/00001756-199911080-00020
Tian, J., Wang, J., Xia, T., Zhao, W., Xu, Q., and He, W. (2018). The influence of spatial frequency content on facial expression processing: an ERP study using rapid serial visual presentation. Sci. Rep. 8, 1–8. doi: 10.1038/s41598-018-20467-1
Urakawa, T., Inui, K., Yamashiro, K., and Kakigi, R. (2010). Cortical dynamics of the visual change detection process. Psychophysiology 47, 905–912. doi: 10.1111/j.1469-8986.2010.00987.x
Uusitalo, M. A., and Ilmoniemi, R. J. (1997). Signal-space projection method for separating MEG or EEG into components. Med. Biol. Eng. Comput. 35, 135–140. doi: 10.1007/BF02534144
Vlamings, P. H., Goffaux, V., and Kemner, C. (2009). Is the early modulation of brain activity by fearful facial expressions primarily mediated by coarse low spatial frequency information? J. Vis. 9, 12. doi: 10.1167/9.5.12
Vuilleumier, P., Armony, J. L., Driver, J., and Dolan, R. J. (2003). Distinct spatial frequency sensitivities for processing faces and emotional expressions. Nat. Neurosci. 6, 624–631. doi: 10.1038/nn1057
Wang, S., Eccleston, C., and Keogh, E. (2015). The role of spatial frequency information in the recognition of facial expressions of pain. Pain 156, 1670–1682. doi: 10.1097/j.pain.0000000000000226
Wang, S., Eccleston, C., and Keogh, E. (2021). The time course of facial expression recognition using spatial frequency information: comparing pain and core emotions. J. Pain 22, 196–208. doi: 10.1016/j.jpain.2020.07.004
Yan, T., Feng, Y., Liu, T., Wang, L., Mu, N., Dong, X., et al. (2017). Theta oscillations related to orientation recognition in unattended condition: a vMMN study. Front. Behav. Neurosci. 11, 166. doi: 10.3389/fnbeh.2017.00166
Zhang, J., Shan, S., Kan, M., and Chen, X. (2014). “Coarse-to-fine auto-encoder networks (CFAN) for real-time face alignment,” in European Conference on Computer Vision (Zurich: Springer), 1–16. doi: 10.1007/978-3-319-10605-2_1
Zhang, S., Wang, H., and Guo, Q. (2018). Sex and physiological cycles affect the automatic perception of attractive opposite-sex faces: a visual mismatch negativity study. Evol. Psychol. 16, 1474704918812140. doi: 10.1177/1474704918812140
Keywords: vMMN, spatial frequencies, face processing, predictive coding, prediction error, automatic visual processing
Citation: Lacroix A, Harquel S, Mermillod M, Vercueil L, Alleysson D, Dutheil F, Kovarski K and Gomot M (2022) The Predictive Role of Low Spatial Frequencies in Automatic Face Processing: A Visual Mismatch Negativity Investigation. Front. Hum. Neurosci. 16:838454. doi: 10.3389/fnhum.2022.838454
Received: 17 December 2021; Accepted: 11 February 2022;
Published: 11 March 2022.
Edited by:
Kairi Kreegipuu, University of Tartu, Estonia
Reviewed by:
Talis Bachmann, University of Tartu, Estonia
Chenyi Chen, Taipei Medical University, Taiwan
Copyright © 2022 Lacroix, Harquel, Mermillod, Vercueil, Alleysson, Dutheil, Kovarski and Gomot. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Adeline Lacroix, adeline.lacroix@univ-grenoble-alpes.fr