- 1 Division Neuropsychology, Department of Psychology, University of Zurich, Zurich, Switzerland
- 2 University Research Priority Program (URPP), Dynamics of Healthy Aging, University of Zurich, Zurich, Switzerland
Pitch labeling in absolute pitch (AP), the ability to recognize the pitch class of a sound without an external reference, is effortless, fast, and presumably automatic. Previous studies have shown that pitch labeling in AP can interfere with task demands. In the current study, we used a cued auditory Go/Nogo task requiring same/different decisions to investigate both behavioral and electrophysiological correlates of increased inhibitory demands related to automatic pitch labeling. The task comprised two Nogo conditions: a Nogo condition with pitch differences of at least one semitone, and a second Nogo condition with pitch differences of only a quarter semitone. The first Nogo condition tested if auditory-related inhibition processes are generally altered in AP musicians. The second Nogo condition tested the suppressibility of pitch labeling using a Stroop-like effect: the two tones belonged to the same pitch class but were not identical in terms of tone frequency. If pitch labeling cannot be suppressed, the conflicting information would be expected to increase the inhibitory load in AP musicians. Our data provided no evidence for an increased difficulty to inhibit a prepotent response or to suppress conflicting pitch-labeling information in AP: AP musicians showed similar commission error rates as non-AP musicians in both Nogo conditions. N2d and P3d amplitudes of AP musicians were also comparable to those of non-AP musicians. The event-related potentials (ERPs) were, however, modulated by the Nogo condition, probably indicating an effect of stimulus similarity. It is possible that, depending on the context, pitch labeling in AP musicians is not entirely automatic and can be suppressed.
Introduction
Pitch is one of the main perceptual properties of musical tones. Most people perceive pitch not in absolute but rather in relative terms, i.e., they register whether a pitch is higher or lower compared to a previous pitch. Professional musicians are further trained to determine the exact amount of the relative difference between two pitches in terms of musical intervals. Using this so-called relative-pitch (RP) ability, most musicians can reconstruct the pitch of a tone when presented with a reference tone. Only about 0.01% of the general population (Bachem, 1955; Profita and Bidder, 1988; Takeuchi and Hulse, 1993) and about 4–15% of musicians (Baharloo et al., 1998; Gregersen et al., 1999, 2001; Leite et al., 2016) possess the unique ability to recognize the pitch class of a tone or to produce a specific pitch without the aid of a reference tone. This ability is referred to as absolute pitch (AP; Deutsch, 2013).
Pitch identification in AP is fast and effortless (Miyazaki, 1990; Deutsch, 2013), and is even presumed to be automatic (Levitin and Rogers, 2005). The extent of this automaticity has been studied primarily using auditory Stroop tasks (Miyazaki, 2004; Itoh et al., 2005; Hsieh and Saberi, 2008; Akiva-Kabiri and Henik, 2012; Schulze et al., 2013). Originally, the Stroop effect (Stroop, 1935) describes the phenomenon that naming the ink color of a semantically incongruent color word (e.g., the word “RED” printed in the color blue) is slower than naming the ink color of solid-color squares. By contrast, the latency for reading the words printed in color is not reliably increased compared to the same words printed in black. The more automatic process (i.e., reading) impedes the less automatic process (i.e., color naming) but not vice versa. Stroop tasks (for an overview, see MacLeod, 1991) use this asymmetrical effect to assess the ability to inhibit cognitive interference. In AP research, auditory analogs of the Stroop task typically consist of trials where the pitch of a tone is either congruent or incongruent with the sung tone label (integrated stimuli; e.g., Itoh et al., 2005) or a visual note presented simultaneously (non-integrated stimuli; e.g., Akiva-Kabiri and Henik, 2012). In incongruent trials, AP musicians consistently show increased response times for label/note naming compared to congruent trials. Pitch labeling, on the other hand, seems to be less affected by incongruence (Akiva-Kabiri and Henik, 2012). Furthermore, studies have shown that AP musicians perform worse than non-AP musicians in interval identification when given an out-of-tune context (Miyazaki, 1992, 1993) and in recognition of transposed atonal melodies (Miyazaki and Rakowski, 2002). As Dooley and Deutsch (2010, 2011) pointed out, these findings may reflect Stroop-like interference effects rather than a general disadvantage in relative-pitch tasks. Taken together, this suggests that the pitch-labeling process in AP is highly automatic and difficult to suppress.
At a more general level, several neurophysiological studies reported that AP musicians showed increased activity in different brain areas (e.g., in the auditory cortex, the planum temporale, the inferior frontal gyrus, and the DLPFC) during acoustic stimulation compared to non-AP musicians even when not instructed to perform pitch labeling (Zatorre et al., 1998; Ohnishi et al., 2001; Wu et al., 2008; Wengenroth et al., 2014; Burkhard et al., 2019; Leipold et al., 2019a). These findings indicate that tone processing in AP musicians is generally altered and that at least some AP-specific processes might be automatically triggered by musical tones. This assumption received further support from a recent decoding study that found a greater representational similarity in electrophysiological activity between a pure listening task and a labeling task in AP musicians compared to non-AP musicians (Leipold et al., 2019b).
The current study aimed to further explore the automaticity and suppressibility of pitch labeling in AP by examining both electrophysiological and behavioral correlates of another prominent psychological paradigm: the Go/Nogo task. Like Stroop tasks, Go/Nogo tasks are widely used to evaluate executive functions, particularly the capacity for inhibitory control. Typically, participants are instructed to press a button as quickly as possible whenever a target stimulus appears within a series of stimuli (Go) and to withhold the button press when a non-target stimulus appears (Nogo). A prominent advantage of Go/Nogo tasks is that the cognitive processes can be evaluated by both behavioral and well-established electrophysiological measures. The main behavioral measures are failures to inhibit a prepared motor response in Nogo trials (called commission errors or false alarms), failures to respond to the target in Go trials (called omission errors or misses), and response times in Go trials. The main electrophysiological measures are two event-related potential (ERP) components associated with reactive cognitive control: the Nogo-N2 and the Nogo-P3. The Nogo-N2, a negative deflection at frontal-midline sites, peaks around 200–400 ms after stimulus onset. The Nogo-P3, the subsequent frontocentral positive shift, is at its maximum about 300–600 ms after stimulus onset (e.g., Pfefferbaum et al., 1985; Pfefferbaum and Ford, 1988; Jodo and Kayama, 1992; Falkenstein et al., 1995, 1999; Bokura et al., 2001; Folstein and Van Petten, 2008; Gajewski and Falkenstein, 2013). Both the N2 and the P3 are usually evaluated by subtracting the Go ERP from the Nogo ERP (Nogo minus Go). In the following, we will refer to the N2 and P3 of the resulting difference wave as N2d and P3d. The exact cognitive subprocesses of response inhibition reflected by the N2 and P3 have been controversially discussed (for a review, see Huster et al., 2013). While some studies associated the N2 with pre-motor inhibitory processes (Jodo and Kayama, 1992; Falkenstein et al., 1999; Bokura et al., 2001; Gajewski and Falkenstein, 2013), other research indicates that the N2 reflects response activation (Bruin et al., 2001) or conflict monitoring (Nieuwenhuis et al., 2003; Donkers and Van Boxtel, 2004; Yeung et al., 2004; Enriquez-Geppert et al., 2010; Kropotov et al., 2017). The P3 has been suggested to mirror inhibitory processes or the evaluation of successful inhibition (Bokura et al., 2001; Bruin and Wijers, 2002; Donkers and Van Boxtel, 2004; Smith et al., 2008; Enriquez-Geppert et al., 2010; Albert et al., 2013; Kropotov et al., 2017).
In the current study, 54 AP and 51 non-AP musicians completed a cued (two-stimulus) Go/Nogo task with acoustic stimuli (i.e., piano tones and environmental sounds). The cue (i.e., a piano tone) was used to establish a prepotent tendency to respond. A button press was required whenever two identical piano tones were presented in succession (Go condition). In trials where two non-identical piano tones were presented, the button press had to be withheld (Nogo condition). Two variations of the Nogo condition were included. In the first Nogo condition, the two presented piano tones differed by at least one semitone (100–700 cents). In the second Nogo condition, the two piano tones differed by only a quarter semitone (25 cents). Using these two Nogo conditions allowed us to study different aspects of pitch processing in AP: (1) inhibition of a possibly stronger neurophysiological activation induced by tones; and (2) suppressibility of pitch labeling. As described above, acoustic stimulation elicits strong neurophysiological activation in AP musicians. This, in turn, might influence subsequent cognitive processes and their respective neurophysiological correlates. The first Nogo condition was used to test whether the generally altered tone processing affects the subsequent inhibitory processes in AP musicians. The second Nogo condition, on the other hand, might generate a Stroop-like effect: the two piano tones, although slightly different in tone frequency, belonged to the same pitch category and should, therefore, evoke the same pitch label in AP musicians. It has been shown before that AP musicians categorize mistuned tones in their nominal categories (e.g., a mistuned C will still be identified as C; Levitin and Rogers, 2005). If pitch labeling is difficult to suppress, AP musicians are expected to show some signs of increased inhibitory load due to the conflicting information, such as more commission errors and/or larger N2d/P3d amplitudes than non-AP musicians. Also, it has been suggested that AP musicians may have an aversion towards mistuned tones (Levitin and Rogers, 2005; Rogenmoser et al., 2020). This could increase the inhibitory load even further.
Finally, we also included a behavioral audio-visual Stroop task to confirm the presence of an incongruence effect as reported in previous studies in our sample of AP participants.
Materials and Methods
Participants
All 105 participants were recruited within a larger research project investigating the neural correlates of AP (Greber et al., 2018, 2020; Brauchli et al., 2019, 2020; Burkhard et al., 2019, 2020; Leipold et al., 2019a,b,c) and were professional musicians, music students, or highly trained amateur musicians. In total, 54 musicians with AP and 51 musicians without AP participated in this study. The age of the participants ranged from 18 to 44 years. Participants were assigned to one of the two groups based on self-report in the initial online application form. This assignment was validated by an online pitch-labeling task (described below). Applicants who had self-identified as AP possessors but scored around or below the chance level of 8.3% in the pitch-labeling task were not invited to participate in the study. Participants who had self-identified as non-AP possessors (a report confirmed again in the laboratory) but nonetheless achieved high scores in the pitch-labeling task were neither excluded from the study nor reassigned to the AP group.
Before being invited to the electroencephalography (EEG) recording, participants also filled out an online questionnaire assessing demographical information and musical experience. Based on these data, the two groups were matched for sex, age, handedness, age of onset of musical training, and cumulative hours of musical training over the lifespan.
None of the participants reported any neurological, severe psychiatric, or audiological disorders. We confirmed normal hearing thresholds in all participants using pure-tone audiometry (MAICO ST 20, MAICO Diagnostics GmbH, Berlin) and validated self-reported handedness using a German translation of the Annett Handedness Questionnaire (Annett, 1970). Crystallized intelligence was evaluated with the Mehrfachwahl-Wortschatz-Intelligenztest (MWT-B; Lehrl, 2005), and fluid intelligence was evaluated with the Kurztest für Allgemeine Basisgrössen der Informationsverarbeitung (KAI; Lehrl and Fischer, 1992). Musical aptitude was estimated using the Advanced Measures of Music Audiation (AMMA; Gordon, 1989). The AMMA consists of 30 pairs of piano melodies. Participants are asked to decide whether the two melodies are identical, different in rhythmical patterns, or different in tonal patterns. The test yields a rhythmical score, a tonal score, and a total score (the sum of the rhythmical and tonal scores). Participant characteristics for the two groups are given in Table 1.
The study was approved by the ethics committee of the canton of Zurich and was conducted in accordance with the Declaration of Helsinki. All participants provided written informed consent and received payment for their participation.
Pitch-Labeling Task
As described above, participants completed an online pitch-labeling task at home before being invited to the laboratory. During the task (adapted from Oechslin et al., 2010), participants were instructed to identify both the pitch chroma (class, e.g., C) and the pitch height (octave, e.g., 4) of 108 pure tones. Tones ranged from C3 to B5 (tuning: A4 = 440 Hz) and had a duration of 500 ms. Immediately before and after each tone, 2,000 ms of Brownian noise were presented. In total, each tone was presented three times in a pseudorandomized order, ensuring that tones were not repeated in consecutive trials. Participants responded by selecting a label from a list of all possible labels (C3 to B5) within a maximal trial duration of 15,000 ms. Following the same scoring procedure as the other studies within the AP project (Greber et al., 2018, 2020; Brauchli et al., 2019, 2020; Burkhard et al., 2019, 2020; Leipold et al., 2019a,b,c), we quantified pitch-labeling ability as the percentage of correctly identified pitch classes without considering octave errors (Deutsch, 2013). We did not assign full or partial points to semitone errors. Accordingly, the chance level was at 8.3%.
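For illustration, a minimal sketch of this scoring scheme in R (assuming hypothetical label strings such as "C4" or "F#5"; this is not the original analysis code):

```r
# Hypothetical trial-level data: vectors of presented and reported labels such as
# "C4" or "F#5". Only the pitch class (chroma) is scored; octave errors are
# ignored, and semitone errors receive no partial credit.
pitch_class <- function(label) sub("[0-9]+$", "", label)  # strip the octave digit

labeling_score <- function(presented, reported) {
  correct <- pitch_class(presented) == pitch_class(reported)
  100 * mean(correct, na.rm = TRUE)  # percentage of correctly identified pitch classes
}

labeling_score(c("C4", "F#5", "A3"), c("C5", "F#5", "G3"))  # 66.7%; chance = 100/12 = 8.3%
```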
Auditory-Visual Stroop Task
An auditory-visual Stroop task (Stroop, 1935) was administered in the laboratory to assess the automaticity of pitch labeling (Allport et al., 1994; Itoh et al., 2005; Akiva-Kabiri and Henik, 2012; Schulze et al., 2013). This task has already been reported in another study for the same sample within the larger project on AP (Leipold et al., 2019b). During the task, auditory and visual stimuli were presented simultaneously. Both auditory and visual stimuli corresponded to C4, D4, E4, F4, and G4. The five auditory stimuli were pure tones with a duration of 500 ms (10 ms linear fade-in; 50 ms linear fade-out), created using Audacity (version 2.1.2). The visual stimuli consisted of the matching musical notations as quarter notes in treble clef. During the simultaneous presentation, the label of the tone and the name of the musical notation were either congruent or incongruent. Participants were asked to identify the visually presented musical notations as fast and accurately as possible by button press (keys labeled as C, D, E, F, or G) and to ignore the acoustically presented tones. If pitch labeling in AP musicians is automatic and difficult to suppress, AP musicians are expected to experience more cognitive interference in incongruent trials than non-AP musicians. This would be reflected by greater differences in response time between congruent and incongruent trials in AP musicians.
Response times were averaged separately for each participant and condition. Incorrect trials and response times that deviated by more than two standard deviations from the corresponding participant-and-condition-specific mean were excluded from the analysis. For each participant, we subtracted the mean response time of the congruent trials from the mean response time of the incongruent trials to quantify the Stroop effect. These differences in response times between congruent and incongruent trials were subjected to statistical group comparison.
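A minimal sketch of this trimming and subtraction in R (assuming a hypothetical long-format data frame `stroop` with columns participant, condition ("congruent"/"incongruent"), rt in ms, and a logical correct; not the original analysis code):

```r
stroop_effect <- function(stroop) {
  correct_trials <- subset(stroop, correct)
  # Keep only RTs within 2 SDs of the participant-and-condition-specific mean
  keep <- ave(correct_trials$rt, correct_trials$participant, correct_trials$condition,
              FUN = function(x) abs(x - mean(x)) <= 2 * sd(x))
  trimmed <- correct_trials[keep == 1, ]
  means <- aggregate(rt ~ participant + condition, data = trimmed, FUN = mean)
  wide  <- reshape(means, idvar = "participant", timevar = "condition", direction = "wide")
  # Stroop effect: incongruent minus congruent mean RT, per participant
  data.frame(participant   = wide$participant,
             stroop_effect = wide$rt.incongruent - wide$rt.congruent)
}
```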
EEG Experiment: Auditory Go/Nogo Continuous Performance Task
During EEG recording, participants performed an auditory continuous performance task (ACPT) requiring Go/Nogo decisions. The auditory stimuli consisted of piano tones and environmental sounds. We used piano tones instead of pure tones for this task because the pitches of piano tones are usually easier to identify than those of pure tones (Miyazaki, 1989; Van Hedger and Nusbaum, 2018; Gruhn et al., 2019). Easy recognition of the pitch class was essential so that conflicting information regarding the sameness of the stimuli could potentially arise in the second Nogo condition.
Initially, five white-key piano tones (C4, D4, E4, F4, and G4) and 10 environmental sounds (e.g., water splashes, knocking on wood) were recorded. These auditory stimuli were then preprocessed using the Audacity software (version 2.1.2). They were all shortened to 500 ms and normalized. A linear fade-in and a linear fade-out were applied to the first and last 100 ms, respectively. Additional mistuned piano tones were generated by shifting the pitch of each of the originally recorded piano tones by a quarter semitone (=25 cents) to sharp and to flat. In total, five in-tune piano tones, 10 mistuned piano tones, and 10 environmental sounds were used in the experiment.
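For reference, a 25-cent shift corresponds to a frequency ratio of 2^(25/1200) ≈ 1.0145; the short R sketch below illustrates the conversion (the actual pitch shifting was done in Audacity):

```r
# Frequency ratio for a pitch shift given in cents
# (100 cents = 1 equal-tempered semitone, 1,200 cents = 1 octave)
cents_to_ratio <- function(cents) 2^(cents / 1200)

cents_to_ratio(25)          # ~1.0145, the ratio of the quarter-semitone shift
440 * cents_to_ratio(25)    # A4 (440 Hz) shifted 25 cents sharp -> ~446.4 Hz
440 * cents_to_ratio(-25)   # A4 shifted 25 cents flat -> ~433.7 Hz
```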
The ACPT task consisted of 400 trials. Before starting the task, participants were asked to perform a few practice trials to check whether they had understood the task instruction. After 200 trials, participants were allowed to take a short break.
In each trial, two of the auditory stimuli were presented one after the other via Bose Companion 2 Series III external speakers (Bose Corporation, Framingham, MA, USA) at a sound pressure of approximately 75 dB using the ERPrec software (Version 2.0.x, Bee Medic GmbH, Germany). Trials lasted 3,800 ms with an interstimulus interval of 1,000 ms. The first stimulus was presented 300 ms after the start of the trial for a duration of 500 ms. After 500-ms presentation of the second stimulus, participants were given 1,500 ms to indicate a response. A black fixation cross on a white background was presented on the screen during the entire task.
There were four different trial categories: Go trials, Nogo trials with in-tune tones (Nogoit), Nogo trials with mistuned tones (Nogomt), and Ignore trials (compare Figure 1). All four trial categories were presented in randomized order and with equal probability (100 trials each). Participants were instructed to press the left mouse button with the right index finger as quickly and as accurately as possible whenever two identical piano tones were presented successively. The first stimulus was either an in-tune piano tone or an environmental sound. Thus, a piano tone as the first stimulus served as a cue for a potential button press, whereas an environmental sound indicated that no action was necessary (Ignore trial). In Go trials, the first piano tone was followed by an identical piano tone, thus requiring a button press. In Nogoit trials, the second stimulus was also an in-tune piano tone but belonged to a different pitch class (e.g., E4 followed by G4). In Nogomt trials, the second stimulus was one of the slightly mistuned variants of the first stimulus (e.g., E4 followed by the 25-cents-sharp deviation of E4). In both Nogoit and Nogomt trials, participants had to withhold pressing the button. In Nogomt trials, the pitch labels of the two successive tones were identical, but the tone frequencies were not. Thus, in the case of automatic pitch labeling, these trials contain conflicting information about the sameness of the two stimuli. If potential automatic labeling interferes with the task demands, AP musicians should demonstrate signs of a higher inhibitory load (i.e., larger N2dmt or P3dmt amplitudes, and/or higher error rates) compared to non-AP musicians in Nogomt trials.
Figure 1. Graphical representation of the four trial categories in the auditory continuous performance task (ACPT). Participants were instructed to press a mouse button as fast and accurately as possible when a piano tone was followed by an identical piano tone (Go, depicted in green). When the second piano tone was a non-identical in-tune piano tone (Nogoit, depicted in violet) or a mistuned variant of the first piano tone (Nogomt, depicted in orange; 25-cents deviation of the first stimulus), participants had to inhibit the prepared response. When the first stimulus was an environmental noise instead of a piano tone, no response had to be prepared (Ignore, depicted in black). Stimuli had a duration of 500 ms, and the interstimulus interval had a duration of 1,000 ms.
Performance in the ACPT task was quantified as mean response time, number of omission errors, and number of commission errors. Response times were analyzed for correct Go trials and were measured as the time elapsed between the onset of the second stimulus and button press. Response times shorter than 200 ms and longer than 1,500 ms were not included in the average. Failures to respond in Go trials were counted as omission errors. Failures to inhibit a button press in Nogoit and Nogomt trials were counted as commission errors. Trials in which a button press occurred between the first and the second stimulus were excluded from the behavioral analysis.
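The sketch below illustrates these performance measures for a single participant in R (assuming a hypothetical data frame `acpt` with columns category, responded, and rt; not the original analysis code):

```r
acpt_summary <- function(acpt) {
  go <- subset(acpt, category == "Go")
  # Correct Go trials with RTs inside the 200-1,500 ms window
  valid_rt <- go$responded & go$rt >= 200 & go$rt <= 1500
  data.frame(
    mean_rt        = mean(go$rt[valid_rt]),
    omissions      = sum(!go$responded),                               # misses in Go trials
    commissions_it = sum(acpt$category == "Nogoit" & acpt$responded),  # false alarms
    commissions_mt = sum(acpt$category == "Nogomt" & acpt$responded)
  )
}
```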
EEG Recording and Preprocessing
For the EEG recording, participants were seated in an electrically shielded room and were instructed to fixate their gaze on a black cross on a white screen during the task. Before the experimental task, 3 min of eyes-open and 3 min of eyes-closed resting state were acquired. Continuous EEG was recorded from 31 scalp sites using an electrode cap (Comby EEG Cap, Pamel, Croatia), a Neuroamp®x39 amplifier (Bee Medic GmbH, Germany), and the ERPrec recording software (Version 2.0.x, Bee Medic GmbH, Germany). The silver/silver chloride electrodes were placed according to a subset of the 10/10 system (Fp1, Fpz, Fp2, F7, F3, Fz, F4, F8, FT7, FC3, FCz, FC4, FT8, T3, C3, Cz, C4, T4, TP7, CP3, CPz, CP4, TP8, T5, P3, Pz, P4, T6, O1, Oz, O2) and referenced to linked earlobes. Impedances of all electrodes were kept below 10 kΩ using an abrasive, electrically conductive gel (OneStep EEG-Gel, H + H Medizinprodukt GbR, Germany). The sampling rate was 500 Hz and no online filters were applied. After recording, data was converted to EDF+ using the xdf2eeg file converter implemented in the ERPrec software. During file conversion, a high-pass filter (Butterworth, 1st order) of 0.16 Hz and a fixed range scaling factor were applied to the EEG signal.
The converted data was subsequently preprocessed using the BrainVision Analyzer software package (Version 2.1, BrainProducts, Germany). First, the data was filtered with a bandpass filter (Butterworth, 8th order) of 1–30 Hz and a notch filter of 50 Hz. Next, eye movement artifacts were corrected using a restricted infomax independent component analysis (ICA; Jung et al., 2000). Noisy channels were excluded from the ICA and interpolated after ICA correction. Remaining artifacts were marked using an automatic raw data inspection with the following exclusion criteria: amplitude gradient >50 μV/ms, amplitude difference >100 μV within an interval of 200 ms, amplitude <−100 μV, amplitude >+100 μV, and activity <0.5 μV within an interval of 100 ms.
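These criteria can be illustrated with a small helper applied to a single-channel segment (an R sketch under the thresholds stated above; the actual rejection was performed in BrainVision Analyzer):

```r
# x: single-channel segment in microvolts, sampled at fs Hz
is_artifact <- function(x, fs = 500) {
  ms_per_sample <- 1000 / fs
  w200 <- round(200 / ms_per_sample)   # samples spanning 200 ms
  w100 <- round(100 / ms_per_sample)   # samples spanning 100 ms
  max_gradient <- max(abs(diff(x))) / ms_per_sample                    # uV/ms
  max_diff_200 <- max(sapply(seq_len(length(x) - w200),
                             function(i) diff(range(x[i:(i + w200)]))))
  min_act_100  <- min(sapply(seq_len(length(x) - w100),
                             function(i) diff(range(x[i:(i + w100)]))))
  max_gradient > 50 ||                 # amplitude gradient > 50 uV/ms
    max_diff_200 > 100 ||              # > 100 uV difference within 200 ms
    any(x < -100) || any(x > 100) ||   # absolute amplitude beyond +/- 100 uV
    min_act_100 < 0.5                  # activity below 0.5 uV within 100 ms
}
```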
Then, ERPs evoked by the second stimulus were computed separately for the three cued conditions (Go, Nogoit, and Nogomt) and for each participant. The EEG signal was divided into artifact-free segments of 1,100 ms (−100 to +1,000 ms from the onset of the second stimulus), and baseline correction (−100 to 0 ms from the onset of the second stimulus) was applied. Only trials with a correct response (button press in Go; no button press in Nogoit and Nogomt) were included in the ERP averages. Grand and group averages of the ERPs at Fz, Cz, and Pz are shown in Figure 2. Supplementary Figure 1 shows the grand averages of the ERPs at all 31 electrodes. Supplementary Figures 2–4 show the ERPs at all 31 electrodes separately for the two groups.
Figure 2. Grand averages of the event-related potentials (ERPs) evoked by the second stimulus. (A) Grand averages over all participants for the three cued conditions (Go in green, Nogoit in violet, and Nogomt in orange). Shaded areas depict the 95% within-subject confidence interval. (B) Grand averages computed separately for AP musicians (red) and non-AP musicians (blue). Shaded areas depict the 95% between-subject confidence interval.
Two difference waves were computed by subtracting the participant-specific ERP evoked in the Go condition from the participant-specific ERPs evoked in the two inhibition conditions (Nogoit minus Go and Nogomt minus Go). The N2 and P3 ERP components on the difference waves (N2d and P3d) were quantified as mean amplitudes at the three midline electrodes Fz, Cz, and Pz. Compared to peak amplitudes, mean amplitudes are more robust, less affected by latency variability between trials, and not biased by the noise level and the number of trials (Clayson et al., 2013; Luck, 2014). The definition of the time windows was based on the grand averages of the difference waves over all participants at electrode Cz (compare Figure 3). Because the onset and extent of the N2d and P3d differed between the two conditions, separate time windows were selected for Nogoit-Go and Nogomt-Go. From now on, ERP components obtained from the difference wave between Nogoit and Go will be referred to as N2dit and P3dit. ERP components obtained from the difference wave between Nogomt and Go will be referred to as N2dmt and P3dmt. Mean amplitudes were computed for N2dit between 100 and 140 ms, for P3dit between 180 and 420 ms, for N2dmt between 150 and 270 ms, and for P3dmt between 320 and 660 ms after stimulus onset. Time windows and topographies of the components are shown in Figures 3A,B, respectively. Visualizations of topographies and ERPs were created using functions from the R package eegUtils (Craddock, 2018). Supplementary Figure 5 shows the difference waves at all 31 electrodes averaged across all participants. Supplementary Figures 6 and 7 show the difference waves at all 31 electrodes separately for the two groups.
Figure 3. Grand averages and topographies of the difference waves (Nogoit minus Go, Nogomt minus Go). (A) Grand averages of the two difference waves are shown separately for the two groups at electrode Cz (red: AP musicians, blue: non-AP musicians). Time windows used for the computation of mean amplitudes are indicated by the gray-shaded areas. (B) Topographies for the four ERP components of interest (N2dit, P3dit, N2dmt, and P3dmt) are shown separately for AP and non-AP musicians.
Mean amplitudes and time series of the difference wave ERPs at electrodes Fz, Cz, and Pz were exported for statistical analyses.
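A minimal sketch of the difference-wave and mean-amplitude computation for one participant and one electrode (hypothetical vectors `erp_go`, `erp_nogo_it`, and `erp_nogo_mt`; not the original analysis code):

```r
# Average ERPs in microvolts, sampled at 500 Hz from -100 to +1,000 ms
# relative to the onset of the second stimulus
times <- seq(-100, 1000, by = 2)   # 2-ms sampling interval at 500 Hz

mean_amplitude <- function(wave, times, from, to) mean(wave[times >= from & times <= to])

# Difference waves (Nogo minus Go) and component windows as defined at Cz:
# d_it  <- erp_nogo_it - erp_go
# d_mt  <- erp_nogo_mt - erp_go
# N2dit <- mean_amplitude(d_it, times, 100, 140)
# P3dit <- mean_amplitude(d_it, times, 180, 420)
# N2dmt <- mean_amplitude(d_mt, times, 150, 270)
# P3dmt <- mean_amplitude(d_mt, times, 320, 660)
```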
Statistical Analysis
All statistical analyses were performed using R (version 3.4.3; R Core Team, 2017). The significance level was set to α = 0.05 for all statistical analyses unless stated otherwise. We compared the participant characteristics, the AMMA scores (rhythmical, tonal, and total), the Stroop effect, and the pitch-labeling score between the two groups using two-tailed Welch’s t-tests. Additionally, we computed Pearson correlations between the pitch-labeling scores and the Stroop effect across all participants as well as within each group.
Group differences in the performance measures of the ACPT (i.e., mean response times, omission errors, commission errors in Nogoit trials, and commission errors in Nogomt trials) were also evaluated using Welch’s t-tests. Effect sizes for t-tests are given as Cohen’s d (Cohen, 1988).
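For illustration, these group comparisons can be sketched in R as follows (assuming a hypothetical data frame `perf` with one row per participant and a hypothetical column name `commission_it`; not the original analysis code):

```r
# Welch's t-test (unequal variances assumed) comparing a performance measure between groups
welch <- t.test(commission_it ~ group, data = perf, var.equal = FALSE)

# Cohen's d based on the pooled standard deviation of the two groups
cohens_d <- function(x, y) {
  sp <- sqrt(((length(x) - 1) * var(x) + (length(y) - 1) * var(y)) /
               (length(x) + length(y) - 2))
  (mean(x) - mean(y)) / sp
}
```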
To analyze group differences in the EEG data, we used two different approaches. First, we performed a traditional ERP-component analysis: we compared the N2d and P3d mean amplitudes between the two groups separately for the Nogoit-Go and the Nogomt-Go condition. For each component (N2dit, P3dit, N2dmt, P3dmt), we computed a 2 × 3 ANOVA with between-subject factor Group (AP and non-AP) and within-subject factor Electrode Site (Fz, Cz, Pz) using the R package ez (version 4.4.0; Lawrence, 2016). P-values and degrees of freedom were adjusted with the Greenhouse-Geisser correction for nonsphericity when appropriate. Effect sizes for the main effects and interactions are given as generalized eta-squared (ηG²; Bakeman, 2005). To quantify the relative evidence for the alternative hypothesis (H1) and the null hypothesis (H0), we additionally report Bayes factors for the mean amplitudes. Bayes factors compare the (marginal) likelihood of the data between two hypotheses (i.e., H1 and H0). Contrary to frequentist statistics, this allows for conclusions about the evidence in support of H0 (Dienes, 2011, 2014). The likelihood ratio expressed by a Bayes factor can be interpreted as follows: a BF10 of 5 (equivalently, a BF01 of 0.2) indicates that the observed data are five times more likely under H1 than under H0. To make the interpretation more straightforward for the reader, we report BF10 when the relative evidence is in favor of H1, and BF01 when the relative evidence is in favor of H0.
We computed the Bayes factors using the R package BayesFactor (version 0.9.12-4.2; Morey et al., 2018). We used the default settings implemented in the BayesFactor package for the number of iterations (n = 10,000) and for the prior scale parameter (r = 0.707 for Bayesian t-tests; r = 0.5 for Bayesian ANOVAs). To assess the two main effects of the Bayesian ANOVAs (i.e., group and electrode), the model with one factor (e.g., group + subject) was compared to the model with both factors (e.g., group + electrode + subject). For the interaction effect, the full model (group + electrode + group * electrode + subject) was compared to the model without the interaction effect (group + electrode + subject).
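A sketch of how such model comparisons can be set up with the BayesFactor package (assuming a hypothetical long-format data frame `amp` with one mean amplitude per participant and electrode; the numeric indices of the compared models depend on the order in which anovaBF lists them):

```r
library(BayesFactor)

# amp: columns subject, group, electrode (all coded as factors) and amplitude
bf <- anovaBF(amplitude ~ group * electrode + subject, data = amp,
              whichRandom = "subject", rscaleFixed = 0.5, iterations = 10000)
bf   # Bayes factors of all candidate models against the subject-only null model

# Example comparisons (check the printed model order before indexing):
# main effect of electrode: (group + electrode + subject) / (group + subject)
# interaction:              full model / (group + electrode + subject)
# e.g., bf[3] / bf[1]  and  bf[4] / bf[3]
```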
Second, we adopted a more data-driven approach to analyze the difference waves. Using cluster-based permutation tests implemented in the R package permuco (version 1.0.2; Frossard and Renaud, 2019), we performed a 2 × 2 ANOVA with between-subject factor Group (AP and non-AP) and within-subject factor Condition (Nogoit-Go and Nogomt-Go) at each time point after stimulus onset (0 to 1,000 ms). This analysis was conducted separately for each of the three electrodes (Fz, Cz, and Pz). To control for multiple comparisons over time points, threshold-free cluster enhancement (TFCE; Smith and Nichols, 2009; Mensen and Khatami, 2013) was combined with non-parametric maximum permutation statistics. The TFCE procedure incorporates neighborhood information (i.e., time points close to each other tend to correlate) and does not require an arbitrary cluster-forming threshold. The same procedure was repeated 5,000 times using randomly permuted versions of the original dataset. From each permutation step, the maximal TFCE score was obtained to form an empirical null distribution, to which the TFCE scores from the original dataset were compared.
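A sketch of the TFCE-corrected point-wise ANOVA with permuco at a single electrode (hypothetical objects `signal` and `design`; not the original analysis code):

```r
library(permuco)

# signal: matrix with one row per observation (participant x condition) and one
#         column per time point (0-1,000 ms)
# design: data frame with matching group, condition, and subject factors
fit <- clusterlm(signal ~ group * condition + Error(subject/(condition)),
                 data = design, multcomp = "tfce", np = 5000)
summary(fit)   # TFCE-corrected p-values over time for each effect
```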
Results
Demographic and Behavioral Data
The two groups were comparable in age (t(100.97) = 1.33, p = 0.19, d = 0.26), crystallized intelligence (MWT-B: t(102.86) = −1.49, p = 0.14, d = 0.29), fluid intelligence (KAI: t(100.82) = −1.54, p = 0.13, d = 0.30), age of onset of musical training (t(102.42) = −1.20, p = 0.23, d = 0.23), and cumulative musical training hours over the lifespan (t(99.71) = 1.43, p = 0.16, d = 0.28). AP musicians scored slightly higher in the AMMA total score (t(100.99) = 2.14, p = 0.035, d = 0.42). Analyses of the subtests revealed that this effect was driven by higher AMMA tonal scores in AP musicians (t(100.61) = 2.44, p = 0.016, d = 0.48). In the AMMA rhythmical score, AP and non-AP musicians were comparable (t(101.38) = 1.53, p = 0.13, d = 0.30). In the pitch-labeling task, AP musicians performed considerably better than non-AP musicians (t(102.93) = 13.95, p < 0.001, d = 2.72; see Figure 4A). In the auditory-visual Stroop task, AP musicians showed a larger incongruence effect than non-AP musicians (t(102.65) = 2.78, p = 0.007, d = 0.54; see Figure 4B), indicating difficulties to suppress pitch labeling in this task. Across the whole sample, pitch-labeling scores were positively correlated with the size of the incongruence effect in the auditory-visual Stroop task (r = 0.24, p = 0.015). Within the groups, there was no evidence for a relationship between pitch-labeling scores and the size of the Stroop effect (AP: r = −0.12, p = 0.37; non-AP: r = 0.22, p = 0.12).
Figure 4. Performance in the pitch-labeling task and the auditory-visual Stroop task for AP musicians (n = 54, depicted in red) and non-AP musicians (n = 51, depicted in blue). (A) AP musicians showed a substantially better pitch-labeling ability. The dashed line represents the chance level of 8.3%. (B) The incongruence effect (response time difference between congruent and incongruent trials) in the Stroop task was more pronounced in AP musicians than in non-AP musicians. This indicates that the pitch labeling was difficult to suppress for our sample of AP musicians.
ACPT Performance Data
Musicians with AP and musicians without AP showed comparable error rates for omission errors (t(92.40) = 0.70, p = 0.48, d = 0.14), commission errors in Nogoit trials (mean AP musicians = 0.19, SD AP musicians = 0.44, mean non-AP musicians = 0.22, SD non-AP musicians = 0.54; t(96.23) = −0.32, p = 0.75, d = 0.06), and commission errors in Nogomt trials (mean AP musicians = 1.76, SD AP musicians = 4.83, mean non-AP musicians = 1.96, SD non-AP musicians = 2.69; t(83.95) = −0.27, p = 0.79, d = 0.05). Response times in Go trials were on average slightly longer in AP musicians (mean response time = 781.37 ms, SD = 188.27 ms) than in non-AP musicians (mean response time = 719.94 ms, SD = 159.40 ms), but the difference was not statistically significant (t(101.81) = −1.81, p = 0.073, d = 0.35). Performance measures are shown in Figures 5A,B.
Figure 5. Performance in the ACPT. (A) Response times in Go trials revealed no evidence for a group difference between AP musicians (red) and non-AP musicians (blue). (B) Response error rates for omissions (no button press in Go trials) and false alarms (failure to inhibit button press in Nogoit and Nogomt trials). There was no evidence for a group difference with regard to the three error types.
EEG Data: N2d and P3d Mean Amplitudes
Mean amplitudes of the ERP components are shown in Figure 6. The two-way ANOVA of the N2dit amplitudes did not reveal a main effect of Group (F(1,103) = 0.82, p = 0.37, ηG² = 0.006, BF01 = 2.32), a main effect of Electrode (F(1.33,137.16) = 2.16, p = 0.14, ηG² = 0.004, BF01 = 4.03), or an interaction effect (F(1.33,137.16) = 0.12, p = 0.80, ηG² < 0.001, BF01 = 14.10). The analysis of the P3dit amplitudes also revealed no main effect of Group (F(1,103) = 0.24, p = 0.62, ηG² = 0.002, BF01 = 2.68) and no Group × Electrode interaction (F(1.41,145.26) = 0.77, p = 0.42, ηG² = 0.001, BF01 = 8.94), but did reveal a main effect of Electrode (F(1.41,145.26) = 145.92, p < 0.001, ηG² = 0.21, BF10 = 1.04 * 10^37). According to pairwise comparisons, P3dit amplitudes were smaller at Pz (mean = 1.49 μV, SD = 2.84 μV) than at Fz (mean = 5.06 μV, SD = 3.49 μV, t(104) = 11.60, p < 0.001, d = 1.13, BF10 = 2.62 * 10^17) and at Cz (mean = 5.03 μV, SD = 3.43 μV, t(104) = 18.89, p < 0.001, d = 1.84, BF10 = 1.23 * 10^32).
Figure 6. Mean amplitudes of the ERP components. (A) ERP components of the Nogoit-Go difference wave. AP musicians (depicted in red) and non-AP musicians (depicted in blue) had comparable N2dit and P3dit mean amplitudes. (B) ERP components of the Nogomt-Go difference wave. The 2 × 2 ANOVA revealed a Group × Electrode interaction for the P3dmt component. Post hoc analyses showed no evidence for a difference between AP and non-AP musicians at any electrode site.
The two-way ANOVA of the N2dmt amplitudes similarly revealed a main effect of Electrode (F(1.32,135.71) = 27.50, p < 0.001, ηG² = 0.035, BF10 = 2.25 * 10^8). Again, pairwise comparisons showed that amplitudes were less pronounced at electrode Pz (mean = −1.25 μV, SD = 1.93 μV) than at electrode Fz (mean = −2.12 μV, SD = 2.12 μV, t(104) = −5.25, p < 0.001, d = 0.51, BF10 = 1.62 * 10^4) and electrode Cz (mean = −1.96 μV, SD = 2.02 μV, t(104) = −7.05, p < 0.001, d = 0.69, BF10 = 4.62 * 10^7). We found neither a main effect of Group (F(1,103) = 2.87, p = 0.093, ηG² = 0.024, BF01 = 1.01) nor an interaction effect (F(1.32,135.71) = 2.93, p = 0.078, ηG² = 0.004, BF01 = 1.20) for the N2dmt amplitudes.
The evaluation of the P3dmt amplitudes did not reveal a main effect of Group (F(1,103) = 0.05, p = 0.82, ηG² < 0.001, BF01 = 2.52), but did reveal a main effect of Electrode (F(1.50,154.31) = 21.89, p < 0.001, ηG² = 0.022, BF10 = 1.89 * 10^6) and a Group × Electrode interaction (F(1.50,154.31) = 6.22, p = 0.006, ηG² = 0.006, BF10 = 12.69). The post hoc t-tests comparing the P3dmt amplitudes between the two groups at each electrode provided no evidence for a difference between AP and non-AP musicians at any of the three electrodes (Fz: t(100) = −1.40, p = 0.16, d = 0.27, BF01 = 2.01; Cz: t(102.9) = 0.006, p = 0.995, d = 0.001, BF01 = 4.85; Pz: t(101.9) = 0.65, p = 0.52, d = 0.13, BF01 = 4.01).
EEG Data: Cluster-Based Permutation Test
The non-parametric cluster-based permutation tests indicated an effect of condition at all three analyzed electrode sites (p = 0.0002). This corresponded to two clusters in the analyzed time window at each electrode site. At electrode Fz, the first cluster extended from 140 to 364 ms, and the second cluster extended from 382 to 788 ms. At electrode Cz, the effect was driven by a cluster between 136 and 356 ms, and a cluster between 372 and 902 ms. At electrode Pz, the corresponding clusters extended from 168 to 332 ms and from 360 to 806 ms. The clusters are shown in Figure 7. Descriptively, N2d and P3d amplitudes were time-shifted between the two conditions, giving rise to the two detected clusters. Additionally, N2d amplitudes were substantially smaller in the Nogoit-Go condition compared to the Nogomt-Go condition.
Figure 7. Cluster-based permutation tests revealed a main effect of Condition at each of the three electrodes (Fz, Cz, and Pz). Clusters are indicated by grey-shaded areas. The N2d and P3d of the Nogomt-Go difference wave (depicted in orange) appeared later after stimulus onset than the corresponding ERP components of the Nogoit-Go difference wave (depicted in violet). The Nogoit-Go difference wave showed a considerably smaller N2d amplitude at all electrodes as well as a smaller P3d amplitude at electrode Pz than the Nogomt-Go difference wave.
Regarding group differences, the cluster-based permutation tests revealed no evidence for an effect of Group or for an interaction between Group and Condition at any of the three electrode sites. All clusters found for these effects had p-values above 0.15. The difference waves at electrodes Fz, Cz, and Pz are shown in Figure 8 separately for the two conditions and the two groups.
Figure 8. Difference waves shown separately for the two groups at the three electrodes (Fz, Cz, and Pz). Cluster-based permutation tests provided no evidence for a group difference between AP and non-AP musicians in the difference waves at any of the electrodes. The shaded areas represent the 95% between-subject confidence interval.
Discussion
In the present study, we investigated whether the postulated highly automatic pitch labeling in AP affects subsequent inhibitory processes. We used a cued auditory Go/Nogo task requiring same/different judgments for pairs of consecutively presented piano tones. In Go trials, the two piano tones were identical. In Nogoit trials, the second piano tone was always in-tune and differed by at least one semitone from the first piano tone. In Nogomt trials, the second tone was a quarter-semitone mistuned variant of the first piano tone. While the Nogoit condition tested if auditory-related inhibitory processes are generally altered in AP musicians, the Nogomt condition tested more specifically the suppressibility of pitch labeling by introducing contradictory information about the sameness of the stimuli (same pitch class, different tone frequency). We analyzed both behavioral and electrophysiological measures to evaluate a potential change in inhibitory load in AP musicians. For the electrophysiological measures, we adopted two analysis approaches: First, we conducted a traditional ERP analysis using mean amplitudes for the N2d and P3d components. Second, we performed a cluster-based permutation analysis to test the complete ERP segment. Our data did not provide evidence for a group difference in commission errors, N2d amplitudes, P3d amplitudes, or overall difference wave ERPs for either of the two Nogo conditions. There was also no evidence for a group difference in response times and omission errors in Go trials.
Previous neurophysiological studies have repeatedly reported group differences between AP and non-AP musicians during attentive listening without labeling instruction (Zatorre et al., 1998; Ohnishi et al., 2001; Wu et al., 2008; Wengenroth et al., 2014; Burkhard et al., 2019; Leipold et al., 2019a). In the current study, we tested whether these AP-specific alterations in neurophysiological activity modify the need for inhibition. The behavioral and electrophysiological results obtained from the Nogoit trials do not support this hypothesis: The inhibitory processes in an auditory Go/Nogo task do not seem to have been influenced by absolute pitch. Whether or not the differential tone processing affects subsequent cognitive processes and the associated neurophysiological measures may strongly depend on the specific task. Such a dependence of AP-specific effects on situational factors has previously been demonstrated with regard to the influential finding of reduced P3b amplitudes in AP musicians in active auditory oddball tasks (Klein et al., 1984). Since the P3b is thought to reflect working memory processes (for a review, see Kok, 2001; Polich, 2007), it was inferred from the original finding of smaller P3b amplitudes that AP musicians may not need to update their working memory during the task because they can access permanent pitch templates. Some of the subsequent studies replicated the effect (Hantz et al., 1992; Wayman et al., 1992; Crummer et al., 1994) while others did not (Hantz et al., 1995; Hirose et al., 2002). Bischoff Renninger et al. (2003) integrated these heterogeneous findings by demonstrating that the AP musicians employed different listening strategies (i.e., absolute pitch or relative pitch) depending on task difficulty and task instruction. Active oddball tasks are structured similarly to Go/Nogo tasks, but target tones appear only infrequently (Kropotov, 2009). The instruction to focus on tone-frequency changes in the current study might have encouraged a relative-pitch rather than an absolute-pitch listening strategy, preventing us from observing AP-specific effects.
Furthermore, the listening instruction might not only influence later task-related cognitive processes in AP musicians but also the differential tone processing per se. As described above, group differences between AP and non-AP musicians in neurophysiological activity have been repeatedly observed during attentive listening. However, in several studies, the mismatch negativity, an ERP component evoked during passive listening in passive oddball tasks, did not differ between AP and non-AP musicians (Tervaniemi et al., 1993; Rogenmoser et al., 2015; Greber et al., 2018). Thus, the focus of attention could play a role in whether and to what extent subprocesses of pitch labeling and associated neurophysiological activations are automatically triggered by acoustic stimuli. In the current study, participants did have to pay close attention to the presented tones but were instructed to concentrate on one specific dimension of the stimuli (i.e., the sameness of tone frequency), which was independent of the pitch class. In contrast to findings during attentive listening without an additional task (Burkhard et al., 2019), the visual inspection of the group-averaged ERPs (Figure 2B) shows comparable N1 and P2 amplitudes in response to the Go stimulus for AP and non-AP musicians. This could indicate that the AP-specific neurophysiological activity thought to be automatically induced by simply listening to tones was not present in our task, and, consequently, no additional inhibitory efforts would have been required.
Finally, it is also possible that the measures used in the current study were simply not sensitive enough to capture subtle group differences in inhibition. For instance, a ceiling effect occurred for the task performance of both AP and non-AP musicians, which was most evident in Nogoit trials. Since the musicians had no difficulty solving the task, small increases in inhibitory load may not have shown up in the error rates.
Compared to the Nogoit condition, the Nogomt condition additionally evaluated whether the AP musicians were able to suppress conflicting pitch-labeling information. The assumption that pitch-labeling information is difficult to suppress stems mainly from auditory Stroop tasks (Miyazaki, 2004; Itoh et al., 2005; Hsieh and Saberi, 2008; Akiva-Kabiri and Henik, 2012; Schulze et al., 2013). Usually, AP musicians are asked to name a sung syllable or to label a visually presented note while ignoring the pitch of the sung syllable or a simultaneously presented musical tone. AP musicians reliably show an increased interference effect of incongruent stimuli or incongruent stimulus dimensions compared to non-AP musicians. Consistent with the literature, our behavioral auditory-visual Stroop task evoked a greater incongruence effect in AP musicians than in non-AP musicians. This suggests that the auditory-visual Stroop task did activate pitch-labeling processes in our sample of AP musicians, which then interfered with the labeling of the visual notes.
In Nogomt trials, tone frequency and pitch class of the second stimulus also provided contradictory information in a Stroop-like manner. Hence, we expected that, analogous to the Stroop task, AP musicians would process both the task-relevant (i.e., tone frequency) and the task-irrelevant stimulus dimension (i.e., pitch class) due to the automaticity of pitch labeling, resulting in an increased inhibitory load compared to non-AP musicians. Instead, we found no evidence for a group difference in behavioral or electrophysiological measures for Nogomt. These results suggest that AP musicians can successfully control irrelevant pitch-label information in the context of a Go/Nogo task with same/different judgments. Given the results from the Stroop task, it appears that the task context and the corresponding task demands might critically influence whether conflicting pitch-labeling information hinders performance. Contrary to incongruent trials in the Stroop task, the pitch label (e.g., “C”) of the second tone, considered in isolation, had no semantic overlap with the required response (i.e., “different”) in the Nogomt trials. Rather, the information extracted from the pitch labels of both tones (e.g., C followed by C equals the same pitch label; “same”) did not match the information of the tone frequency comparison (i.e., different tone frequencies; “different”). Contradictory pitch-labeling information might predominantly impair performance when the task itself specifically requires a response of the same semantic category (i.e., a musical label as in naming a visual note or a sung tone label). A recent study investigated the strength of association between pitch information and verbal labels in musicians using a Stroop paradigm (Sharma et al., 2019). The study included three different Stroop tasks that required high pitch/low pitch judgments of sung syllables tuned to either 261.3 Hz (C4; low) or 392 Hz (G4; high). The sung syllables could be congruent or incongruent with the pitch height in terms of English words (/low/ and /high/), English solmization syllables (/do/ and /so/), or key notations (/see/ and /jee/). The incongruence effect on response times was attenuated for solmization syllables compared to English words in both AP and non-AP musicians. For key notations, there was no evidence for an incongruence effect on response times. It appears that these verbal labels were semantically not as strongly mapped to the high/low response. Most interestingly, this was even the case for AP musicians. Although the sung label (as key notation or solmization syllable) was semantically incongruent with the pitch-labeling information, they showed comparable incongruence effects on response times as non-AP musicians. Linguistic information conflicting with pitch-labeling information did not further impair the task performance of AP musicians for high/low judgments. It should be noted that the AP musicians but not the non-AP musicians did show a significant incongruence effect on ERP measures in the key-notation task. However, the absence of evidence for an incongruence effect in non-AP musicians is not sufficient to conclude that there is a group difference without a direct statistical group comparison of the effect. Taken together, AP musicians may be able to ignore task-irrelevant conflicts with pitch-labeling information depending on the specific task and context.
Considering that automatic processes are often described as obligatory, stimulus-driven, and requiring little to no attention (Palmeri et al., 2004; Palmeri, 2006), the findings of the current study may indicate that pitch labeling in AP is less automatic than previously assumed.
It is, however, important to note that the evidence in favor of H0 as indicated by the Bayes factors was only anecdotal or inconclusive (BF01 < 3). To get more conclusive evidence within the Bayesian framework (i.e., that there is no difference between the ERPs of the two groups, or that there is a group difference), an even larger sample would be needed. Unfortunately, due to the rarity of AP as well as the time-consuming and resource-intensive data acquisition in neuroscience, it is challenging for a single research group to recruit large numbers of AP musicians. For future research, collaborations between multiple research groups might be helpful in this regard.
While there was no evidence for a group difference, the cluster-based permutation analysis did reveal an effect of condition for the ERP difference waves. Visual inspection of the two corresponding clusters (compare Figure 7) shows that the N2d of the Nogoit-Go ERP was vanishingly small at all electrodes analyzed (Fz, Cz, and Pz) whereas the N2d of the Nogomt-Go ERP was more pronounced and prolonged. Also, the P3d was latency-shifted and slightly larger for the Nogomt-Go ERP compared to the Nogoit-Go ERP.
Small or even absent N2 effects as in Nogoit have been repeatedly observed in auditory Go/Nogo tasks (Schröger, 1993; Falkenstein et al., 1995, 1999, 2002; Kiefer et al., 1998). Initially, this phenomenon was attributed to the stimulus modality, as visual stimuli seemed to consistently elicit larger N2 effects (e.g., Falkenstein et al., 1995, 1999, 2002). However, later studies demonstrated that the relative N2 amplitude may depend more on the perceptual overlap between target and non-target stimuli than on the modality (Nieuwenhuis et al., 2004; Azizian et al., 2006; Smith and Douglas, 2011). Non-target stimuli that are more similar to the target stimulus may generate a stronger tendency to (erroneously) respond, and, thus, require greater inhibition efforts (Azizian et al., 2006). Differences in perceptual similarity could explain the N2d condition difference found in the present study. In Nogomt trials, the target and the non-target stimuli were much more similar (quarter-semitone difference) than in Nogoit trials (difference of at least one semitone). This was paralleled by an increase in N2d amplitude. The more pronounced and prolonged N2d for Nogomt then probably shifted the latency of the P3d. The P3d itself likewise showed a larger amplitude for Nogomt than for Nogoit, mainly noticeable at the parietal electrode Pz. Hence, the amplitude of the P3d might have also been sensitive to the degree of perceptual overlap. An increase in both amplitude and latency of the Nogo-P3 due to higher stimulus similarity has been previously reported for visual stimuli (Azizian et al., 2006). A second study, on the other hand, found comparable P3 effects for similar and dissimilar stimuli in the auditory and visual domain (Smith and Douglas, 2011). However, even the similar acoustic stimuli in that study differed by 165 cents (1,000 Hz/1,100 Hz; a difference of about one and a half semitones) compared to 25 cents in the current study. Thus, the P3 effect might only be affected by a very high perceptual overlap.
Even though the condition effect appears to be consistent with previous findings on the perceptual similarity between target and non-target stimuli, we cannot exclude the possibility that the ERPs were additionally modulated by the tuning or mistuning of the tones. In our Go/Nogo task, the first piano tone was always in-tune in Go, Nogoit, and Nogomt trials because the intonation context can influence pitch classification in AP musicians (Van Hedger et al., 2018). By constantly providing in-tune tones, we hoped to ensure that the AP musicians’ internal intonation matched the intonation of the tones. This, combined with the frequency spacing applied, resulted in the second tones being mistuned in all Nogomt trials and in-tune in all Nogoit trials. Therefore, we are not able to distinguish the contributions of the tuning of the second stimulus (in-tune vs. mistuned) from the contributions of the frequency distance between the first and second stimulus (≥1 semitone in Nogoit vs. a quarter semitone in Nogomt) to the condition difference. To disentangle these two effects, future studies could use mistuned tones with a greater frequency distance to the first stimulus (e.g., D4 followed by a sharp-mistuned F4). This would also make it possible to include non-musicians in the sample to evaluate the influence of musical experience, which unfortunately was not feasible with the current task paradigm. During pilot testing, participants without musical training were not able to discriminate the quarter-semitone frequency changes in Nogomt trials. The small number of correct trials resulted in too few EEG segments (i.e., between one and six out of 100 Nogomt trials) to compute reliable ERPs. It is well established that non-musicians have higher discrimination thresholds than musicians (Spiegel and Watson, 1984; Micheyl et al., 2006). We nevertheless deliberately chose the quarter-semitone difference so that the second tone would still be recognized by AP musicians as belonging to the same pitch category as the first tone. Had we chosen a larger fraction of a semitone as the frequency difference to the first tone (e.g., E4), the second tone might have been assigned to a different pitch category (e.g., E4♯ or E4♭).
Further limitations of our study concern the pitch-labeling task. As can be seen in Figure 4A, the pitch-labeling scores overlapped between the two groups, with some self-identified AP musicians performing worse than some self-identified non-AP musicians. This overlap may be attributed to three features of our pitch-labeling task. First, because participants had to choose one out of 36 possible response options, each trial could last up to 15 s. This relatively long response window proved to be necessary during pilot tests. Unfortunately, it may have allowed some of the non-AP musicians to use their relative-pitch ability to solve the task. One possibility to better distinguish AP and non-AP musicians based on pitch-labeling performance would be to include both response accuracy and latency information in a combined score (as suggested by Bermudez and Zatorre, 2009). The reconstruction of pitch labels based on a relative-pitch strategy is expected to take more time than genuine AP (see also Miyazaki, 1990). In the current study, the online implementation in an unstandardized setting at home (e.g., some participants used a computer mouse, some a touch screen, and others the trackpad of their laptop to submit the responses) in combination with the 36-item multiple-choice format did not allow us to collect meaningful response time data. Future studies could reduce the item list by only asking for the pitch chroma irrespective of the octave. For accurate response time measures, the response options could then be arranged in a circular shape with equal distance to the starting point of the cursor (e.g., as done in Sharma et al., 2019). Another possibility for a better distinction between AP and non-AP musicians would be to prevent non-AP musicians from accessing relative-pitch cues. Wengenroth et al. (2014) proposed inserting non-harmonic and distorted interference stimuli between the tones for this purpose. For AP musicians, unpublished data from our lab (n = 39) suggests a strong correlation (r = 0.7) between our online pitch-labeling task and the original paper-pencil test of our group (Oechslin et al., 2010), which had shorter interstimulus intervals (4 s) and was conducted in a controlled setting. Thus, the longer interstimulus interval in the online implementation probably affects non-AP musicians more strongly than AP musicians.
A second feature of the pitch-labeling task that might have affected the score distribution is the use of pure-tone stimuli. Pure tones do not give an advantage to any specific group of instrumentalists based on their familiarity with a timbre (see Takeuchi and Hulse, 1993). However, pitch identification is generally more challenging for pure tones than for instrumental sounds (Miyazaki, 1989; Gruhn et al., 2019). In a study by Van Hedger and Nusbaum (2018), self-reported AP possessors achieved an accuracy between 75% and 100% (mean: 95.4%) for a mixture of piano and guitar tones, but only between 25% and 100% (mean: 56.4%) for pure tones. In our sample of AP musicians, the accuracy for pure tones was higher still (range: 36.1%–100%, mean: 76.4%). It is therefore quite possible that our AP musicians would likewise have shown higher accuracy rates for instrumental sounds. Future studies might consider including both pure and instrumental tones to obtain a better estimate of pitch-labeling ability.
Third, the tones in the pitch-labeling test were tuned to a standard reference of A4 = 440 Hz. This might have disadvantaged AP musicians who often play music tuned to a different reference tone (e.g., baroque tuning). Studies that categorize AP and non-AP musicians based on pitch-labeling performance could incorporate information about the musicians’ primary reference tone in the scoring procedure.
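One straightforward implementation would be to express a musician’s habitual A4 reference as a deviation in cents from the 440-Hz standard and to additionally credit responses that are consistent with that reference. The sketch below only computes the offset; the reference values of 442 Hz and 415 Hz are illustrative, and the concrete rescoring rule would have to be specified by the study:

```python
import math

def reference_offset_cents(reference_a4_hz: float, standard_a4_hz: float = 440.0) -> float:
    """Deviation (in cents) of a musician's habitual A4 reference from the 440-Hz standard."""
    return 1200.0 * math.log2(reference_a4_hz / standard_a4_hz)

print(round(reference_offset_cents(442.0), 1))  # slightly higher orchestral tuning: ~ +7.9 cents
print(round(reference_offset_cents(415.0), 1))  # baroque tuning: ~ -101.3 cents (about one semitone)
```

Because a 415-Hz reference differs from the standard by roughly a full semitone, responses shifted by one pitch class relative to the 440-Hz scoring key could then be identified and, if appropriate, credited.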
Within our AP project, the pitch-labeling task served only as an additional validation tool; as set out above, it is not optimally suited to distinguish AP musicians from non-AP musicians by itself. Most importantly, by relying on self-report for group assignment, we did not have to apply an arbitrary cutoff for group construction and did not risk erroneously assigning well-performing non-AP musicians to the AP group.
Conclusion
The current study provided no evidence for an effect of AP on behavioral or neurophysiological measures of inhibition in a cued auditory Go/Nogo task. The results from the Nogomt condition further suggest that AP musicians can suppress pitch-labeling information depending on the task demands. Given the results from the bimodal Stroop task, it remains unclear under which circumstances subprocesses of pitch labeling are activated and to what extent these processes can be considered automatic. While the ERPs were not modulated by AP, there was a condition difference between the two Nogo conditions, which probably reflects a modulation by the perceptual similarity between target and non-target stimuli.
Data Availability Statement
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: Open Science Framework: https://osf.io/f5nkx/.
Ethics Statement
The studies involving human participants were reviewed and approved by Kantonale Ethikkommission Zürich. The patients/participants provided their written informed consent to participate in this study.
Author Contributions
MG and LJ designed research and wrote the article. MG performed research and analyzed data. All authors contributed to the article and approved the submitted version.
Funding
This work was funded by the Swiss National Science Foundation (SNSF), grant number 320030_163149 to LJ.
Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Acknowledgments
We wish to thank our research interns and research assistants Anna Speckert, Chantal Oderbolz, Fabian Demuth, Florence Bernays, Isabel Hotz, Joëlle Albrecht, Kathrin Baur, Laura Keller, Melek Haçan, Nicole Hedinger, Pascal Misala, Petra Meier, Piyush Rauch, Sarah Appenzeller, Tenzin Dotschung, Valerie Hungerbühler, Vanessa Vallesi, and Vivienne Kunz for their invaluable assistance with data collection. We are very grateful to Silvano Sele for his assistance with the statistical analyses and his helpful comments on earlier drafts of the manuscript. We thank Simon Leipold, Christian Brauchli, and Anja Burkhard for their contributions within the larger project on absolute pitch. We would also like to thank Carina Klein, Stefan Elmer, and all members of the Auditory Research Group Zurich (ARGZ) for their feedback on the experimental procedure.
Footnotes
- http://www.kek.zh.ch
- http://www.audacityteam.org/
- https://www.brainproducts.com/
- https://www.r-project.org
- https://cran.r-project.org/web/packages/ez/index.html
- https://cran.r-project.org/web/packages/BayesFactor/index.html
- https://cran.r-project.org/web/packages/permuco/index.html
Supplementary Material
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fnhum.2020.585505/full#supplementary-material.
References
Akiva-Kabiri, L., and Henik, A. (2012). A unique asymmetrical stroop effect in absolute pitch possessors. Exp. Psychol. 59, 272–278. doi: 10.1027/1618-3169/a000153
Albert, J., López-Martín, S., Hinojosa, J. A., and Carretié, L. (2013). Spatiotemporal characterization of response inhibition. NeuroImage 76, 272–281. doi: 10.1016/j.neuroimage.2013.03.011
Allport, A., Styles, E. A., and Hsieh, S. (1994). “Shifting intentional set: exploring the dynamic control of tasks,” in Attention and Performance XV, eds C. Umilta and M. Moscovitch (Cambridge, MA: MIT Press), 421–452.
Annett, M. (1970). A classification of hand preference by association analysis. Br. J. Psychol. 61, 303–321. doi: 10.1111/j.2044-8295.1970.tb01248.x
Azizian, A., Freitas, A. L., Parvaz, M. A., and Squires, N. K. (2006). Beware misleading cues: perceptual similarity modulates the N2/P3 complex. Psychophysiology 43, 253–260. doi: 10.1111/j.1469-8986.2006.00409.x
Baharloo, S., Johnston, P. A., Service, S. K., Gitschier, J., and Freimer, N. B. (1998). Absolute pitch: an approach for identification of genetic and nongenetic components. Am. J. Hum. Genet. 62, 224–231. doi: 10.1086/301704
Bakeman, R. (2005). Recommended effect size statistics for repeated measures designs. Behav. Res. Methods 37, 379–384. doi: 10.3758/bf03192707
Bermudez, P., and Zatorre, R. J. (2009). A distribution of absolute pitch ability as revealed by computerized testing. Music Percept. 27, 89–101. doi: 10.1525/mp.2009.27.2.89
Bischoff Renninger, L., Granot, R. I., and Donchin, E. (2003). Absolute pitch and the P300 component of the event-related potential: an exploration of variables that may account for individual differences. Music Percept. 20, 357–382. doi: 10.1525/mp.2003.20.4.357
Bokura, H., Yamaguchi, S., and Kobayashi, S. (2001). Electrophysiological correlates for response inhibition in a Go/NoGo task. Clin. Neurophysiol. 112, 2224–2232. doi: 10.1016/s1388-2457(01)00691-5
Brauchli, C., Leipold, S., and Jäncke, L. (2019). Univariate and multivariate analyses of functional networks in absolute pitch. NeuroImage 189, 241–247. doi: 10.1016/j.neuroimage.2019.01.021
Brauchli, C., Leipold, S., and Jäncke, L. (2020). Diminished large-scale functional brain networks in absolute pitch during the perception of naturalistic music and audiobooks. NeuroImage 216:116513. doi: 10.1016/j.neuroimage.2019.116513
Bruin, K. J., and Wijers, A. A. (2002). Inhibition, response mode, and stimulus probability: a comparative event-related potential study. Clin. Neurophysiol. 113, 1172–1182. doi: 10.1016/s1388-2457(02)00141-4
Bruin, K. J., Wijers, A. A., and Van Staveren, A. S. J. (2001). Response priming in a go/nogo task: do we have to explain the go/nogo N2 effect in terms of response activation instead of inhibition? Clin. Neurophysiol. 112, 1660–1671. doi: 10.1016/s1388-2457(01)00601-0
Burkhard, A., Elmer, S., and Jäncke, L. (2019). Early tone categorization in absolute pitch musicians is subserved by the right-sided perisylvian brain. Sci. Rep. 9:1419. doi: 10.1038/s41598-018-38273-0
Burkhard, A., Hänggi, J., Elmer, S., and Jäncke, L. (2020). The importance of the fibre tracts connecting the planum temporale in absolute pitch possessors. NeuroImage 211:116590. doi: 10.1016/j.neuroimage.2020.116590
Clayson, P. E., Baldwin, S. A., and Larson, M. J. (2013). How does noise affect amplitude and latency measurement of event-related potentials (ERPs)? A methodological critique and simulation study. Psychophysiology 50, 174–186. doi: 10.1111/psyp.12001
Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences. 2nd Edn. Hillsdale, NJ: Erlbaum.
Craddock, M. (2018). craddm/eegUtils: eegUtils. Available online at: https://doi.org/10.5281/ZENODO.1292902.
Crummer, G. C., Walton, J. P., Wayman, J. W., Hantz, E. C., and Frisina, R. D. (1994). Neural processing of musical timbre by musicians, nonmusicians, and musicians possessing absolute pitch. J. Acoust. Soc. Am. 95, 2720–2727. doi: 10.1121/1.409840
Deutsch, D. (2013). “Absolute pitch,” in The Psychology of Music, ed. D. Deutsch (San Diego, CA: Elsevier), 141–182.
Dienes, Z. (2011). Bayesian versus orthodox statistics: which side are you on? Perspect. Psychol. Sci. 6, 274–290. doi: 10.1177/1745691611406920
Dienes, Z. (2014). Using Bayes to get the most out of non-significant results. Front. Psychol. 5:781. doi: 10.3389/fpsyg.2014.00781
Donkers, F. C. L., and Van Boxtel, G. J. M. (2004). The N2 in go/no-go tasks reflects conflict monitoring not response inhibition. Brain Cogn. 56, 165–176. doi: 10.1016/j.bandc.2004.04.005
Dooley, K., and Deutsch, D. (2010). Absolute pitch correlates with high performance on musical dictation. J. Acoust. Soc. Am. 128, 890–893. doi: 10.1121/1.3458848
Dooley, K., and Deutsch, D. (2011). Absolute pitch correlates with high performance on interval naming tasks. J. Acoust. Soc. Am. 130, 4097–4104. doi: 10.1121/1.3652861
Enriquez-Geppert, S., Konrad, C., Pantev, C., and Huster, R. J. (2010). Conflict and inhibition differentially affect the N200/P300 complex in a combined go/nogo and stop-signal task. NeuroImage 51, 877–887. doi: 10.1016/j.neuroimage.2010.02.043
Falkenstein, M., Hoormann, J., and Hohnsbein, J. (1999). ERP components in Go/Nogo tasks and their relation to inhibition. Acta Psychol. 101, 267–291. doi: 10.1016/s0001-6918(99)00008-6
Falkenstein, M., Hoormann, J., and Hohnsbein, J. (2002). Inhibition-related ERP components: variation with modality, age, and time-on-task. J. Psychophysiol. 16, 167–175. doi: 10.1027//0269-8803.16.3.167
Falkenstein, M., Koshlykova, N. A., Kiroj, V. N., Hoormann, J., and Hohnsbein, J. (1995). Late ERP components in visual and auditory Go/Nogo tasks. Electroencephalogr. Clin. Neurophysiol. 96, 36–43. doi: 10.1016/0013-4694(94)00182-k
Folstein, J. R., and Van Petten, C. (2008). Influence of cognitive control and mismatch on the N2 component of the ERP: a review. Psychophysiology 45, 152–170. doi: 10.1111/j.1469-8986.2007.00602.x
Frossard, J., and Renaud, O. (2019). permuco: Permutation Tests for Regression, (Repeated Measures) ANOVA/ANCOVA and Comparison of Signals. Available online at: https://cran.r-project.org/package=permuco.
Gajewski, P. D., and Falkenstein, M. (2013). Effects of task complexity on ERP components in Go/Nogo tasks. Int. J. Psychophysiol. 87, 273–278. doi: 10.1016/j.ijpsycho.2012.08.007
Gordon, E. E. (1989). Manual for the Advanced Measures of Music Audiation. Chicago, IL: GIA Publications.
Greber, M., Klein, C., Leipold, S., Sele, S., and Jäncke, L. (2020). Heterogeneity of EEG resting-state brain networks in absolute pitch. Int. J. Psychophysiol. 157, 11–22. doi: 10.1016/j.ijpsycho.2020.07.007
Greber, M., Rogenmoser, L., Elmer, S., and Jäncke, L. (2018). Electrophysiological correlates of absolute pitch in a passive auditory oddball paradigm: a direct replication attempt. eNeuro 5:ENEURO.0333-18.2018. doi: 10.1523/ENEURO.0333-18.2018
Gregersen, P. K., Kowalsky, E., Kohn, N., and Marvin, E. W. (1999). Absolute pitch: prevalence, ethnic variation, and estimation of the genetic component. Am. J. Hum. Genet. 65, 911–913. doi: 10.1086/302541
Gregersen, P. K., Kowalsky, E., Kohn, N., and Marvin, E. W. (2001). Early childhood music education and predisposition to absolute pitch: teasing apart genes and environment. Am. J. Med. Genet. 98, 280–282. doi: 10.1002/1096-8628(20010122)98:3<280::aid-ajmg1083>3.0.co;2-6
Gruhn, W., Ristmägi, R., Schneider, P., D’Souza, A., and Kiilu, K. (2019). How stable is pitch labeling accuracy in absolute pitch possessors? Empir. Musicol. Rev. 13:110. doi: 10.18061/emr.v13i3-4.6637
Hantz, E. C., Crummer, G. C., Wayman, J. W., Walton, J. P., and Frisina, R. D. (1992). Effects of musical training and absolute pitch on the neural processing of melodic intervals: a P3 event-related potential study. Music Percept. 10, 25–42. doi: 10.2307/40285536
Hantz, E. C., Kreilick, K. G., Braveman, A. L., and Swartz, K. P. (1995). Effects of musical training and absolute pitch on a pitch memory task: an event-related potential study. Psychomusicology 14, 53–76. doi: 10.1037/h0094091
Hirose, H., Kubota, M., Kimura, I., Ohsawa, M., Yumoto, M., and Sakakihara, Y. (2002). People with absolute pitch process tones with producing P300. Neurosci. Lett. 330, 247–250. doi: 10.1016/s0304-3940(02)00812-1
Hsieh, I. H., and Saberi, K. (2008). Language-selective interference with long-term memory for musical pitch. Acta Acust. Acust. 94, 588–593. doi: 10.3813/aaa.918068
Huster, R. J., Enriquez-Geppert, S., Lavallee, C. F., Falkenstein, M., and Herrmann, C. S. (2013). Electroencephalography of response inhibition tasks: functional networks and cognitive contributions. Int. J. Psychophysiol. 87, 217–233. doi: 10.1016/j.ijpsycho.2012.08.001
Itoh, K., Suwazono, S., Arao, H., Miyazaki, K., and Nakada, T. (2005). Electrophysiological correlates of absolute pitch and relative pitch. Cereb. Cortex 15, 760–769. doi: 10.1093/cercor/bhh177
Jodo, E., and Kayama, Y. (1992). Relation of a negative ERP component to response inhibition in a Go/No-go task. Electroencephalogr. Clin. Neurophysiol. 82, 477–482. doi: 10.1016/0013-4694(92)90054-l
Jung, T.-P., Makeig, S., Humphries, C., Lee, T.-W., McKeown, M. J., Iragui, V., et al. (2000). Removing electroencephalographic artifacts by blind source separation. Psychophysiology 37, 163–178. doi: 10.1111/1469-8986.3720163
Kiefer, M., Marzinzik, F., Weisbrod, M., Scherg, M., and Spitzer, M. (1998). The time course of brain activations during response inhibition: evidence from event-related potentials in a go/no go task. Neuroreport 9, 765–770. doi: 10.1097/00001756-199803090-00037
Klein, M., Coles, M. G. H., and Donchin, E. (1984). People with absolute pitch process tones without producing a P300. Science 223, 1306–1309. doi: 10.1126/science.223.4642.1306
Kok, A. (2001). On the utility of P300 amplitude as a measure of processing capacity. Psychophysiology 38, 557–577. doi: 10.1017/s0048577201990559
Kropotov, J. D. (Ed.) (2009). “Methods: neuronal networks and event-related potentials,” in Quantitative EEG, Event-Related Potentials and Neurotherapy (San Diego, CA: Academic Press), 325–365.
Kropotov, J. D., Ponomarev, V. A., Pronina, M., and Jäncke, L. (2017). Functional indexes of reactive cognitive control: ERPs in cued Go/Nogo tasks. Psychophysiology 54, 1899–1915. doi: 10.1111/psyp.12960
Lawrence, M. A. (2016). ez: Easy Analysis and Visualization of Factorial Experiments. Available online at: https://cran.r-project.org/package=ez.
Lehrl, S., and Fischer, B. (1992). Kurztest für Allgemeine Basisgrössen der Informationsverarbeitung (KAI). 3rd Edn. Ebersberg: Vless Verlag.
Leipold, S., Brauchli, C., Greber, M., and Jäncke, L. (2019a). Absolute and relative pitch processing in the human brain: neural and behavioral evidence. Brain Struct. Funct. 224, 1723–1738. doi: 10.1007/s00429-019-01872-2
Leipold, S., Greber, M., Sele, S., and Jäncke, L. (2019b). Neural patterns reveal single-trial information on absolute pitch and relative pitch perception. NeuroImage 200, 132–141. doi: 10.1016/j.neuroimage.2019.06.030
Leipold, S., Oderbolz, C., Greber, M., and Jäncke, L. (2019c). A reevaluation of the electrophysiological correlates of absolute pitch and relative pitch: no evidence for an absolute pitch-specific negativity. Int. J. Psychophysiol. 137, 21–31. doi: 10.1016/j.ijpsycho.2018.12.016
Leite, R. B. C., Mota-Rolim, S. A., and Queiroz, C. M. T. (2016). Music proficiency and quantification of absolute pitch: a large-scale study among Brazilian musicians. Front. Neurosci. 10:447. doi: 10.3389/fnins.2016.00447
Levitin, D. J., and Rogers, S. E. (2005). Absolute pitch: perception, coding, and controversies. Trends Cogn. Sci. 9, 26–33. doi: 10.1016/j.tics.2004.11.007
Luck, S. J. (2014). An Introduction to the Event-Related Potential Technique. 2nd Edn. Cambridge, MA: MIT Press.
MacLeod, C. M. (1991). Half a century of research on the Stroop effect: an integrative review. Psychol. Bull. 109, 163–203. doi: 10.1037/0033-2909.109.2.163
Mensen, A., and Khatami, R. (2013). Advanced EEG analysis using threshold-free cluster-enhancement and non-parametric statistics. NeuroImage 67, 111–118. doi: 10.1016/j.neuroimage.2012.10.027
Micheyl, C., Delhommeau, K., Perrot, X., and Oxenham, A. J. (2006). Influence of musical and psychoacoustical training on pitch discrimination. Hear. Res. 219, 36–47. doi: 10.1016/j.heares.2006.05.004
Miyazaki, K. (1989). Absolute pitch identification: effects of timbre and pitch region. Music Percept. 7, 1–14. doi: 10.2307/40285445
Miyazaki, K. (1990). The speed of musical pitch identification by absolute-pitch possessors. Music Percept. 8, 177–188. doi: 10.2307/40285495
Miyazaki, K. (1992). Perception of musical intervals by absolute pitch possessors. Music Percept. 9, 413–426. doi: 10.2307/40285562
Miyazaki, K. (1993). Absolute pitch as an inability: identification of musical intervals in a tonal context. Music Percept. 11, 55–71. doi: 10.2307/40285599
Miyazaki, K. (2004). “The auditory Stroop interference and the irrelevant speech/pitch effect: absolute-pitch listeners can’t suppress pitch labeling,” in Proceedings of the 18th International Congress on Acoustics (Kyoto, Japan: Japan Press), 3619–3622.
Miyazaki, K., and Rakowski, A. (2002). Recognition of notated melodies by possessors and nonpossessors of absolute pitch. Percept. Psychophys. 64, 1337–1345. doi: 10.3758/bf03194776
Morey, R. D., Rouder, J. N., and Jamil, T. (2018). Computation of Bayes Factors for Common Designs. R Package Version 0.9.12–4.2. Available online at: https://cran.r-project.org/web/packages/BayesFactor/index.html. Accessed May 31, 2018.
Nieuwenhuis, S., Yeung, N., and Cohen, J. D. (2004). Stimulus modality, perceptual overlap and the go/no-go N2. Psychophysiology 41, 157–160. doi: 10.1046/j.1469-8986.2003.00128.x
Nieuwenhuis, S., Yeung, N., Van Den Wildenberg, W., and Ridderinkhof, K. R. (2003). Electrophysiological correlates of anterior cingulate function in a go/no-go task: Effects of response conflict and trial type frequency. Cogn. Affect. Behav. Neurosci. 3, 17–26. doi: 10.3758/cabn.3.1.17
Oechslin, M. S., Meyer, M., and Jäncke, L. (2010). Absolute pitch-functional evidence of speech-relevant auditory acuity. Cereb. Cortex 20, 447–455. doi: 10.1093/cercor/bhp113
Ohnishi, T., Matsuda, H., Asada, T., Aruga, M., Hirakata, M., Nishikawa, M., et al. (2001). Functional anatomy of musical perception in musicians. Cereb. Cortex 11, 754–760. doi: 10.1093/cercor/11.8.754
Palmeri, T. J. (2006). “Automaticity,” in Encyclopedia of Cognitive Science, ed L. Nadel (Chichester: John Wiley and Sons, Limited), 1–12. doi: 10.1002/0470018860.s00488
Palmeri, T. J., Wong, A. C.-N., and Gauthier, I. (2004). Computational approaches to the development of perceptual expertise. Trends Cogn. Sci. 8, 378–386. doi: 10.1016/j.tics.2004.06.001
Pfefferbaum, A., and Ford, J. M. (1988). ERPs to stimuli requiring response production and inhibition: effects of age, probability and visual noise. Electroencephalogr. Clin. Neurophysiol. 71, 55–63. doi: 10.1016/0168-5597(88)90019-6
Pfefferbaum, A., Ford, J. M., Weller, B. J., and Kopell, B. S. (1985). ERPs to response production and inhibition. Electroencephalogr. Clin. Neurophysiol. 60, 423–434. doi: 10.1016/0013-4694(85)91017-x
Polich, J. (2007). Updating P300: an integrative theory of P3a and P3b. Clin. Neurophysiol. 118, 2128–2148. doi: 10.1016/j.clinph.2007.04.019
Profita, J., and Bidder, T. G. (1988). Perfect pitch. Am. J. Med. Genet. 29, 763–771. doi: 10.1002/ajmg.1320290405
R Core Team. (2017). R: A Language and Environment for Statistical Computing. Available online at: https://www.r-project.org/.
Rogenmoser, L., Elmer, S., and Jäncke, L. (2015). Absolute Pitch: evidence for early cognitive facilitation during passive listening as revealed by reduced P3a amplitudes. J. Cogn. Neurosci. 27, 623–637. doi: 10.1162/jocn_a_00708
Rogenmoser, L., Li, C. H., Jäncke, L., and Schlaug, G. (2020). Auditory aversion in absolute pitch possessors. bioRxiv [Preprint]. doi: 10.1101/2020.06.13.145029
Schröger, E. (1993). Event-related potentials to auditory stimuli following transient shifts of spatial attention in a Go/Nogo task. Biol. Psychol. 36, 183–207. doi: 10.1016/0301-0511(93)90017-3
Schulze, K., Mueller, K., and Koelsch, S. (2013). Auditory stroop and absolute pitch: an fMRI study. Hum. Brain Mapp. 34, 1579–1590. doi: 10.1002/hbm.22010
Sharma, V. V., Thaut, M., Russo, F., Alain, C., and Hutka, S. A. (2019). Absolute pitch and musical expertise modulate neuro-electric and behavioral responses in an auditory stroop paradigm. Front. Neurosci. 13:932. doi: 10.3389/fnins.2019.00932
Smith, J. L., and Douglas, K. M. (2011). On the use of event-related potentials to auditory stimuli in the Go/NoGo task. Psychiatry Res. 193, 177–181. doi: 10.1016/j.pscychresns.2011.03.002
Smith, J. L., Johnstone, S. J., and Barry, R. J. (2008). Movement-related potentials in the Go/NoGo task: the P3 reflects both cognitive and motor inhibition. Clin. Neurophysiol. 119, 704–714. doi: 10.1016/j.clinph.2007.11.042
Smith, S. M., and Nichols, T. E. (2009). Threshold-free cluster enhancement: addressing problems of smoothing, threshold dependence and localisation in cluster inference. NeuroImage 44, 83–98. doi: 10.1016/j.neuroimage.2008.03.061
Spiegel, M. F., and Watson, C. S. (1984). Performance on frequency-discrimination tasks by musicians and nonmusicians. J. Acoust. Soc. Am. 76, 1690–1695. doi: 10.1121/1.391605
Stroop, J. R. (1935). Studies of interference in serial verbal reactions. J. Exp. Psychol. 18, 643–662. doi: 10.1037/h0054651
Takeuchi, H., and Hulse, S. H. (1993). Absolute pitch. Psychol. Bull. 113, 345–361. doi: 10.1037/0033-2909.113.2.345
Tervaniemi, M., Alho, K., Paavilainen, P., Sams, M., and Näätänen, R. (1993). Absolute pitch and event-related brain potentials. Music Percept. 10, 305–316. doi: 10.2307/40285572
Van Hedger, S. C., Heald, S. L. M., Uddin, S., and Nusbaum, H. C. (2018). A note by any other name: intonation context rapidly changes absolute note judgments. J. Exp. Psychol. Hum. Percept. Perform. 44, 1268–1282. doi: 10.1037/xhp0000536
Van Hedger, S. C., and Nusbaum, H. C. (2018). Individual differences in absolute pitch performance: contributions of working memory, musical expertise, and tonal language background. Acta Psychol. 191, 251–260. doi: 10.1016/j.actpsy.2018.10.007
Wayman, J. W., Frisina, R. D., Walton, J. P., Hantz, E. C., and Crummer, G. C. (1992). Effects of musical training and absolute pitch ability on event-related activity in response to sine tones. J. Acoust. Soc. Am. 91, 3527–3531. doi: 10.1121/1.402841
Wengenroth, M., Blatow, M., Heinecke, A., Reinhardt, J., Stippich, C., Hofmann, E., et al. (2014). Increased volume and function of right auditory cortex as a marker for absolute pitch. Cereb. Cortex 24, 1127–1137. doi: 10.1093/cercor/bhs391
Wu, C., Kirk, I. J., Hamm, J. P., and Lim, V. K. (2008). The neural networks involved in pitch labeling of absolute pitch musicians. Neuroreport 19, 851–854. doi: 10.1097/WNR.0b013e3282ff63b1
Yeung, N., Botvinick, M. M., and Cohen, J. D. (2004). The neural basis of error detection: conflict monitoring and the error-related negativity. Psychol. Rev. 111, 931–959. doi: 10.1037/0033-295x.111.4.931
Keywords: absolute pitch (AP), Go/Nogo, musicians, auditory, inhibition, event-related potential (ERP)
Citation: Greber M and Jäncke L (2020) Suppression of Pitch Labeling: No Evidence for an Impact of Absolute Pitch on Behavioral and Neurophysiological Measures of Cognitive Inhibition in an Auditory Go/Nogo Task. Front. Hum. Neurosci. 14:585505. doi: 10.3389/fnhum.2020.585505
Received: 20 July 2020; Accepted: 15 October 2020;
Published: 12 November 2020.
Edited by:
Tetsuo Kida, Institute for Developmental Research, Japan
Reviewed by:
Stephen Charles Van Hedger, Western University (Canada), Canada
Peter Schneider, Heidelberg University, Germany
Copyright © 2020 Greber and Jäncke. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Marielle Greber, marielle.greber@uzh.ch; Lutz Jäncke, lutz.jaencke@uzh.ch