- 1Department of Speech and Hearing Science, Arizona State University, Tempe, AZ, United States
- 2Interdisciplinary Graduate Neuroscience Program, Arizona State University, Tempe, AZ, United States
- 3Barrow Neurological Institute and St. Joseph's Hospital and Medical Center, Phoenix, AZ, United States
The neurobiology of sentence comprehension is well-studied, but the properties and characteristics of sentence processing networks remain unclear and highly debated. Sign languages (i.e., visual-manual languages), like spoken languages, have complex grammatical structures and thus can provide valuable insights into the specificity and function of brain regions supporting sentence comprehension. The present study aims to characterize how these well-studied spoken language networks can adapt in adults to be responsive to sign language sentences, which contain combinatorial semantic and syntactic visual-spatial linguistic information. Twenty native English-speaking undergraduates who had completed introductory American Sign Language (ASL) courses viewed videos of the following conditions during fMRI acquisition: signed sentences, signed word lists, English sentences, and English word lists. Overall, our results indicate that native language (L1) sentence processing resources are responsive to ASL sentence structures in late second language (L2) learners, but that certain L1 sentence processing regions respond differently to L2 ASL sentences, likely due to the nature of their contribution to language comprehension. For example, L1 sentence regions in Broca's area were significantly more responsive to L2 than to L1 sentences, supporting the hypothesis that Broca's area contributes to sentence comprehension as a cognitive resource when increased processing is required. Anterior temporal L1 sentence regions were sensitive to L2 ASL sentence structure but demonstrated no significant difference in activation between L1 and L2, suggesting that their contribution to sentence processing is modality-independent. Posterior superior temporal L1 sentence regions also responded to ASL sentence structure but were more activated by English than by ASL sentences. An exploratory analysis of the neural correlates of L2 ASL proficiency indicates that ASL proficiency is positively correlated with increased activation in response to ASL sentences in L1 sentence processing regions. Overall, these results suggest that the well-established fronto-temporal spoken language networks involved in sentence processing exhibit functional plasticity with late L2 ASL exposure, and thus are adaptable to syntactic structures widely different from those in an individual's native language. Our findings also provide valuable insights into the unique contributions of the inferior frontal and superior temporal regions that are frequently implicated in sentence comprehension but whose exact roles remain highly debated.
Introduction
The neurobiology of sentence comprehension has been extensively studied for decades. Yet, there remains intense debate regarding the nature and specificity of the contributions of several left fronto-temporal brain regions to sentence comprehension. The vast majority of previous work regarding the neural correlates of sentence comprehension has investigated spoken languages. This work has identified a left-lateralized fronto-temporo-parietal network that is activated by the presence of sentence structures, compared to a variety of acoustic controls, with the most commonly implicated regions being Broca's area (the posterior two-thirds of the left inferior frontal gyrus), the posterior superior temporal gyrus, and the anterior temporal cortex (Dronkers et al., 2004; Hickok and Poeppel, 2007; Magnusdottir et al., 2013). Although these regions are frequently identified in studies of sentence comprehension, their respective contributions remain controversial. For example, the role of Broca's area has been attributed to cognitive resources including working memory and cognitive control (Just et al., 1996; Kaan and Swaab, 2002; Novick et al., 2005; Rogalsky et al., 2008; Pettigrew and Hillis, 2014), hierarchical structure-building (Friederici, 2009; Makuuchi et al., 2012), and syntax-specific resources (Grodzinsky, 2000; Grodzinsky and Santi, 2008). Anterior temporal contributions have been attributed to combinatorial semantics (Dapretto and Bookheimer, 1999; Vandenberghe et al., 2002; Ferstl et al., 2005; Pallier et al., 2011; Bemis and Pylkkänen, 2013), semantic processing more generally (Wong and Gallate, 2012; Wilson et al., 2014), prosody (Phillips et al., 1998; Adolphs et al., 2002; Friederici et al., 2003; Humphries et al., 2005; Johnstone et al., 2006), and basic syntactic processing (Humphries et al., 2006; Rogalsky et al., 2011; Herrmann et al., 2012). Posterior superior temporal regions also have been implicated by many of these same studies in combinatorial semantics, syntax, and prosody (Humphries et al., 2006; Griffiths et al., 2013; Wilson et al., 2014), as well as in lexical and phonological processing (Damasio et al., 2004; Hickok and Poeppel, 2007; Graves et al., 2008).
The present study aims to investigate the response properties of these spoken language sentence-processing regions to American Sign Language (ASL) sentences in normal-hearing adults who are novice ASL learners. Previous studies of the neural substrates of sign languages (i.e., visual-manual languages) have provided valuable insights into the specificity and function of language processing brain networks. Sign languages, like spoken languages, have complex grammatical structures, sublexical features, and many other similar linguistic properties (Emmorey et al., 2002; Sandler and Lillo-Martin, 2006). Thus, by comparing across languages in different modalities, major strides have been made in understanding the overall organization and properties of the human brain's language systems independent of modality (Emmorey and McCullough, 2009). Studies investigating the neural substrates of sign languages in native deaf signers (i.e., individuals who have been deaf from a young age and learned a sign language from birth or in childhood) have found that native signers and native speakers engage highly overlapping brain networks during both language production and comprehension. For example, functional MRI studies of native deaf signers consistently indicate that ASL sentences activate the classic left hemisphere language network, including Broca's area, and anterior and posterior portions of the left superior temporal gyrus (Neville et al., 1997; Corina and McBurney, 2001; Emmorey et al., 2003; Sakai et al., 2005). In addition, studies of native deaf signers who have focal brain damage due to a stroke indicate that sign aphasias result from lesion patterns very similar to those typically associated with spoken language aphasias (e.g., left frontal damage in Broca's aphasia, left temporal/parietal damage in fluent aphasias, etc.) (Hickok et al., 1998a; Emmorey et al., 2007; Rogalsky et al., 2013).
Previous studies of bimodal bilinguals who acquired both sign and spoken languages early (i.e., before puberty) also indicate substantial overlap between brain regions engaged in comprehending spoken and signed words (Petitto et al., 2000; MacSweeney et al., 2002, 2006, 2008; Leonard et al., 2013). For example, Mayberry et al. (2011) found the same left-lateralized network engaged in lexical-semantic processing for both early-acquired spoken and sign languages. Event-related potential (ERP) findings also indicate that grammatical and semantic errors elicit similar ERP responses for both sign and spoken languages in native hearing signers (Neville et al., 1997; Bavelier et al., 1998). Together, these sign language findings suggest that when acquired early, spoken and sign languages share abstract linguistic properties and that the components of language processing and acquisition occur largely independently of the modality of input (Hickok et al., 1998a; Corina and McBurney, 2001; Emmorey, 2002), although there are some known language modality differences, particularly in parietal regions (Emmorey et al., 2007, 2014; Pa et al., 2008).
Studying late bimodal bilingualism (e.g., a native English-speaking adult learning American Sign Language) can lead to a better understanding of the neurobiology of late second language acquisition (Leonard et al., 2013). For example, spoken language bilingual studies alone cannot determine whether the adult brain's adaptation to novel lexical-semantic mappings and syntactic structures depends on the modality of presentation, or whether, for example, auditory speech regions involved in lexical processing can also adapt to lexical-semantic processing of manual signs in adulthood (i.e., well after any language critical period). No previous studies to our knowledge have investigated syntactic or sentence-level processing in late bimodal bilinguals, but the few existing neuroimaging studies addressing lexical-semantic processing in late bimodal bilingualism are summarized below.
Leonard et al. (2013) examined the neural correlates of late L2 ASL in hearing L1 English speakers who completed 40 h of ASL college-level coursework. Leonard et al. found that a single-word semantic task (word-picture matching) in spoken English, written English, and ASL evoked a very similar left-lateralized fronto-temporal network, and that ASL also engaged a right inferior parietal region more so than spoken or written English. Right inferior parietal regions also have previously been identified as an ASL-specific area of activation in early ASL-English bilinguals (Newman et al., 2002). Seemingly in conflict with these findings of right hemisphere engagement during late L2 ASL acquisition, Williams et al.'s (2016) longitudinal study of word-level processes in adult novice L2 sign language learners found that as a semester of ASL instruction progressed, left hemisphere activation increased while right hemisphere involvement decreased, in part leading to an increase in overlap with L1 neural correlates. This right-to-left hemisphere shift in activation as a function of proficiency is also seen in L2 spoken languages (Dehaene et al., 1997; Meschyan and Hernandez, 2006), and may reflect reduced engagement of domain-general cognitive control and attention resources. For example, lower L2 sign proficiency was associated with bilateral activation in the caudate nucleus and anterior cingulate cortex, neural structures previously implicated in spoken language control and in cognitive control more generally (Friederici, 2006; Abutalebi, 2008; Zou et al., 2012). Thus, the initial right parietal involvement but overall decline in right hemisphere involvement may reflect a shift from more domain-general resources to L1 processing networks for L2 ASL, in addition to engagement of an "ASL-specific" right parietal region.
Overall, the findings from late L2 ASL studies of word-level processing suggest that lexical-semantic networks are amodal and can quickly adapt to lexical-semantic information coming from a novel modality, particularly in individuals with higher L2 sign proficiency. However, it remains unknown whether this finding extends to sentence structure: can the brain networks that support sentence-level syntactic processing in spoken languages also adapt to the visual-spatial syntactic cues of a signed language?
One might assume that if lexical resources can adapt to a different modality, then syntactic resources logically could do so in kind. However, spoken language syntactic cues include temporal and/or auditory-verbal information such as word order, conjugation and declension (morphosyntactic cues), and punctuation and/or prosodic inflections, while sign language syntactic cues come in the form of visual-spatial information including location in space, movements, and face, head, and body positions. There is a robust literature indicating distinct neural resources for processing visual-spatial compared to verbal information, the most general finding being that the right hemisphere is more tuned to visual-spatial information while the left hemisphere is more specialized for language (Gazzaniga, 1989, 2000). For example, dissociations of visual-spatial and linguistic (including syntactic) impairments after brain injury or disease are well-documented in spoken language users (e.g., Glosser and Goodglass, 1990; Mosidze et al., 1994; Baldo et al., 2001). Evidence from spoken language bilingual studies also suggests a possible dissociation between the adaptability of semantic and syntactic neural resources: age of L2 acquisition has a greater effect than proficiency on the overlap of the neural correlates of L1 and L2 syntactic processing, while semantic neural resources are more affected by proficiency than by age of acquisition (Weber-Fox and Neville, 1996; Wartenburger et al., 2003). Thus, it is unknown whether the visual-spatial nature of syntactic information in sign languages affects the ability of established spoken language syntactic processing resources to adapt to new syntactic cues. Understanding the flexibility of spoken L1 sentence-processing resources in L2 ASL is therefore a worthy pursuit to better understand the response properties and adaptability of these syntactic resources critical for human language.
One confound in investigating the neural differences and similarities of a spoken L1 and a late sign L2 is that it may not be clear whether effects are due to differences in proficiency, age of acquisition, and/or the languages being in different modalities and thus having different syntactic features. However, there is a large literature in spoken language bilingualism on the variables of proficiency, age of acquisition, and syntactic similarity to inform findings in bimodal bilinguals (for thorough reviews, please see Caffarra et al., 2015 and Kotz, 2009). The findings in spoken language studies most relevant for interpreting the present study of L2 ASL syntactic processing include the following: (1) there is a "syntactic similarity effect," in that the similarity of the neural correlates of L1 and L2 is greater for languages that have similar syntactic features and structure types (Ojima et al., 2005; Chen et al., 2007; Jeong et al., 2007); (2) late L2 learners can exhibit native-like ERP signatures during L2 comprehension when proficiency is high (Rossi et al., 2006; Tanner et al., 2013); (3) significant bilateral superior temporal activation is found for most L1s and L2s regardless of proficiency or age of onset, although activation in this region is positively correlated with proficiency (Perani et al., 1998; Jeong et al., 2007); and (4) the inferior frontal gyrus is engaged more in late L2 than in L1 comprehension, particularly at low L2 proficiency (Rüschemeyer et al., 2005, 2006; Hernandez et al., 2007; Jeong et al., 2007). Together the last two points suggest what Rüschemeyer et al. (2005, 2006) call a "trade-off" between inferior frontal and superior temporal involvement, with greater IFG involvement as a function of L2 status and lower proficiency, and greater STG involvement for L1 and higher-proficiency L2. These findings likely point to the IFG supporting the learning of syntactic rules (the IFG also has been identified in studies of artificial grammar, e.g., Opitz and Friederici, 2004; Friederici et al., 2006).
The present study compares the brain networks engaged in ASL sentence comprehension in adult novice L2 ASL learners to those involved in L1 spoken language sentence comprehension. This work expands the very small literature on the neurobiology of late bimodal second language acquisition; to our knowledge no previous study has examined the brain networks supporting ASL sentence comprehension in late bimodal bilinguals. Our aim is to characterize the functional plasticity of auditory sentence processing regions for sentence structures that have similar grammatical complexity, but are represented visually and visuo-spatially, in hearing adults who are novice sign language users. We hypothesize the following: (1) ASL and English sentences will activate highly overlapping fronto-temporal networks; (2) ASL sentences will elicit more activation in Broca's area than English sentences, likely because Broca's area contributes to sentence comprehension as a cognitive resource; (3) ASL and English sentences will engage sentence processing resources in anterior and posterior temporal regions to a similar degree; and (4) ASL sentence comprehension proficiency will be negatively correlated with right hemisphere involvement.
Methods
Participants
Twenty participants were recruited from Arizona State University's (ASU) American Sign Language undergraduate program in the Department of Speech and Hearing Science. Participants (one male; age range = 18–31 years, mean age = 21.4 years) met the following criteria: native speaker of American English, monolingual, right-handed, previous completion of two semester-long (15-week) introductory courses in American Sign Language (i.e., ASL 101 and 102 at ASU or the equivalent at a community college), and current enrollment in an upper-division ASL course at the time of participation (ASL 201: n = 8; ASL 202: n = 12). The large percentage of female participants reflects the gender ratio in the ASL courses sampled. Participants reported no history of neurological or psychological disease, which was corroborated by a clinical neuroradiologist who reviewed the structural MRI scans. This study was carried out in accordance with the recommendations of the Institutional Review Boards of ASU and St. Joseph's Hospital and Medical Center with written informed consent from all subjects. All subjects gave written informed consent in accordance with the Declaration of Helsinki. The protocol was approved by the ASU Institutional Review Board and the St. Joseph's Hospital and Medical Center Institutional Review Board.
Stimuli
During fMRI acquisition, blocks of four types of stimuli were presented: ASL sentences, ASL word lists, English sentences, and English word lists. Each is described below. All ASL stimuli were digitally recorded and edited using Adobe Premiere® software.
ASL Sentences
The words used to generate the ASL sentences were taken from common nouns, verbs, adjectives, and adverbs presented in vocabulary lists used by the first-year ASL courses in which participants were previously enrolled (i.e., ASL 101, ASL 102). From this vocabulary corpus, 15 ASL sentences were generated by a fluent ASL instructor (author P.H.), who added place, movement, and expression (i.e., the inflectional morphology). P.H. has been a college ASL instructor for over 30 years and is a certified interpreter. The sentences also were reviewed by a native deaf signer (age 22, acquired ASL from birth, not a student or colleague of P.H.).
ASL sentences ranged from 5 to 10 signs long (M = 7.6) and consisted of a variety of sentence structures and clause types (e.g., “Grandmother was upset that father forgot to pick up his daughter from school,” “Yesterday the secretary answered the phone when the boss left”). A total of 15 sentences were used during scanning, while a different 10 sentences were used for the ASL proficiency measure, described below. During scanning each ASL sentence presentation block consisted of one ASL sentence and the mean duration of each block was 6.9 s (5–8 s).
ASL Word Lists
Fifteen ASL word lists were generated from the common nouns used in the ASL sentences. Each word list comprised seven nouns. All ASL sentences and word lists were signed by the same fluent signer (P.H.). During the word lists, the signer returned her hands to her lap between each signed word. The mean duration of each ASL word list block was 14.5 s (14–16 s).
English Sentences
Thirty English sentences were generated by translating into English the 15 ASL sentences used in the ASL sentence blocks, plus 15 additional sentences generated from the same ASL vocabulary corpus. English sentences ranged from 8 to 16 words (M = 11.3). Presentation of the English stimuli was based on Fedorenko et al.'s (2011) language localizer paradigm: English stimuli were presented as white text against a dark gray background. Each word in the sentence was visually presented for 350 ms. Sentences ranged in duration from 2.8 to 5.6 s (M = 3.9 s). Each English sentence presentation block consisted of two English sentences, separated by a 700 ms fixation cross. The duration of the blocks ranged from 8.8 to 9.5 s (M = 9 s).
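For concreteness, the rapid serial visual presentation timing described above can be sketched in a few lines of PsychoPy; the study itself used E-Prime, so the window settings and helper functions below are illustrative assumptions rather than the actual presentation script.

```python
# Hedged PsychoPy sketch of the word-by-word presentation (the study used
# E-Prime); window settings and function names are illustrative assumptions.
from psychopy import core, visual

win = visual.Window(color="darkgray", units="height")

def present_sentence(words):
    # Each word is shown as white text for 350 ms.
    for w in words:
        visual.TextStim(win, text=w, color="white").draw()
        win.flip()
        core.wait(0.350)

def present_sentence_block(sentence_a, sentence_b):
    # One English sentence block: two sentences separated by a 700 ms fixation.
    present_sentence(sentence_a.split())
    visual.TextStim(win, text="+", color="white").draw()
    win.flip()
    core.wait(0.700)
    present_sentence(sentence_b.split())

present_sentence_block(
    "Yesterday the secretary answered the phone when the boss left",
    "Grandmother was upset that father forgot to pick up his daughter")
```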
English Word Lists
Fifteen English word lists were generated by translating the items in the ASL word lists and additional ASL nouns from the vocabulary corpus. Each word list contained 14 nouns. Each word was presented for 350 ms resulting in a total duration of 4.9 s for each word list.
ASL Sentence Comprehension Proficiency Task
Prior to scanning, each participant's proficiency on ASL sentences (created in the same fashion as the ones presented during fMRI acquisition) was assessed outside the scanner. The proficiency test was comprised of 10 novel ASL sentences, and each sentence contained an average of 7 (range = 5–11) ASL words from the vocabulary corpus. Participants were instructed to view each video once and translate the ASL sentence into English on the paper provided (see Supplementary Video 1 for an example of an ASL sentence video). Once the test was completed, each participant's translations were given two scores: a semantic score and a syntactic score. In the semantic scoring, the vocabulary words from each sentence were scored on whether each word was correctly translated or not. If a word was translated incorrectly, the response was categorized as a semantic error, a phonological error (based on the four parameters of sign language, i.e., hand movement, hand shape, hand position, and orientation of the sign to the body), or an omission. In the syntactic scoring, the translations were scored based on the correct relation of each word to the other words in the sentence (e.g., who was the agent, what is modifying what, etc.). The average number of relations in the sentences was 5 (range = 3–6). Correct scoring of the relations was not dependent on comprehension of lexical items. For example, the ASL sentence in Supplementary Video 1, translated into English as "My uncle was driving fast and he sped past the woman in the car," has four relations to comprehend: (1) who is driving, (2) what is fast, (3) who sped, and (4) who was sped past. Semantic and syntactic scores were calculated as the proportion correct of the number of words and number of relations, respectively, averaged across all sentences.
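The scoring arithmetic is simple proportion-correct averaging; a minimal sketch follows, in which the per-sentence tallies are hypothetical placeholders rather than actual participant data.

```python
# Minimal sketch of the proficiency scoring; tallies are hypothetical.
import numpy as np

# One record per test sentence: correctly translated vocabulary words and
# correctly recovered relations (placeholder counts for two of the 10 items).
sentences = [
    {"words_correct": 5, "words_total": 7, "relations_correct": 3, "relations_total": 4},
    {"words_correct": 6, "words_total": 8, "relations_correct": 4, "relations_total": 5},
]

# Proportion correct per sentence, averaged across sentences.
semantic = np.mean([s["words_correct"] / s["words_total"] for s in sentences])
syntactic = np.mean([s["relations_correct"] / s["relations_total"] for s in sentences])

# The overall proficiency score used in the fMRI correlation analyses is the
# average of the semantic and syntactic subscores.
proficiency = (semantic + syntactic) / 2
print(f"semantic={semantic:.2f}, syntactic={syntactic:.2f}, overall={proficiency:.2f}")
```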
fMRI Experimental Design
The scanning session contained four functional runs with the order counter-balanced across participants. Within each run pseudo-randomized blocks of the four conditions (i.e., ASL sentences, ASL word lists, English sentences, and English word lists) were presented, with rest periods of 12 s interleaved between blocks. In each run, there were equal numbers of blocks from each condition. There were a total of 16 blocks in three of the runs and 12 blocks in one run. After functional scanning, a T1 structural MRI was collected. The whole scanning session lasted approximately 50 min.
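A block timeline of this kind can be written down as a BIDS-style events table, which also feeds the first-level model sketched in the analysis section below; the block order and per-block durations here are illustrative placeholders, not the actual pseudo-randomized sequences.

```python
# Illustrative events table for one 16-block run; ordering and durations are
# placeholders (the text gives mean block durations and 12 s rest periods).
import pandas as pd

mean_duration = {"ASL_sentences": 6.9, "ASL_word_lists": 14.5,
                 "English_sentences": 9.0, "English_word_lists": 4.9}

# Fixed example order standing in for the pseudo-randomized sequence
# (four blocks per condition per run).
order = list(mean_duration) * 4

rows, onset = [], 0.0
for cond in order:
    rows.append({"onset": onset, "duration": mean_duration[cond], "trial_type": cond})
    onset += mean_duration[cond] + 12.0   # 12 s rest interleaved between blocks

events = pd.DataFrame(rows)
```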
fMRI Data Acquisition
MRI scanning was performed on a 3T Philips Achieva scanner at the Keller Center for Imaging Innovation at the Barrow Neurological Institute in Phoenix, Arizona. During scanning, participants lay supine and viewed the display through Nordic Neurolab's MR-compatible high-resolution LED goggles. The visual display was synchronized to image acquisition via Nordic Neurolab's sync system with E-Prime software version 2 (Psychology Software Tools, http://www.pstnet.com/). The parameters for the functional runs were as follows: 35 axial-oblique slices (3 mm thickness), in-plane resolution = 3 × 3 mm, TR = 2 s, TE = 25 ms, flip angle = 80°, FOV = 240 × 240 mm, matrix = 80 × 80, and ascending acquisition. The numbers of volumes acquired for the four runs were 172, 174, 174, and 132, respectively. The parameters for the high-resolution T1 anatomical scan were as follows: MPRAGE sequence, 170 sagittal slices, TR = 6.742 ms, TE = 3.104 ms, flip angle = 9°, matrix = 256 × 256, voxel size = 1.1 × 1.1 × 1.2 mm.
fMRI Data Preprocessing and Analysis
Image preprocessing and analyses were completed using SPM8 (Wellcome Institute of Cognitive Neurology, London, UK). Standard preprocessing steps were implemented, including slice time correction, rigid-body motion correction, a high-pass filter at 1/128 Hz to remove low-frequency nonlinear drifts, coregistration of the functional images to each subject's T1 anatomical image, and normalization to the Montreal Neurological Institute (MNI) template. All normalized functional images were smoothed using a Gaussian filter with a full width at half maximum of 8 mm.
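For readers who prefer a scripted pipeline, a comparable preprocessing stream can be sketched with nipype's SPM interfaces; this assumes MATLAB and SPM are installed, the file names and reference slice are placeholders, and only the parameters stated above are taken from the study.

```python
# Sketch of the preprocessing stream via nipype's SPM interfaces (assumes
# MATLAB + SPM); file names and ref_slice are placeholder assumptions.
from nipype.interfaces import spm

# Slice time correction: 35 ascending slices, TR = 2 s (from the text).
st = spm.SliceTiming(in_files="run1.nii", num_slices=35,
                     time_repetition=2.0, time_acquisition=2.0 - 2.0 / 35,
                     slice_order=list(range(1, 36)), ref_slice=18)

# Rigid-body motion correction.
realign = spm.Realign(in_files="arun1.nii", register_to_mean=True)

# Coregister the subject's T1 to the mean functional image.
coreg = spm.Coregister(target="meanarun1.nii", source="T1.nii")

# Normalize to the MNI template and apply to the functional images.
norm = spm.Normalize12(image_to_align="T1.nii", apply_to_files=["rarun1.nii"])

# 8 mm FWHM Gaussian smoothing; the 1/128 Hz high-pass filter is applied
# later, at the GLM stage, as is conventional in SPM.
smooth = spm.Smooth(in_files="wrarun1.nii", fwhm=[8, 8, 8])

for step in (st, realign, coreg, norm, smooth):
    step.run()  # each step writes prefixed outputs (a-, r-, w-, s-)
```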
Individual subject analyses were conducted by constructing a general linear model for each subject. Four regressors were defined: ASL sentences, ASL word lists, English sentences, and English word lists. For all conditions, the regressors were convolved with a canonical hemodynamic response function (Friston et al., 1994). Voxel-wise repeated-measures t-tests were performed using the estimated parameters of the regressors (beta weights) to compare across conditions. For group analysis, a random effects analysis was conducted by incorporating the individual data from the first-level analysis of each task. The group results were overlaid onto the group's averaged normalized anatomical image using the SPM extension tool bspmVIEW (http://www.bobspunt.com/bspmview/). Significant clusters were identified using non-parametric permutation and randomization techniques via the SnPM13 toolbox for SPM12 (Nichols and Holmes, 2002; http://warwick.ac.uk/snpm) and a voxel-wise FWE-corrected threshold of p < 0.05.
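An equivalent first-level model can be sketched with nilearn's GLM tools in place of SPM8; the image paths are hypothetical, the events tables are of the form sketched in the design section above, and the group-level permutation step is indicated only in a comment.

```python
# Hedged nilearn equivalent of the single-subject GLM; paths are hypothetical,
# and events_run1/events_run2 are DataFrames with onset/duration/trial_type.
from nilearn.glm.first_level import FirstLevelModel

model = FirstLevelModel(t_r=2.0,
                        hrf_model="spm",       # canonical HRF, as in the text
                        drift_model="cosine",
                        high_pass=1.0 / 128)   # 1/128 Hz high-pass filter
model = model.fit(["sub01_run1_preproc.nii", "sub01_run2_preproc.nii"],
                  events=[events_run1, events_run2])

# Beta-weight contrast analogous to the paper's voxel-wise t-tests.
z_map = model.compute_contrast("English_sentences - English_word_lists")

# Group analysis would pass one contrast image per subject to a permutation
# test (e.g., nilearn.glm.second_level.non_parametric_inference), analogous
# to the SnPM13 analysis described above.
```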
To further investigate how L1 English sentence regions respond to L2 ASL sentences, regions of interest (ROIs) were functionally defined based on the contrast of English sentences − English word lists. The amplitude of the response within each ROI for each condition was then plotted and compared with two-way ANOVAs (language × stimulus type).
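The ROI analysis can be sketched as follows, assuming per-subject, per-condition beta images and a binary ROI mask saved to disk (all file names hypothetical); the 2 × 2 repeated-measures ANOVA uses statsmodels.

```python
# Sketch of the ROI analysis: mean beta per subject/condition within a
# functionally defined ROI, then a 2 x 2 repeated-measures ANOVA.
import pandas as pd
from nilearn.maskers import NiftiMasker          # nilearn >= 0.9
from statsmodels.stats.anova import AnovaRM

masker = NiftiMasker(mask_img="lIFG_sentence_roi.nii")  # hypothetical ROI mask

rows = []
for subj in range(1, 21):
    for lang in ("English", "ASL"):
        for stim in ("sentences", "word_lists"):
            beta = f"sub-{subj:02d}_{lang}_{stim}_beta.nii"  # hypothetical paths
            rows.append({"subject": subj, "language": lang, "stimulus": stim,
                         "activation": masker.fit_transform(beta).mean()})

df = pd.DataFrame(rows)
# Two-way repeated-measures ANOVA (language x stimulus type), as in the text.
print(AnovaRM(df, depvar="activation", subject="subject",
              within=["language", "stimulus"]).fit())
```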
Lastly, we conducted correlation analyses to examine the relationship between activation to ASL sentences and an individual's ASL proficiency score. To do so, we calculated a Pearson r-value in each voxel between the beta values and the proficiency scores across subjects. Note that this correlational approach is most likely underpowered in our sample, as power curve estimates computed by Yarkoni and Braver (2010) indicate that a sample size of approximately 50 is needed for sufficient power (0.8) while maintaining a conservative probability of false positives in typical fMRI experiments. Nonetheless, we conducted the correlational analyses because this is a difficult population from which to achieve an adequate sample size. Also note that we used a more relaxed statistical threshold for these results (voxel-wise uncorrected p < 0.005, with a cluster size threshold of 10 voxels).
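A minimal sketch of the voxel-wise correlation follows, assuming the ASL-sentence beta values have already been extracted into a subjects-by-voxels array (the arrays below are placeholder data):

```python
# Voxel-wise Pearson correlation between ASL-sentence betas and proficiency;
# the arrays are random placeholders standing in for the real extracted data.
import numpy as np
from scipy import stats

betas = np.random.randn(20, 50000)   # placeholder: 20 subjects x voxels
proficiency = np.random.rand(20)     # placeholder: one overall score per subject

n_voxels = betas.shape[1]
r = np.empty(n_voxels)
p = np.empty(n_voxels)
for v in range(n_voxels):
    r[v], p[v] = stats.pearsonr(betas[:, v], proficiency)

# Relaxed exploratory threshold as in the text (voxel-wise p < 0.005,
# uncorrected); cluster-size thresholding would follow in image space.
suprathreshold = p < 0.005
```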
Results
ASL Sentence Comprehension Behavioral Measure
ASL sentence comprehension behavioral task performance (proportion correct) was as follows: semantic scores ranged from 0.23 to 0.72 (M = 0.54). Syntactic scores ranged from 0.12 to 0.8 (M = 0.52). There was a strong positive correlation between the semantic and syntactic proficiency scores, r = 0.807, p < 0.001 (Figure 1). Thus, in subsequent fMRI analyses that correlate activations with ASL sentence comprehension abilities, an overall proficiency score (i.e., an average of semantic and syntactic scores for each participant) was used.
Figure 1. Scatterplot of the syntactic and semantic scores on the behavioral ASL sentence comprehension task for each participant.
fMRI Results: L1 English
As observed in previous studies of sentence comprehension (Humphries et al., 2001; Vandenberghe et al., 2002; Rogalsky and Hickok, 2009; Rogalsky et al., 2011), English sentences and word lists each activated large bilateral swaths of cortex in the frontal, temporal, and inferior parietal lobes, as well as visual cortex (p < 0.05, FWE-corrected; Figure 2A, Table 1). For English sentences, widespread significant activations were identified in the right middle frontal gyrus (MFG), right angular gyrus (AG), bilateral precentral gyrus (PrCG), inferior frontal gyri (IFG, predominantly in the pars opercularis), superior temporal gyrus (STG), middle temporal gyrus (MTG), intraparietal sulcus (IPS), lingual gyrus (LG), inferior occipital gyrus (IOG), middle occipital gyrus (MOG), fusiform gyrus (FG), calcarine sulcus, and supplementary motor area (SMA). In the English word list condition, significant activations were found in the bilateral IFG (pars opercularis), PrCG, MFG, IOG, MOG, LG, FG, SMA, and superior parietal lobule (SPL), as well as the right MTG, AG, and insula (Figure 2B, Table 1).
Figure 2. Maps of significant activations for (A) English sentences, (B) English word lists, (C) ASL sentences, and (D) ASL word lists compared to rest, FWE corrected, p < 0.05.
Clusters more active for rest than for the English sentences were identified bilaterally in the superior frontal gyrus (SFG), AG, cuneus, cingulate gyrus (CG), medial frontal gyrus (MeFG), and pons, in the left MOG and STG, and in the right MFG (Figure 2A; Table 1). Clusters more active for rest than for the English word lists were found in the bilateral CG, cuneus, and medial occipital lobe, including the LG and MOG, as well as in a small cluster (4 voxels) in the right STG (Figure 2B; Table 1).
A voxel-wise t-test of English sentences vs. English word lists revealed increased activation for English sentences compared to word lists in the bilateral temporal lobes in the STG and MTG (limited to the anterior temporal lobe on the right; on the left, activation spanned the length of the temporal lobe), as well as in the left IFG (pars opercularis and pars triangularis) and MeFG (Figure 3A; Table 1). Clusters more active for the English word lists than the English sentences were identified in the bilateral CG and in occipital regions including the MOG, superior occipital gyrus (SOG), and FG, as well as in the right inferior parietal lobule (IPL) (Figure 3B).
Figure 3. Contrast results for (A) English sentences > English word lists, (B) English word lists > English sentences, (C) ASL sentences > ASL word lists, (D) ASL word lists > ASL sentences but no voxels survived correction, (E) conjunction of ASL sentences > rest and English sentences > rest, and (F) ASL sentences vs. English sentences; in (F), warmer colors indicate greater activation for ASL sentences and cooler colors indicate greater activation for English sentences (FWE corrected, p < 0.05).
fMRI Results: L2 ASL
Large swaths of activation were found in response to ASL sentences compared to rest in the bilateral visual cortex, extending into both posterior superior temporal and inferior parietal cortex, as well as the SPL and postcentral gyrus. Bilateral frontal regions including the IFG (pars orbitalis, opercularis, and triangularis), MFG, PrCG, and SMA also contained significant activations, in addition to the bilateral hippocampus and left pallidum (p < 0.05, FWE-corrected; Figure 2C, Table 2). ASL word lists compared to rest activated a similar set of regions as the ASL sentences, including the bilateral SPL, IPL, cuneus, PrCG, IFG, MFG, insula, SOG, inferior temporal gyrus (ITG), FG, hippocampus, and SMA (Figure 2D, Table 2).
Clusters more active during rest than during ASL sentences were identified in the bilateral AG, STG, MTG, CG, MFG, MeFG, hippocampus, and insula, as well as the right putamen (Figure 2C, Table 2). Clusters more active during rest than during ASL word lists included portions of the bilateral STG, Heschl's gyrus, AG, MFG, SFG, MeFG, CG, and precuneus, as well as the left caudate (Figure 2D, Table 2).
A voxel-wise t-test of ASL sentences vs. ASL word lists identified clusters in the bilateral STG, MTG, and SMA, in occipital cortex with peaks in the FG, IOG, precuneus, and calcarine sulcus, as well as in the left pre- and postcentral gyri (Figure 3C, Table 2). No regions were more active in response to ASL word lists than to ASL sentences (Figure 3D).
fMRI Results: L2 ASL vs. L1 English
A conjunction map of the brain regions significantly activated by ASL sentences and by English sentences was generated to describe shared vs. distinct sentence processing regions for the two types of sentences (Figure 3E). Areas of overlap were found bilaterally in the frontal lobe (bilateral IFG, predominantly pars opercularis, MFG, and PrCG), posterior superior and middle temporal gyri, and bilateral occipital visual cortex. Notably, the spatial extent of activation was greater for ASL sentences than for English sentences in the inferior parietal lobe, superior parietal lobule, and inferior frontal gyrus. The response to English sentences compared to ASL sentences extended more anteriorly along the left superior temporal sulcus, although ASL sentences did elicit significant activation in a small cluster (45 voxels) in the left anterior temporal lobe (Figure 3E).
To further examine the differences between the neuroanatomy supporting the comprehension of ASL sentences and English sentences, we also directly compared the activation between these two conditions in a voxel-wise t-test (Figure 3F; Table 3). ASL sentences yielded significantly greater activation than English sentences in bilateral SPL, occipital-inferior temporal cortex including SOG, ITG and the calcarine sulcus, IFG (both pars opercularis and pars triangularis in the left hemisphere), insula, and SMA, as well as in the left PrCG, and right hippocampus. English sentences yielded significantly greater activation than ASL sentences in bilateral MTG, AG, IPL, and occipital regions including peaks in the LG and IOG (Figure 3F; Table 3).
Response of L1 English Regions to L2 ASL Sentences
To explore how ASL sentences engage English sentence processing regions, we plotted the mean percent signal change during each condition for the regions identified as significantly more activated for English sentences than word lists (p < 0.05, FWE-corrected). As described in section fMRI Results: L1 English, the regions identified were in the left inferior frontal gyrus, left anterior and posterior superior temporal gyrus, and right anterior temporal gyrus (Figure 4). A 2 × 2 ANOVA was computed for each ROI. The results are as follows (α = 0.05): The left anterior temporal ROI (Figure 4B) exhibited a significant main effect of stimulus type, F(1, 19) = 67.4, p < 0.001; English and ASL sentences elicited greater activation than word lists in their respective languages. The main effect of language was not significant, F(1, 19) = 3.5, p = 0.07, and there was no significant interaction [F(1, 19) = 1.75, p = 0.20]. The left posterior STG ROI (Figure 4C) exhibited a main effect of stimulus type [sentences > word lists; F(1, 19) = 98.7, p < 0.001] and a main effect of language [English > ASL; F(1, 19) = 7.5, p = 0.01]; the interaction was not significant (p = 0.24). These findings suggest that this left posterior temporal region is sensitive to sentence structure in both L1 and L2, but with a preference for L1. The left IFG ROI (Figure 4D) exhibited a significant main effect of stimulus type [F(1, 19) = 28.2, p < 0.001], with sentences activating the ROI more than word lists, as well as a significant main effect of language [F(1, 19) = 18.8, p < 0.001], with ASL activating the ROI more than English; the interaction was not significant (p = 0.23). The right anterior temporal ROI (Figure 4E) exhibited a significant main effect of stimulus type [sentences > word lists; F(1, 19) = 54.3, p < 0.001], but no significant main effect of language (p = 0.21) or interaction (p = 0.33). To summarize these ROI results: no ROI showed an interaction; only the left pSTG ROI was activated significantly more for English than for ASL; the reverse was found in the left IFG, where ASL led to greater activation than English; and all of the ROIs were significantly more activated for sentences than for word lists. These findings suggest that L1 English sentence comprehension networks are also engaged during L2 ASL sentence comprehension, albeit to varying degrees.
Figure 4. (A) ROIs defined by English sentences > English word lists (FWE corrected, p < 0.05). (B–E) Graphs of the response to each condition of select ROIs depicted in (A). Error bars represent standard error of the mean.
Exploratory fMRI Results: ASL Proficiency
To determine whether the engagement of English sentence processing regions during ASL sentence comprehension is dependent on level of ASL proficiency, we conducted correlation analyses to examine the relationship between activation to ASL sentences and an individual's ASL proficiency score. Although this approach is likely underpowered in our sample (Yarkoni and Braver, 2010), we nonetheless present results as this is a difficult population from which to achieve an adequate sample size. The following results are therefore exploratory in nature and null results should be interpreted with great caution.
A positive correlation between ASL proficiency score and activation in response to ASL sentences compared to rest was found in the left IFG (pars opercularis; peak at −58, 14, 26), left IFG (pars orbitalis; −44, 34, −2), left posterior STG (−46, −40, 16), and right MFG (58, 4, 28) (p < 0.005, minimum cluster size = 10 voxels; Figure 5A). These positive correlations suggest that greater activation of these regions while viewing ASL sentences is associated with higher levels of ASL proficiency. Note that all of these regions identified as related to ASL proficiency are a subset of those found to be activated by English sentences compared to rest.
Figure 5. (A) Regions in which activation to ASL sentences > rest is correlated with ASL proficiency; arrows point to regions whose activations positively correlate with ASL proficiency. (B) Regions in which activation to ASL sentences > English sentences is correlated with ASL proficiency. Warmer colors indicate a positive correlation, cooler colors indicate a negative correlation (voxel-wise uncorrected p < 0.005, cluster size threshold = 10 voxels). Arrows A and B point to regions whose activations positively correlate with ASL proficiency; arrows C–E point to regions that negatively correlate.
A second correlational analysis was conducted to focus more specifically on ASL sentence comprehension: a correlation was computed between greater activation to ASL sentences compared to English sentences and an individual's ASL proficiency score. Positive correlations between ASL proficiency and activation in response to ASL sentences (minus English sentences) were observed in the left IFG (pars opercularis; −58, 6, 22) and left posterior STG (−64, −42, 14) (Figure 5B). Negative correlations were observed in the left temporal-occipital cortex (extending into the fusiform gyrus; −38, −68, −14), left superior frontal gyrus (−14, 60, 28), and left supramarginal/angular gyrus (−46, −68, 38) (Figure 5B). These negative correlations indicate that greater activation for ASL vs. English in these regions is associated with lower levels of ASL proficiency. The regions with positive correlations to ASL proficiency also are activated by English sentences, but the regions with negative correlations do not overlap with the activations to English sentences.
Discussion
The present study investigated the brain regions involved in sentence processing of ASL in novice L2 adult hearing signers. Our aim was to determine how L1 spoken language sentence processing networks respond to L2 ASL sentences. Overall our results combined with previous work indicate that ASL syntactic processing engages brain regions that are highly overlapping with typical spoken language L1 language networks, independent of proficiency or age of acquisition. In other words, modality alone does not substantially modulate the location of the neural substrates of sentence processing resources, which leads us to conclude that spoken L1 language processing networks can quickly adapt to L2 syntactic structure that is highly unrelated to the syntactic structure of L1.
These findings coincide with separate previous functional neuroimaging studies of spoken language bilingualism and early bimodal bilingualism, suggesting that age of acquisition and modality may not necessarily impact the macro-level neural organization of L2 (Abutalebi et al., 2001; Emmorey et al., 2016). However, each of the L1 sentence comprehension regions was modulated differently by L2 ASL vs. L1 English sentences, providing valuable insights into the nature of each region's contributions to L1 sentence processing (discussed below). Our exploratory analyses of L2 ASL proficiency indicate that greater ASL proficiency is positively correlated with greater activation of both left frontal and posterior temporal sentence processing regions, and negatively correlated with activation in visual and visual-spatial processing regions. The implications of our findings regarding the response properties and flexibility of sentence-processing networks are discussed below.
Contributions of Age of Acquisition and Proficiency vs. Modality to ASL Sentence Processing
In the present study alone, it is not possible to differentiate between the contributions of age of acquisition, proficiency, and language modality to our findings. However, the relative wealth of literature in spoken language bilingualism investigating effects of age of acquisition and proficiency, and (to a lesser degree) in early bimodal bilingualism investigating modality effects in the neural correlates of sentence processing, allows us to interpret our results in a meaningful way to better understand the nature and response properties of sentence processing regions.
In studies of early bimodal bilinguals, there is a high degree of overlap between the neural correlates of sign and spoken sentence comprehension (Neville et al., 1997; Bavelier et al., 1998) suggesting that the neural resources supporting sentence comprehension are largely independent of modality. In our late L2 ASL signers, we also find a high degree of overlap, particularly in inferior frontal and posterior temporal cortex, suggesting that the functional plasticity of spoken language sentence processing regions also is not dependent upon an early age of acquisition and/or high degree of proficiency. Based on previous work in spoken language bilinguals, we expect this overlap to further increase as proficiency improves, even in late L2 learners (Rossi et al., 2006; Tanner et al., 2013).
The overlap we found in bilateral superior temporal cortex for sentence-specific processing (vs. word lists) in both L1 English and L2 ASL, particularly in pSTG, also speaks to the adaptability of these well-investigated L1 sentence processing regions. Previous work in spoken languages has demonstrated that they are engaged during sentence processing regardless of age of acquisition or proficiency (Perani et al., 1998; Jeong et al., 2007), and bilateral STG involvement also has been found in ASL acquired from an early age (Neville et al., 1997). Our study suggests that even in late L2 learning, bilateral superior temporal cortex is responsive to syntactic structures in a different modality.
The age of L2 acquisition is quite similar across our participants, as they all were young adult undergraduates. Thus, any differences detected in our exploratory proficiency correlational analyses are likely due to proficiency differences, not age of acquisition. While there is some variability in proficiency in our sample (Figure 1), future longitudinal studies certainly are needed to better understand how L2 proficiency would affect our findings. Within our sample, however, an interesting pattern of activations in the left hemisphere was found to correlate with ASL proficiency: greater activation to ASL sentences than to English sentences in Broca's area and the pSTG (i.e., regions activated by English sentences in the present study and in numerous previous studies) was associated with greater ASL proficiency. Conversely, greater activation to ASL sentences compared to English sentences in visual-spatial occipital and parietal left hemisphere regions was significantly correlated with lower ASL proficiency. This intra-hemispheric difference in the contributions of visual-spatial regions and L1 language regions as a function of ASL proficiency in our late L2 signers suggests that their ASL exposure may be sufficient to expand the response properties of L1 spoken language regions to L2 ASL. The alternative explanation would be that our more proficient ASL signers naturally engaged L1 language regions in response to ASL even prior to the development of ASL proficiency. However, this explanation is unlikely to be correct: previous work indicates that in response to ASL vs. non-linguistic biological motion, non-signers exhibit significantly more activation in visual processing regions such as the occipito-temporo-parietal junction, whereas signers activate known perisylvian language regions significantly more (Malaia et al., 2012).
In the next three sections, we review in more detail the findings in the three regions most frequently implicated in sentence processing in the neurobiology of language literature more generally.
Broca's Area
A portion of Broca's area, the pars opercularis, was found to be more activated by English sentences compared to English word lists. This finding coincides with numerous previous neuroimaging studies of L1 spoken languages that report greater Broca's area activation for reading sentences than word lists (Fedorenko et al., 2010, 2011, 2012a,b,c; Fedorenko and Kanwisher, 2011; Blank et al., 2016), as well as with previous neuroimaging and electrophysiological studies of L1 ASL sentence comprehension (Neville et al., 1997; Newman et al., 2002, 2010a). The exact nature(s) of the role(s) of Broca's area in sentence comprehension remains highly debated, with prominent hypotheses suggesting syntactic-specific processes (Grodzinsky, 2000; Grodzinsky and Santi, 2008), verbal working memory (Just et al., 1996; Kaan and Swaab, 2002; Rogalsky et al., 2008; Pettigrew and Hillis, 2014), hierarchical structure building (Friederici, 2009; Makuuchi et al., 2012), and cognitive control (Novick et al., 2005) as possible candidates. The present dataset may provide valuable insights into this debate: our ROI plots indicate that the portion of Broca's area engaged in English sentence comprehension (i.e., English sentences > English word lists) also shows a preference for ASL sentences compared to word lists. In fact, the English sentence ROI in Broca's area is significantly more activated by ASL sentences than by English sentences (Figure 4D). This increased activation for ASL sentences compared to English sentences suggests that this region is not contributing via syntactic movement-specific resources (e.g., Grodzinsky and Santi, 2008), coinciding with previous findings within spoken languages (Rogalsky et al., 2015), but rather that this portion of Broca's area is more likely contributing to sentence processing as a cognitive resource, one that would be taxed more by the increased processing demands of comprehending the less familiar L2 ASL sentences. ASL sentences also activated a cluster in the right hemisphere anatomical homolog of Broca's area, the right IFG, significantly more than English sentences. The right IFG is frequently implicated in L2 spoken language processing (Sebastian et al., 2011; Wei et al., 2015), and increased right IFG activation during language tasks more generally is thought to reflect an increase in cognitive demands and effortful processing (Prat and Just, 2010; Prat et al., 2011; Gan et al., 2013; Mack et al., 2013).
In our exploratory analysis of the neural correlates of L2 ASL proficiency, we found that greater activation in Broca's area is significantly correlated with greater ASL proficiency. At first glance, this finding may seem counter to previous findings that IFG involvement is associated with lower L2 proficiency (e.g., Rüschemeyer et al., 2005, 2006). We suspect that the previously reported pattern would be more likely in the present study if our sample included a greater range of ASL proficiency. However, our participants were all novice ASL learners who first began to have substantial exposure to ASL in adulthood; it is possible that in our sample the participants who engaged in more effortful processing of the ASL stimuli also engaged in more effortful processing during ASL instruction, thereby attaining relatively higher proficiency. This idea would align with Leonard et al.'s (2013) finding of greater left and right IFG activity for ASL words than for written or spoken English words amongst the top achievers in the ASL classes they sampled from, and with Williams et al.'s (2016) finding that greater activation in Broca's area during an ASL phoneme categorization task was seen after 45 h of ASL instruction compared to before any ASL instruction. While the present and previous findings of a greater response in Broca's area positively correlating with L2 proficiency could be due to intrinsic, individual differences that support greater L2 learning success, it is more likely that these findings reflect the involvement of Broca's area in the learning of syntactic rules. This explanation coincides with findings in novel languages: the IFG has been found to be activated during the successful learning of syntactic rules in artificial grammars (Opitz and Friederici, 2004; Friederici et al., 2006). Bilingualism also requires a degree of cognitive control between the two languages (Kroll and Tokowicz, 2005; Abutalebi, 2008). Our relatively novice L2 ASL learners may require even greater control than a fluent bilingual; thus these Broca's area activations may reflect more effortful and less automatic processing of the second language (Friederici, 2006). As a learner becomes more proficient (beyond the 1–2 years of coursework taken by our participants), the movement between ASL and English may become more automatic or fluid, and thus the need for cognitive control may decline (Emmorey et al., 2008). A longitudinal study would be needed to track this possible rise and fall of Broca's area involvement in L2 ASL sentence processing.
Anterior Temporal Lobe
Our analyses implicate the anterior temporal lobe (ATL) more in the perception of both L2 ASL and L1 English sentences than in their respective word list comparison conditions (Figures 3A,C). These findings coincide with numerous previous functional neuroimaging studies of spoken languages in which greater ATL activation was seen for reading and hearing sentences in comparison to unstructured lists of words (Humphries et al., 2001; Vandenberghe et al., 2002; Rogalsky and Hickok, 2009; Rogalsky et al., 2011). Lesion studies also implicate the left anterior temporal lobe in sentence comprehension (in addition to left posterior temporal cortex, discussed below) (Thothathiri et al., 2012; Magnusdottir et al., 2013; Pillay et al., 2014). Previous studies of both deaf and hearing native ASL signers also have found the ATL to be sensitive to ASL sentence structure (Neville et al., 1997; cf. MacSweeney et al., 2006; Newman et al., 2010b).
Established sentence-processing regions of the ATL adapted to the new modality's sentence structures: the bilateral ATL ROIs defined by the contrast of English sentences − English word lists both exhibited a main effect of stimulus type, with sentences activating these regions more than word lists (not surprisingly, at least for English, given the defining contrast). It is notable, though, that in the right ATL ROI there was no main effect of language, and in the left ATL ROI the main effect of language was not significant but trended toward a preference for English (p = 0.07; Figures 4B,E). These findings suggest that L1 sentence-processing regions in the bilateral ATL do adapt to L2 ASL sentence structure, even in late L2 learners.
It is worth mentioning that the right ATL sentence ROI is more anterior than the left ATL sentence ROI, thus it is possible that the right ROI's response reflects language-related semantic processes in the temporal pole while the more posterior left ATL ROI's response reflects sensitivity to sentence-structure. Previous studies of both native sign and spoken languages do find ATL regions sensitive to semantic modulations (Petitto et al., 2000; MacSweeney et al., 2008; Chan et al., 2011; Mayberry et al., 2011; Visser and Lambon Ralph, 2011; Leonard et al., 2013), and these semantic effects in the ATL also are evident in L1 and L2 of spoken language bilinguals (Leonard et al., 2010, 2011). Regardless of the exact sentence-processing resource driving the right ATL's similar response to both L1 English and L2 ASL sentences, it is clear that L1 English sentence-processing resources in the ATL are adapting to the visual-spatial nature of ASL sentence structure.
Posterior Superior Temporal Lobe
The left posterior superior temporal gyrus (pSTG) is frequently implicated in sentence processing and is known to be sensitive in spoken languages to syntactic structure differences (Humphries et al., 2006; Thothathiri et al., 2012; Griffiths et al., 2013; Wilson et al., 2014) as well as semantic and prosodic manipulations (Humphries et al., 2005, 2006). Numerous other studies also implicate portions of the pSTG in syllable-level phonological, semantic, and auditory-motor integration processes (see Hickok and Poeppel, 2007). Thus, it is evident from previous work in spoken languages that the pSTG is a functionally diverse region in regards to its contributions to speech processing. In the present study we found left pSTG regions to be more activated by English sentences than by word lists and by ASL sentences than by word lists (ASL sentences also selectively engaged the right pSTG). The left pSTG ROI more activated by English sentences than word lists demonstrated a main effect of stimulus type (both sentence conditions > both word list conditions) as well as a main effect of language (English > ASL). Together these findings may reflect that a portion of the pSTG is engaged in sentence-level syntactic processing regardless of proficiency or modality, but with a preference for L1. It also is possible that our findings reflect subvocal rehearsal or translation of the ASL sentences to facilitate comprehension, thereby engaging the left pSTG, in which there are known to be subregions involved in phonological processing and verbal working memory (Buchsbaum et al., 2011). It also is possible that the preference for English over ASL in the left pSTG reflects a portion of this region being specific to spoken languages, in that it is most sensitive to phonological processing, coinciding with previous work implicating the pSTG more in orthographically transparent languages (i.e., those with more direct grapheme-to-phoneme conversions) than in less transparent languages (e.g., Italian vs. English; Paulesu et al., 2000). Future work is needed to explore these possibilities, but altogether our findings support the flexibility of the pSTG's sentence-processing regions to the visual-spatial syntactic structures learned during late L2 ASL acquisition, as well as the likely specificity of pSTG phonological processing regions to spoken language processing.
In our exploratory analysis of activations correlated with ASL proficiency, activation in the left posterior superior temporal gyrus to ASL sentences was associated with greater ASL proficiency. Recent neuroimaging studies of spoken L2 learning also implicate the pSTG in L2 proficiency. For example, Chai et al.'s (2016) resting-state fMRI study of native English-speaking adults learning French found that greater resting-state functional connectivity prior to L2 learning between the left posterior superior temporal gyrus and the anterior insula/frontal operculum (AI/FO, a region adjacent to Broca's area) was associated with eventual greater proficiency in L2 lexical retrieval. Kuhl et al.'s (2016) diffusion tensor imaging study of Spanish-English bilinguals also implicates left superior temporal/inferior parietal regions; they found that higher density and lower diffusivity of the white matter pathways underlying these areas were significantly correlated with more years of L2 experience. Thus, it is possible that the pSTG activations associated with increased L2 ASL proficiency reflect greater vocabulary or lexical knowledge.
Parietal Lobe
Previous studies of early bimodal bilinguals and deaf signers have both found parietal regions activated more by sign than by spoken language (Newman et al., 2002; Emmorey et al., 2005; Pa et al., 2008). The authors of these studies have suggested that their parietal findings reflect a sign language-specific resource that may be due to modality-specific organization of sensory-motor integration cortex in the parietal lobe (Pa et al., 2008). Our present findings coincide with this modality-specific organization of parietal cortex: we found bilateral superior parietal and left inferior parietal/postcentral regions to be more activated by L2 ASL than by L1 English sentences, as well as more inferior bilateral parietal regions showing the reverse pattern (L1 English > L2 ASL sentences). Our parietal lobe results suggest that the "ASL-specific" parietal lobe involvement identified in previous studies of early bimodal bilinguals does not seem to be dependent upon age of acquisition. The increased response to ASL sentences compared to English sentences in the bilateral parietal lobes may be related to the spatial hand movements used in the ASL sentences to convey grammatical information (subject, object, etc.) (Emmorey et al., 2005), movements that are comparatively absent in the word lists. The absence of right hemisphere parietal involvement for ASL in our results coincides with Newman et al.'s (2002) finding that this neural correlate of ASL is related to an early age of acquisition.
Hemispheric Laterality of ASL Sentence Processing
One area of ongoing debate regarding the neurobiology of sign language is the right hemisphere's involvement in sign languages compared to spoken languages. There is evidence to suggest that the right hemisphere is more involved in sign languages than in spoken languages, possibly because of the visual-spatial nature of sign languages, for which the right hemisphere is thought to be specialized (Bavelier et al., 1998; Newman et al., 2002; Emmorey et al., 2005). More bilateral activation has been reported during sign language comprehension and production than during spoken language processing (Emmorey et al., 2014). However, right hemisphere damage resulting in visual-spatial deficits does not necessarily impair the sign language abilities of native deaf signers (Hickok et al., 1998a).
We found that L1 English and L2 ASL sentences both activated large bilateral fronto-temporo-parietal networks, as seen in numerous previous studies of spoken language bilingualism, mostly independent of age of acquisition or proficiency (Perani and Abutalebi, 2005; Ge et al., 2015), as well as in studies of both spoken and signed languages in monolinguals (Hickok and Poeppel, 2000, 2004, 2007; MacSweeney et al., 2008). We did not find any evidence of a right hemisphere preference for L2 ASL vs. L1 sentences. This conflicts with some previous studies of both spoken language bilingualism and bimodal bilingualism that have found greater right hemisphere involvement for L2 than L1, particularly with later age of acquisition and/or lower L2 proficiency, as in our participant sample (Dehaene et al., 1997; Meschyan and Hernandez, 2006). One possible reason for this discrepancy is task: ours was a passive viewing/reading task, whereas much of the previous work implicating ASL-specific right hemisphere involvement used some type of active task (e.g., anomaly detection, sign/word repetition, recognition test; Newman et al., 2002; Pa et al., 2008; Emmorey et al., 2014), which might lead to increased activation outside of the left-lateralized language network, as greater effort during a variety of language tasks has been found to elicit greater right hemisphere recruitment (Just and Varma, 2002).
There was only one right hemisphere activation cluster, in the right middle frontal gyrus adjacent to the right IFG, in which activation in response to ASL sentences was correlated with ASL proficiency. This finding may reflect additional recruitment of cognitive resources with greater ASL proficiency in our cohort of novice ASL learners, as the right IFG and adjacent prefrontal regions are frequently implicated in increased task-related cognitive demands and greater effort during language tasks (Just and Varma, 2002; Goghari and MacDonald, 2009; Prat et al., 2011) and in L2 vs. L1 tasks (Wartenburger et al., 2003; Reiterer et al., 2011). Our mostly null results in the right hemisphere related to proficiency should be interpreted with caution, as our study was likely underpowered for correlational analyses. These null results also may be related to the nature of our ASL stimuli: in deaf signers, Newman et al. (2010b) report greater right hemisphere involvement for ASL sentences with narrative cues (often involving head movements and facial expressions), which our ASL sentences mostly lacked. Another consideration is that stimulus-locked activation measurements may not best capture right hemisphere contributions to ASL: functional connectivity analyses indicate that signers have greater functional connectivity between right temporal and inferior parietal regions than non-signers (Malaia et al., 2014), suggesting a more efficient involvement of the right hemisphere in signers' processing of ASL that may not be fully reflected in the activation differences quantified in our subtraction analyses. It also is possible that the right hemisphere simply is not engaged in ASL processing, even as a function of proficiency, because our cohort consists of relatively novice learners; this would coincide with Malaia et al.'s (2012) finding that the right hemisphere is not activated more by ASL signs than by non-linguistic gestures in non-signers.
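The methodological point here, that a subtraction contrast and a functional connectivity measure can dissociate, can be illustrated with a short sketch. The time series, block labels, and ROI names below are hypothetical, and a real analysis would model the hemodynamic response rather than compare raw block means.

```python
# A minimal sketch contrasting a subtraction-style activation measure with
# seed-based functional connectivity, computed from the same ROI time series.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_trs = 200

# Hypothetical preprocessed time series (arbitrary units) from two ROIs:
# right posterior temporal cortex and right inferior parietal lobule.
shared = rng.normal(size=n_trs)                 # common fluctuation
r_temporal = shared + rng.normal(0, 1, n_trs)
r_parietal = shared + rng.normal(0, 1, n_trs)

# Hypothetical condition labels: True during ASL-sentence blocks.
asl_blocks = np.arange(n_trs) % 40 < 20

# Subtraction-style measure: mean signal difference between conditions.
activation_diff = r_temporal[asl_blocks].mean() - r_temporal[~asl_blocks].mean()

# Functional connectivity: correlation of the two ROI time series, which
# can be strong even when condition-wise activation differences are small.
fc_r, _ = stats.pearsonr(r_temporal, r_parietal)

print(f"activation difference = {activation_diff:.2f}, connectivity r = {fc_r:.2f}")
```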
Future Directions
Our ASL sentences contained two types of syntactic manipulations: inflectional morphology (as indicated by spatial relations) and word order. Future studies are needed to determine whether these two properties differentially affect L2 ASL proficiency and the brain regions recruited during ASL sentence comprehension in novice signers. Previous work in deaf native signers suggests that ASL inflectional morphology and word-order grammatical information are supported by two distinct but overlapping networks: Newman et al. (2010a) found that ASL inflectional morphology particularly engaged bilateral ATL, inferior frontal, and basal ganglia regions (i.e., regions known to be engaged in combinatorial and syntactic processing in spoken languages), while ASL sentences relying on word order to convey critical grammatical information activated regions often associated with working memory, including the dorsolateral prefrontal cortex (including the inferior frontal gyrus) and the inferior parietal lobe. Differentiating between inflectional morphology and word-order contributions may also provide insight into the contributions of medial structures, including the basal ganglia and hippocampus, that in the present study were more engaged by ASL than by English stimuli. These subcortical structures' greater involvement in ASL than English may reflect the increased memory retrieval, sequencing, or combinatorial processes that L2 sentence comprehension requires (Newman et al., 2010a; Leonard et al., 2011), and it is likely that the different grammatical properties recruit these resources differently.
Another potentially meaningful extension of the present study would be a longitudinal study with a similar cohort. The present study collected data only post-ASL instruction; thus, it is unknown how our subjects' brains differed prior to ASL exposure. Ample previous work indicates that the activation patterns we found in response to ASL would be unlikely to exist prior to any ASL exposure, as the subjects would extract very little, if any, semantic or syntactic information from the perceived "gestures" and thus would not fully recruit language networks (Malaia et al., 2012; Leonard et al., 2013; Kovelman et al., 2014). However, there is emerging evidence from spoken L2 learning studies that the integrity of functional and structural brain networks prior to L2 learning is correlated with eventual L2 proficiency (Chai et al., 2016; Kuhl et al., 2016). Thus, a future longitudinal study, with a larger sample size to increase the power of the correlation analyses and at least two data points (pre- and post-L2 learning), is needed to better characterize the neural signatures of L2 sentence comprehension and ASL proficiency.
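The sample-size concern can be made concrete with the standard Fisher z power approximation for a Pearson correlation; the target effect sizes below are assumptions chosen for illustration, not estimates from the present data.

```python
# A minimal sketch of sample-size planning for a correlation analysis,
# using the Fisher z approximation: n = ((z_alpha + z_beta) / atanh(r))^2 + 3.
import numpy as np
from scipy import stats

def n_for_correlation(r, alpha=0.05, power=0.80):
    """Approximate sample size to detect a Pearson correlation of r
    in a two-tailed test at the given alpha and power."""
    z_alpha = stats.norm.ppf(1 - alpha / 2)
    z_beta = stats.norm.ppf(power)
    return int(np.ceil(((z_alpha + z_beta) / np.arctanh(r)) ** 2 + 3))

# Assumed targets: a moderate brain-behavior correlation (r = 0.5) needs
# roughly 30 subjects; a smaller r = 0.3 needs roughly 85.
for r in (0.5, 0.3):
    print(f"r = {r}: n ~ {n_for_correlation(r)}")
```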
Conclusion
The present fMRI study examined the functional neuroanatomy of ASL sentence comprehension in adult learners of ASL, particularly how L1 spoken language sentence comprehension networks respond to L2 ASL sentences. We replicate previous work in native signers as well as in late L2 learners of spoken languages, in that there is a high degree of overlap in the functional neuroanatomy of L1 and L2 sentence comprehension, despite L1-L2 differences in modality and proficiency. Our results, with previous early bimodal and late spoken language bilingual work providing context, indicate that L1 spoken language sentence-processing regions can adapt to support syntactic structures in a different modality beyond the critical language periods of childhood. Within the L1 sentence processing network, Broca's area, the left anterior temporal lobe, and the left posterior superior temporal gyrus each responded differently to L2 ASL sentences: L1 sentence resources in Broca's area were significantly more responsive to L2 sentences, left ATL regions exhibited no significant difference between L1 and L2 sentences, and posterior superior temporal regions exhibited a preference for L1 over L2 sentences. We suggest that the engagement of Broca's area in sentence processing may be related to the increased cognitive demands associated with L2 sentences, whereas the responses of L1 sentence regions in the ATL and pSTG to L2 sentences reflect L1 syntactic or combinatorial semantic processes being recruited for L2 sign comprehension. We also found that ASL sentence comprehension proficiency in late L2 learners may be correlated with increased activation in L1 sentence regions and decreased activation in visual-spatial regions in response to ASL sentences, but the nature and origin of these proficiency-related functional differences require further study.
Author Contributions
LJ, YY, SM, MF, PH, and CR contributed to the design of the study. LJ, YY, SM, MF, LB, and CR were critical in implementing the study and analyzing the data. All authors contributed to data interpretation and manuscript preparation.
Funding
This work was funded by Arizona State University (ASU) and a Barrett Honors College at ASU Honors Thesis Grant (SM).
Conflict of Interest Statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Acknowledgments
The authors would like to thank the American Sign Language program in Arizona State University's Department of Speech and Hearing Science, particularly ASL program Director Paul Quinn. Their support regarding recruitment, stimuli development, and the overall project was invaluable. We also would like to thank Tyler Berdy for lending us his expertise as a native ASL signer. Finally, we would like to express our gratitude to Prof. Tamiko Azuma (Arizona State University), whose Advanced Research Experience Seminar provided LJ with feedback and guidance on some portions of the present study. SM's honors thesis reported on preliminary findings from this study.
Supplementary Material
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyg.2018.01626/full#supplementary-material
Footnotes
1. It is noteworthy though that sign language abilities and visuospatial abilities have been found to dissociate in some native deaf signers following a stroke (Hickok et al., 1998b), but it is unclear if this dissociation would hold in late L2 learners of ASL.
2. The ATL also is regularly implicated in multi-modal semantic processing (Hodges et al., 1992, 1999; Mummery et al., 2000; Simmons and Martin, 2009), although the activation foci are often anterior to those sensitive to sentence structure (Rogalsky, 2016).
References
Abutalebi, J. (2008). Neural aspects of second language representation and language control. Acta Psychol. 128, 466–478. doi: 10.1016/j.actpsy.2008.03.014
Abutalebi, J., Cappa, S. F., and Perani, D. (2001). The bilingual brain as revealed by functional neuroimaging. Biling. Lang. Cogn. 4, 179–190. doi: 10.1017/S136672890100027X
Adolphs, R., Damasio, H., and Tranel, D. (2002). Neural systems for recognition of emotional prosody: a 3-D lesion study. Emotion 2, 23–51. doi: 10.1037/1528-3542.2.1.23
Baldo, J. V., Shimamura, A. P., Delis, D. C., Kramer, J., and Kaplan, E. (2001). Verbal and design fluency in patients with frontal lobe lesions. J. Int. Neuropsychol. Soc. 7, 586–596. doi: 10.1017/S1355617701755063
Bavelier, D., Corina, D., Jezzard, P., Clark, V., Karni, A., Lalwani, A., et al. (1998). Hemispheric specialization for English and ASL: left invariance-right variability. Neuroreport 9, 1537–1542. doi: 10.1097/00001756-199805110-00054
Bemis, D. K., and Pylkkänen, L. (2013). Basic linguistic composition recruits the left anterior temporal lobe and left angular gyrus during both listening and reading. Cereb. Cortex 23, 1859–1873. doi: 10.1093/cercor/bhs170
Blank, I., Balewski, Z., Mahowald, K., and Fedorenko, E. (2016). Syntactic processing is distributed across the language system. Neuroimage 127, 307–323. doi: 10.1016/j.neuroimage.2015.11.069
Buchsbaum, B. R., Baldo, J., Okada, K., Berman, K. F., Dronkers, N., D'Esposito, M., et al. (2011). Conduction aphasia, sensory-motor integration, and phonological short-term memory–an aggregate analysis of lesion and fMRI data. Brain Lang. 119, 119–128. doi: 10.1016/j.bandl.2010.12.001
Caffarra, S., Molinaro, N., Davidson, D., and Carreiras, M. (2015). Second language syntactic processing revealed through event-related potentials: an empirical review. Neurosci. Biobehav. Rev. 51, 31–47. doi: 10.1016/j.neubiorev.2015.01.010
Chai, X. J., Berken, J. A., Barbeau, E. B., Soles, J., Chen, J. K., and Klein, D. (2016). Intrinsic functional connectivity in the adult brain and success in second-language learning. J. Neurosci. 36, 755–761. doi: 10.1523/JNEUROSCI.2234-15.2016
Chan, A. M., Baker, J. M., Eskandar, E., Schomer, D., Ulbert, I., Marinkovic, K., et al. (2011). First-pass selectivity for semantic categories in human anteroventral temporal lobe. J. Neurosci. 31, 18119–18129. doi: 10.1523/JNEUROSCI.3122-11.2011
Chen, L., Shu, H., Liu, J., Zhao, J., and Li, P. (2007). ERP signatures of subject-verb agreement in L2 learning. Biling. Lang. Cogn. 10, 161–174. doi: 10.1017/S136672890700291X
Corina, D. P., and McBurney, S. L. (2001). The neural representation of language in users of American sign language. J. Commun. Disord. 34, 455–471. doi: 10.1016/S0021-9924(01)00063-6
Damasio, H., Tranel, D., Grabowski, T., Adolphs, R., and Damasio, A. (2004). Neural systems behind word and concept retrieval. Cognition 92, 179–229. doi: 10.1016/j.cognition.2002.07.001
Dapretto, M., and Bookheimer, S. Y. (1999). Form and content: dissociating syntax and semantics in sentence comprehension. Neuron 24, 292–293. doi: 10.1016/S0896-6273(00)80855-7
Dehaene, S., Dupoux, E., Mehler, J., Cohen, L., Paulesu, E., Perani, D., et al. (1997). Anatomical variability in the cortical representation of first and second language. Neuroreport 8, 3809–3815. doi: 10.1097/00001756-199712010-00030
Dronkers, N. F., Wilkins, D. P., Van Valin, R. D. Jr., Redfern, B. B., and Jaeger, J. J. (2004). Lesion analysis of the brain areas involved in language comprehension. Cognition 92, 145–177. doi: 10.1016/j.cognition.2003.11.002
Emmorey, K. (2002). Language, Cognition, and the Brain: Insights From Sign Language Research. Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Emmorey, K., Damasio, H., McCullough, S., Grabowski, T., Ponto, L. L., Hichwa, R. D., et al. (2002). Neural systems underlying spatial language in American Sign Language. Neuroimage 17, 812–824. doi: 10.1006/nimg.2002.1187
Emmorey, K., Giezen, M. R., and Gollan, T. H. (2016). Psycholinguistic, cognitive, and neural implications of bimodal bilingualism. Biling. Lang. Cogn. 19, 223–242. doi: 10.1017/S1366728915000085
Emmorey, K., Grabowski, T., McCullough, S., Damasio, H., Ponto, L. L., Hichwa, R. D., et al. (2003). Neural systems underlying lexical retrieval for sign language. Neuropsychologia 41, 85–95. doi: 10.1016/S0028-3932(02)00089-1
Emmorey, K., Grabowski, T., McCullough, S., Ponto, L. L., Hichwa, R. D., and Damasio, H. (2005). The neural correlates of spatial language in English and American Sign Language: a PET study with hearing bilinguals. Neuroimage 24, 832–840. doi: 10.1016/j.neuroimage.2004.10.008
Emmorey, K., Luk, G., Pyers, J. E., and Bialystok, E. (2008). The source of enhanced cognitive control in bilinguals: evidence from bimodal bilinguals. Psychol. Sci. 19, 1201–1206. doi: 10.1111/j.1467-9280.2008.02224.x
Emmorey, K., and McCullough, S. (2009). The bimodal bilingual brain: effects of sign language experience. Brain Lang. 109, 124–132. doi: 10.1016/j.bandl.2008.03.005
Emmorey, K., McCullough, S., Mehta, S., and Grabowski, T. J. (2014). How sensory-motor systems impact the neural organization for language: direct contrasts between spoken and signed language. Front. Psychol. 5:484. doi: 10.3389/fpsyg.2014.00484
Emmorey, K., Mehta, S., and Grabowski, T. J. (2007). The neural correlates of sign versus word production. Neuroimage 36, 202–208. doi: 10.1016/j.neuroimage.2007.02.040
Fedorenko, E., Behr, M. K., and Kanwisher, N. (2011). Functional specificity for high-level lingustic processing in the human brain. Proc. Natl. Acad. Sci. U.S.A. 108, 16428–16433. doi: 10.1073/pnas.1112937108
Fedorenko, E., Duncan, J., and Kanwisher, N. (2012a). Language-selective and domain-general regions lie side by side within Broca's area. Curr. Biol. 22, 2059–2062. doi: 10.1016/j.cub.2012.09.011
Fedorenko, E., Hsieh, P. J., Nieto-Castañón, A., Whitfield-Gabrieli, S., and Kanwisher, N. (2010). New method for fMRI investigations of language: defining ROIs functionally in individual subjects. J. Neurophysiol. 104, 1177–1194. doi: 10.1152/jn.00032.2010
Fedorenko, E., and Kanwisher, N. (2011). Some regions within Broca's area do respond more strongly to sentences than to linguistically degraded stimuli: a comment on Rogalsky and Hickok. J. Cogn. Neurosci. 23, 2632–2635. doi: 10.1162/jocn_a_00043
Fedorenko, E., Nieto-Castañon, A., and Kanwisher, N. (2012b). Lexical and syntactic representations in the brain: an fMRI investigation with multi-voxel pattern analyses. Neuropsychologia 50, 499–513. doi: 10.1016/j.neuropsychologia.2011.09.014
Fedorenko, E., Nieto-Castañón, A., and Kanwisher, N. (2012c). Syntactic processing in the human brain: what we know, what we don't know and a suggestion for how to proceed. Brain Lang. 120, 187–207. doi: 10.1016/j.bandl.2011.01.001
Ferstl, E. C., Rinck, M., and von Cramon, D. Y. (2005). Emotional and temporal aspects of situation model processing during text comprehension: an event-related fMRI study. J. Cogn. Neurosci. 17, 724–739. doi: 10.1162/0898929053747658
Friederici, A. D. (2006). What's in control of language? Nat. Neurosci. 9, 991–992. doi: 10.1038/nn0806-991
Friederici, A. D. (2009). Pathways to language: fiber tracts in the human brain. Trends Cogn. Sci. 13, 175–181. doi: 10.1016/j.tics.2009.01.001
Friederici, A. D., Bahlmann, J., Heim, S., Schubotz, R. I., and Anwander, A. (2006). The brain differentiates human and non-human grammars: functional localization and structural connectivity. Proc. Natl. Acad. Sci. U.S.A. 103, 2458–2463. doi: 10.1073/pnas.0509389103
Friederici, A. D., Rüschemeyer, S. A., Hahne, A., and Fiebach, C. J. (2003). The role of left inferior frontal and superior temporal cortex in sentence comprehension: localizing syntactic and semantic processes. Cereb. Cortex 13, 170–177. doi: 10.1093/cercor/13.2.170
Friston, K. J., Jezzard, P., and Turner, R. (1994). Analysis of functional MRI time-series. Hum. Brain Mapp. 1, 153–171. doi: 10.1002/hbm.460010207
Gan, G., Büchel, C., and Isel, F. (2013). Effect of language task demands on the neural response during lexical access: a functional magnetic resonance imaging study. Brain Behav. 3, 402–416. doi: 10.1002/brb3.133
Gazzaniga, M. S. (2000). Cerebral specialization and interhemispheric communication: does the corpus callosum enable the human condition? Brain 123, 1293–1326. doi: 10.1093/brain/123.7.1293
Ge, J., Peng, G., Lyu, B., Wang, Y., Zhuo, Y., and Gao, J. H. (2015). Cross-language differences in the brain network subserving intelligible speech. Proc. Natl. Acad. Sci. U.S.A. 112, 2972–2977. doi: 10.1073/pnas.1416000112
Glosser, G., and Goodglass, H. (1990). Disorders in executive control functions among aphasic and other brain-injured patients. J. Clin. Exp. Neuropsychol. 12, 485–501. doi: 10.1080/01688639008400995
Goghari, V. M., and MacDonald, A. W. III (2009). The neural basis of cognitive control: response selection and inhibition. Brain Cogn. 71, 72–83. doi: 10.1016/j.bandc.2009.04.004
Graves, W. W., Grabowski, T. J., Mehta, S., and Gupta, P. (2008). The left posterior superior temporal gyrus participates specifically in accessing lexical phonology. J. Cogn. Neurosci. 20, 1698–1710. doi: 10.1162/jocn.2008.20113
Griffiths, J. D., Marslen-Wilson, W. D., Stamatakis, E. A., and Tyler, L. K. (2013). Functional organization of the neural language system: dorsal and ventral pathways are critical for syntax. Cereb. Cortex 23, 139–147. doi: 10.1093/cercor/bhr386
Grodzinsky, Y. (2000). The neurology of syntax: language use without Broca's area. Behav. Brain Sci. 23, 1–21. doi: 10.1017/S0140525X00002399
Grodzinsky, Y., and Santi, A. (2008). The battle for Broca's region. Trends Cogn. Sci. 12, 474–480. doi: 10.1016/j.tics.2008.09.001
Hernandez, A. E., Hofmann, J., and Kotz, S. A. (2007). Age of acquisition modulates neural activity for both regular and irregular syntactic functions. Neuroimage 36, 912–923. doi: 10.1016/j.neuroimage.2007.02.055
Herrmann, B., Obleser, J., Kalberlah, C., Haynes, J. D., and Friederici, A. D. (2012). Dissociable neural imprints of perception and grammar in auditory functional imaging. Hum. Brain Mapp. 33, 584–595. doi: 10.1002/hbm.21235
Hickok, G., Bellugi, U., and Klima, E. S. (1998a). The neural organization of language: evidence from sign language aphasia. Trends Cogn. Sci. 2, 129–136. doi: 10.1016/S1364-6613(98)01154-1
Hickok, G., Kirk, K., and Bellugi, U. (1998b). Hemispheric organization of local-and global-level visuospatial processes in deaf signers and its relation to sign language aphasia. Brain Lang. 65, 276–286. doi: 10.1006/brln.1998.1990
Hickok, G., and Poeppel, D. (2000). Towards a functional neuroanatomy of speech perception. Trends Cogn. Sci. 4, 131–138. doi: 10.1016/S1364-6613(00)01463-7
Hickok, G., and Poeppel, D. (2004). Dorsal and ventral streams: a framework for understanding aspects of the functional anatomy of language. Cognition 92, 67–99. doi: 10.1016/j.cognition.2003.10.011
Hickok, G., and Poeppel, D. (2007). The cortical organization of speech processing. Nat. Rev. Neurosci. 8, 393–402. doi: 10.1038/nrn2113
Hodges, J. R., Patterson, K., Oxbury, S., and Funnell, E. (1992). Semantic dementia. Progressive fluent aphasia with temporal lobe atrophy. Brain J. Neurol. 115, 1783–1806. doi: 10.1093/brain/115.6.1783
Hodges, J. R., Patterson, K., Ward, R., Garrard, P., Bak, T., Perry, R., et al. (1999). The differentiation of semantic dementia and frontal lobe dementia (temporal and frontal variants of frontotemporal dementia) from early Alzheimer's disease: a comparative neuropsychological study. Neuropsychology 13, 31–40.
Humphries, C., Binder, J. R., Medler, D. A., and Liebenthal, E. (2006). Syntactic and semantic modulation of neural activity during auditory sentence comprehension. J. Cogn. Neurosci. 18, 665–679. doi: 10.1162/jocn.2006.18.4.665
Humphries, C., Love, T., Swinney, D., and Hickok, G. (2005). Response of anterior temporal cortex to syntactic and prosodic manipulations during sentence processing. Hum. Brain Mapp. 26, 128–138. doi: 10.1002/hbm.20148
Humphries, C., Willard, K., Buchsbaum, B., and Hickok, G. (2001). Role of anterior temporal cortex in auditory sentence comprehension: an fMRI study. Neuroreport 12, 1749–1752. doi: 10.1097/00001756-200106130-00046
Jeong, H., Sugiura, M., Sassa, Y., Haji, T., Usui, N., Taira, M., et al. (2007). Effect of syntactic similarity on cortical activation during second language processing: a comparison of English and Japanese among native Korean trilinguals. Hum. Brain Mapp. 28, 194–204. doi: 10.1002/hbm.20269
Johnstone, T., van Reekum, C. M., Oakes, T. R., and Davidson, R. J. (2006). The voice of emotion: an fMRI study of neural responses to angry and happy vocal expressions. Soc. Cogn. Affect. Neurosci. 1, 242–249. doi: 10.1093/scan/nsl027
Just, M. A., Carpenter, P. A., Keller, T. A., Eddy, W. F., and Thulborn, K. R. (1996). Brain activation modulated by sentence comprehension. Science 274, 114–116. doi: 10.1126/science.274.5284.114
Just, M. A., and Varma, S. (2002). A hybrid architecture for working memory: reply to MacDonald and Christiansen. Psychol. Rev. 109, 55–65 doi: 10.1037/0033-295X.109.1.55
Kaan, E., and Swaab, T. Y. (2002). The brain circuitry of syntactic comprehension. Trends Cogn. Sci. 6, 350–356. doi: 10.1016/S1364-6613(02)01947-2
Kotz, S. A. (2009). A critical review of ERP and fMRI evidence on L2 syntactic processing. Brain Lang. 109, 68–74. doi: 10.1016/j.bandl.2008.06.002
Kovelman, I., Shalinsky, M. H., Berens, M. S., and Petitto, L. A. (2014). Words in the bilingual brain: an fNIRS brain imaging investigation of lexical processing in sign-speech bimodal bilinguals. Front. Hum. Neurosci. 8:606. doi: 10.3389/fnhum.2014.00606
Kroll, J. F., and Tokowicz, N. (2005). "Models of bilingual representation and processing," in Handbook of Bilingualism: Psycholinguistic Approaches, eds J. F. Kroll and A. M. B. de Groot (New York, NY: Oxford University Press), 531–548.
Kuhl, P. K., Stevenson, J., Corrigan, N. M., van den Bosch, J. J. F., Can, D. D., and Richards, T. (2016). Neuroimaging of the bilingual brain: Structural brain correlates of listening and speaking in a second language. Brain Lang. 162, 1–9. doi: 10.1016/j.bandl.2016.07.004
Leonard, M. K., Brown, T. T., Travis, K. E., Gharapetian, L., Hagler, D. J., Dale, A. M., et al. (2010). Spatiotemporal dynamics of bilingual word processing. Neuroimage 49, 3286–3294. doi: 10.1016/j.neuroimage.2009.12.009
Leonard, M. K., Ferjan Ramirez, N., Torres, C., Hatrak, M., Mayberry, R. I., and Halgren, E. (2013). Neural stages of spoken, written, and signed word processing in beginning second language learners. Front. Hum. Neurosci. 7:322. doi: 10.3389/fnhum.2013.00322
Leonard, M. K., Torres, C., Travis, K. E., Brown, T. T., Hagler, D. J. Jr., Dale, A. M., et al. (2011). Language proficiency modulates the recruitment of non-classical language areas in bilinguals. PLoS ONE 6:e18240. doi: 10.1371/journal.pone.0018240
Mack, J. E., Meltzer-Asscher, A., Barbieri, E., and Thompson, C. K. (2013). Neural correlates of processing passive sentences. Brain Sci. 3, 1198–1214. doi: 10.3390/brainsci3031198
MacSweeney, M., Campbell, R., Woll, B., Brammer, M. J., Giampietro, V., David, A. S., et al. (2006). Lexical and sentential processing in British Sign Language. Hum. Brain Mapp. 27, 63–76. doi: 10.1002/hbm.20167
MacSweeney, M., Capek, C. M., Campbell, R., and Woll, B. (2008). The signing brain: the neurobiology of sign language. Trends Cogn. Sci. 12, 432–440. doi: 10.1016/j.tics.2008.07.010
MacSweeney, M., Woll, B., Campbell, R., McGuire, P. K., David, A. S., Williams, S. C., et al. (2002). Neural systems underlying British Sign Language and audio-visual English processing in native users. Brain 125, 1583–1593. doi: 10.1093/brain/awf153
Magnusdottir, S., Fillmore, P., den Ouden, D. B., Hjaltason, H., Rorden, C., Kjartansson, O., et al. (2013). Damage to left anterior temporal cortex predicts impairment of complex syntactic processing: a lesion-symptom mapping study. Hum. Brain Mapp. 34, 2715–2723. doi: 10.1002/hbm.22096
Makuuchi, M., Bahlmann, J., and Friederici, A. D. (2012). An approach to separating the levels of hierarchical structure building in language and mathematics. Phil. Trans. R. Soc. B Biol. Sci. 367, 2033–2045. doi: 10.1098/rstb.2012.0095
Malaia, E., Ranaweera, R., Wilbur, R. B., and Talavage, T. M. (2012). Event segmentation in a visual language: neural bases of processing American Sign Language predicates. Neuroimage 59, 4094–4101. doi: 10.1016/j.neuroimage.2011.10.034
Malaia, E., Talavage, T. M., and Wilbur, R. B. (2014). Functional connectivity in task-negative network of the Deaf: effects of sign language experience. PeerJ 2:e446. doi: 10.7717/peerj.446
Mayberry, R. I., Chen, J. K., Witcher, P., and Klein, D. (2011). Age of acquisition effects on the functional organization of language in the adult brain. Brain Lang. 119, 16–29. doi: 10.1016/j.bandl.2011.05.007
Meschyan, G., and Hernandez, A. E. (2006). Impact of language proficiency and orthographic transparency on bilingual word reading: an fMRI investigation. Neuroimage 29, 1135–1140. doi: 10.1016/j.neuroimage.2005.08.055
Mosidze, V. M., Mkheidze, R. A., and Makashvili, M. A. (1994). Disorders of visuo-spatial attention in patients with unilateral brain damage. Behav. Brain Res. 65, 121–122. doi: 10.1016/0166-4328(94)90081-7
Mummery, C. J., Patterson, K., Price, C. J., Ashburner, J., Frackowiak, R. S., and Hodges, J. R. (2000). A voxel-based morphometry study of semantic dementia: relationship between temporal lobe atrophy and semantic memory. Ann. Neurol. 47, 36–45. doi: 10.1002/1531-8249(200001)47:1<36::AID-ANA8>3.0.CO;2-L
Neville, H. J., Coffey, S. A., Lawson, D. S., Fischer, A., Emmorey, K., and Bellugi, U. (1997). Neural systems mediating American sign language: effects of sensory experience and age of acquisition. Brain Lang. 57, 285–308. doi: 10.1006/brln.1997.1739
Newman, A. J., Bavelier, D., Corina, D., Jezzard, P., and Neville, H. J. (2002). A critical period for right hemisphere recruitment in American Sign Language processing. Nat. Neurosci. 5, 76–80. doi: 10.1038/nn775
Newman, A. J., Supalla, T., Hauser, P., Newport, E. L., and Bavelier, D. (2010a). Dissociating neural subsystems for grammar by contrasting word order and inflection. Proc. Natl. Acad. Sci. U.S.A. 107, 7539–7544. doi: 10.1073/pnas.1003174107
Newman, A. J., Supalla, T., Hauser, P. C., Newport, E. L., and Bavelier, D. (2010b). Prosodic and narrative processing in American Sign Language: an fMRI study. Neuroimage 52, 669–676. doi: 10.1016/j.neuroimage.2010.03.055
Nichols, T. E., and Holmes, A. P. (2002). Nonparametric permutation tests for functional neuroimaging: a primer with examples. Hum. Brain Mapp. 15, 1–25. doi: 10.1002/hbm.1058
Novick, J. M., Trueswell, J. C., and Thompson-Schill, S. L. (2005). Cognitive control and parsing: reexamining the role of Broca's area in sentence comprehension. Cogn. Affect. Behav. Neurosci. 5, 263–281. doi: 10.3758/CABN.5.3.263
Ojima, S., Nakata, H., and Kakigi, R. (2005). An ERP study of second language learning after childhood: effects of proficiency. J. Cogn. Neurosci. 17, 1212–1228. doi: 10.1162/0898929055002436
Opitz, B., and Friederici, A. D. (2004). Brain correlates of language learning: the neuronal dissociation of rule-based versus similarity-based learning. J. Neurosci. 24, 8436–8440. doi: 10.1523/JNEUROSCI.2220-04.2004
Pa, J., Wilson, S. M., Pickell, H., Bellugi, U., and Hickok, G. (2008). Neural organization of linguistic short-term memory is sensory modality–dependent: evidence from signed and spoken language. J. Cogn. Neurosci. 20, 2198–2210. doi: 10.1162/jocn.2008.20154
Pallier, C., Devauchelle, A., and Dehaene, S. (2011). Cortical representations of the constituent structure of sentences. Proc. Natl. Acad. Sci. U.S.A. 108, 2522–2527. doi: 10.1073/pnas.1018711108
Paulesu, E., McCrory, E., Fazio, F., Menoncello, L., Brunswick, N., Cappa, S. F., et al. (2000). A cultural effect on brain function. Nat. Neurosci. 3, 91–96. doi: 10.1038/71163
Perani, D., and Abutalebi, J. (2005). The neural basis of first and second language processing. Curr. Opin. Neurobiol. 15, 202–206. doi: 10.1016/j.conb.2005.03.007
Perani, D., Paulesu, E., Galles, N. S., Dupoux, E., Dehaene, S., Bettinardi, V., et al. (1998). The bilingual brain: proficiency and age of acquisition of the second language. Brain 121, 1841–1852. doi: 10.1093/brain/121.10.1841
Petitto, L. A., Zatorre, R. J., Gauna, K., Nikelski, E. J., Dostie, D., and Evans, A. C. (2000). Speech-like cerebral activity in profoundly deaf people processing signed languages: implications for the neural basis of human language. Proc. Natl. Acad. Sci. U.S.A. 97, 13961–13966. doi: 10.1073/pnas.97.25.13961
Pettigrew, C., and Hillis, A. E. (2014). Role for memory capacity in sentence comprehension: evidence from acute stroke. Aphasiology 28, 1258–1280. doi: 10.1080/02687038.2014.919436
Phillips, M. L., Young, A. W., Scott, S. K., Calder, A. J., Andrew, C., Giampietro, V., et al. (1998). Neural responses to facial and vocal expressions of fear and disgust. Proc. R. Soc. B Biol. Sci. 265, 1809–1817.
Pillay, S. B., Stengel, B. C., Humphries, C., Book, D. S., and Binder, J. R. (2014). Cerebral localization of impaired phonological retrieval during rhyme judgment. Ann. Neurol. 76, 738–746. doi: 10.1002/ana.24266
Prat, C. S., and Just, M. A. (2010). Exploring the neural dynamics underpinning individual differences in sentence comprehension. Cereb. Cortex 21, 1747–1760. doi: 10.1093/cercor/bhq241
Prat, C. S., Mason, R. A., and Just, M. A. (2011). Individual differences in the neural basis of causal inferencing. Brain Lang. 116, 1–13. doi: 10.1016/j.bandl.2010.08.004
Reiterer, S. M., Hu, X., Erb, M., Rota, G., Nardo, D., Grodd, W., et al. (2011). Individual differences in audio-vocal speech imitation aptitude in late bilinguals: functional neuro-imaging and brain morphology. Front. Psychol. 2:271. doi: 10.3389/fpsyg.2011.00271
Rogalsky, C. (2016). “The role of the anterior temporal lobe in sentence processing,” in Neurobiology of Language, eds G. Hickok and S. Small (London: Elsevier), 587–592.
Rogalsky, C., and Hickok, G. (2009). Selective attention to semantic and syntactic features modulates sentence processing networks in anterior temporal cortex. Cereb. Cortex 19, 786–796. doi: 10.1093/cercor/bhn126
Rogalsky, C., Matchin, W., and Hickok, G. (2008). Broca's area, sentence comprehension, and working memory: an fMRI study. Front. Hum. Neurosci. 2:14. doi: 10.3389/neuro.09.014.2008
Rogalsky, C., Poppa, T., Chen, K. H., Anderson, S. W., Damasio, H., Love, T., et al. (2015). Speech repetition as a window on the neurobiology of auditory–motor integration for speech: a voxel-based lesion symptom mapping study. Neuropsychologia 71, 18–27. doi: 10.1016/j.neuropsychologia.2015.03.012
Rogalsky, C., Raphel, K., Tomkovicz, V., O'Grady, L., Damasio, H., Bellugi, U., et al. (2013). Neural basis of action understanding: evidence from sign language aphasia. Aphasiology 27, 1147–1158. doi: 10.1080/02687038.2013.812779
Rogalsky, C., Rong, F., Saberi, K., and Hickok, G. (2011). Functional anatomy of language and music perception: temporal and structural factors investigated using functional magnetic resonance imaging. J. Neurosci. 31, 3843–3852. doi: 10.1523/JNEUROSCI.4515-10.2011
Rossi, S., Gugler, M. F., Hahne, A., and Friederici, A. D. (2006). The impact of proficiency on syntactic and second-language processing of German and Italian: evidence from event-related potentials. J. Cogn. Neurosci. 18, 2030–2048. doi: 10.1162/jocn.2006.18.12.2030
Rüschemeyer, S. A., Fiebach, C. J., Kempe, V., and Friederici, A. D. (2005). Processing lexical semantic and syntactic information in first and second language: fMRI evidence from German and Russian. Hum. Brain Mapp. 25, 266–286. doi: 10.1002/hbm.20098
Rüschemeyer, S. A., Zysset, S., and Friederici, A. D. (2006). Native and non-native reading of sentences: an fMRI experiment. Neuroimage 31, 354–365. doi: 10.1016/j.neuroimage.2005.11.047
Sakai, K. L., Tatsuno, Y., Suzuki, K., Kimura, H., and Ichida, Y. (2005). Sign and speech: amodal commonality in left hemisphere dominance for comprehension of sentences. Brain 128, 1407–1417. doi: 10.1093/brain/awh465
Sandler, W., and Lillo-Martin, D. (2006). Sign Language and Linguistic Universals. Cambridge, UK: Cambridge University Press. doi: 10.1017/CBO9781139163910
Sebastian, R., Laird, A. R., and Kiran, S. (2011). Meta-analysis of the neural representation of first language and second language. Appl. Psycholing. 32, 799–819. doi: 10.1017/S0142716411000075
Simmons, W. K., and Martin, A. (2009). The anterior temporal lobes and the functional architecture of semantic memory. J. Int. Neuropsychol. Soc. 15, 645–649. doi: 10.1017/S1355617709990348
Tanner, D., McLaughlin, J., Herschensohn, J., and Osterhout, L. (2013). Individual differences reveal stages of L2 grammatical acquisition: ERP evidence. Biling. Lang. Cogn. 16, 367–382. doi: 10.1017/S1366728912000302
Thothathiri, M., Kimberg, D. Y., and Schwartz, M. F. (2012). The neural basis of reversible sentence comprehension: evidence from voxel-based lesion symptom mapping in aphasia. J. Cogn. Neurosci. 24, 212–222. doi: 10.1162/jocn_a_00118
Vandenberghe, R., Nobre, A. C., and Price, C. J. (2002). The response of left temporal cortex to sentences. J. Cogn. Neurosci. 14, 550–560. doi: 10.1162/08989290260045800
Visser, M., and Lambon Ralph, M. A. (2011). Differential contributions of bilateral ventral anterior temporal lobe and left anterior superior temporal gyrus to semantic processes. J. Cogn. Neurosci. 23, 3121–3131. doi: 10.1162/jocn_a_00007
Wartenburger, I., Heekeren, H. R., Abutalebi, J., Cappa, S. F., Villringer, A., and Perani, D. (2003). Early setting of grammatical processing in the bilingual brain. Neuron 37, 159–170. doi: 10.1016/S0896-6273(02)01150-9
Weber-Fox, C. M., and Neville, H. M. (1996). Maturational constraints on functional specialization for language processing: ERP and behavioral evidence in bilingual speakers. J. Cogn. Neurosci. 8, 231–256. doi: 10.1162/jocn.1996.8.3.231
Wei, M., Joshi, A. A., Zhang, M., Mei, L., Manis, F. R., He, Q., et al. (2015). How age of acquisition influences brain architecture in bilinguals. J. Neuroling. 36, 35–55. doi: 10.1016/j.jneuroling.2015.05.001
Williams, J. T., Darcy, I., and Newman, S. D. (2016). Modality-specific processing precedes amodal linguistic processing during L2 sign language acquisition: a longitudinal study. Cortex 75, 56–67. doi: 10.1016/j.cortex.2015.11.015
Wilson, S. M., DeMarco, A. T., Henry, M. L., Gesierich, B., Babiak, M., Mandelli, M. L., et al. (2014). What role does the anterior temporal lobe play in sentence-level processing? neural correlates of syntactic processing in semantic variant primary progressive aphasia. J. Cogn. Neurosci. 26, 970–985. doi: 10.1162/jocn_a_00550
Wong, C., and Gallate, J. (2012). The function of the anterior temporal lobe: a review of the empirical evidence. Brain Res. 1449, 94–116. doi: 10.1016/j.brainres.2012.02.017
Yarkoni, T., and Braver, T. S. (2010). "Cognitive neuroscience approaches to individual differences in working memory and executive control: conceptual and methodological issues," in Handbook of Individual Differences in Cognition, eds A. Gruszka, G. Matthews, and B. Szymura (New York, NY: Springer Verlag).
Keywords: sentence comprehension, sign language, fMRI, bilingual, language
Citation: Johnson L, Fitzhugh MC, Yi Y, Mickelsen S, Baxter LC, Howard P and Rogalsky C (2018) Functional Neuroanatomy of Second Language Sentence Comprehension: An fMRI Study of Late Learners of American Sign Language. Front. Psychol. 9:1626. doi: 10.3389/fpsyg.2018.01626
Received: 13 March 2018; Accepted: 14 August 2018;
Published: 06 September 2018.
Edited by:
Narly Golestani, Université de Genève, Switzerland
Reviewed by:
Evie Malaia, Freiburger Institut für Höhere Studien (FRIAS), Albert-Ludwigs-Universität Freiburg, Germany
Jed A. Meltzer, Baycrest Hospital, Canada
Kaja Jasinska, Haskins Laboratories, United States
Copyright © 2018 Johnson, Fitzhugh, Yi, Mickelsen, Baxter, Howard and Rogalsky. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Corianne Rogalsky, corianne.rogalsky@asu.edu