PERSPECTIVE article

Front. Lang. Sci., 01 November 2023
Sec. Neurobiology of Language
This article is part of the Research Topic "Syntax, the brain, and linguistic theory: a critical reassessment"

Three conceptual clarifications about syntax and the brain

Cas W. Coopmans1,2* and Emiliano Zaccarella3

  • 1Language and Computation in Neural Systems Group, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
  • 2Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
  • 3Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany

Linguistic theories offer empirical hypotheses about the architecture of human language, which provide the basis for neurobiological investigations of language use. Unfortunately, progress in linking the two fields of inquiry is hampered because core concepts and ideas from linguistics are often misunderstood, making them controversial and seemingly irrelevant to the neurobiology of language. Here we identify three such proposals: the distinction between competence and performance, the autonomy of syntax, and the abstract nature of syntactic representations. In our view, confusion about these concepts stems from the fact that they are interpreted at a level of analysis different from the level at which they were originally described. We clarify the intended interpretation of these concepts and discuss how they might be contextualized in the cognitive neuroscience of language. With these clarifications in place, the discussion about the integration of linguistics and the neurobiology of language can move toward a fruitful exploration of linking hypotheses within a multi-level theory of syntax in the brain.

1. Introduction

Despite obvious differences in the types of research questions, methodologies, and data, both linguistics and the neurobiology of language are concerned with the same object of inquiry: the nature of the human language faculty. Ideally, they should constrain each other and come to a mutual understanding of the fundamental properties of human language. One reason why true mutual understanding arises so rarely might be that certain core proposals put forward in linguistics are frequently misunderstood, and therefore prematurely rejected, in the neurobiology of language. In this paper, we discuss three examples, concerning the longstanding distinction between competence and performance, the computational autonomy of syntax, and the abstract nature of syntactic representations. We propose that mutual understanding between linguistics and the neurobiology of language requires evaluating linguistic concepts and proposals at the proper level of analysis (see Poeppel et al., 2008 for a similar perspective, addressing the problem of speech perception). As such, we suggest that an integrated, multi-level theory of syntax can help ground the neurobiology of language in linguistic theorizing.

2. Levels of analysis

Marr (1982) famously argued that a complete description of any information-processing system involves three levels of analysis: the computational, the algorithmic, and the implementational level. The computational level is concerned with the nature of the problem being solved: what is computed, what the goals of the computations are, and what the constraints of the proposed solution are. The algorithmic level is a description of the actual processes required to solve the problem, which are defined in terms of the input and output representations and the algorithms for mapping input to output. Last, the implementational level specifies the hardware in which these processes are realized physically, for instance, in neural tissue. When proposing this tripartite framework, Marr (1982, p. 25) remarked that the “three levels are coupled, but only loosely”. By this he meant that while there must be some connection between the different levels of analysis, it is not expected that the properties of any of the three levels map onto the other levels in a transparent manner.
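
Marr's point can be made concrete with a toy example from outside of language (a sketch for illustration only, not drawn from Marr or from this article). At the computational level, the two functions below are indistinguishable: both solve the problem "return the numbers in ascending order". At the algorithmic level, they are entirely different, and the computational-level description does not decide between them, let alone between their possible physical implementations:

```python
# Toy illustration of Marr's levels. Computational level: both functions
# compute the same input-output mapping (sorting). Algorithmic level: they
# do so by entirely different procedures.

def insertion_sort(xs):
    """Grow a sorted list by inserting each element at the right position."""
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

def merge_sort(xs):
    """Recursively split the list in half and merge the sorted halves."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

# Identical at the computational level, different at the algorithmic level.
assert insertion_sort([3, 1, 2]) == merge_sort([3, 1, 2]) == [1, 2, 3]
```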

Theoretical and neurobiological models of language both aim to explain how language is instantiated in the mind/brain, but they do so at different levels of analysis. Linguistic theories are formulated at the computational level, psycholinguistic theories of language processing are algorithmic-level theories of how linguistic knowledge is put to use, and neurobiological theories of language processing are defined at the implementational and (to some extent) algorithmic levels of analysis. Core aspects of the three proposals mentioned above—the notion of competence, autonomous syntax, and syntactic representations—are part of the computational-level theory of language. Following Marr's remark, this means that they have no straightforward implications for the neurobiological implementation of the linguistic system (Marantz, 2005; Grimaldi, 2012; Sprouse and Lau, 2013; Embick and Poeppel, 2015; Johnson, 2017). It seems to us, however, that they are frequently understood as describing the implementational level. This assumption might be based on ontological commitments about the relationship between brain and behavior—including the localizability of cognitive functions and one-to-one mappings between cognitive functions and neural mechanisms—that are most likely incorrect and must be reconsidered in light of current neuroscientific evidence (Mehler et al., 1984; Westlin et al., 2023). A consequence of this implementational interpretation of computational-level ideas is that linguistic proposals are falsified, rejected, or dismissed as not psychologically or neurobiologically "real". In our view, this state of affairs is problematic. Here, we will therefore clarify the intended meaning of the three ideas and discuss how they might be properly interpreted in the context of the neurobiology of language. The apparently paradoxical take-away of this opinion piece is that all three proposals can be both right and wrong at the same time, depending on the level of analysis at which they are evaluated (see also Francken et al., 2022).

Before moving on, we should clarify our intentions. First, we will not try to defend these three ideas. These matters have been discussed (and continue to be discussed) in the linguistic literature at length. Instead, we will introduce each idea, explain its intended scope, and, acknowledging that it is a computational-level idea, evaluate its potential implications for the neurobiology of language. To the extent that we refer to existing neuroscience research, this is not to assess whether its empirical (implementational-level) results do or do not support the (computational-level) ideas. Rather, it is to show that such research is often used to evaluate the correctness of an implementational interpretation of these ideas. To foreshadow one example, consider the thesis that syntax is computationally autonomous (discussed further in Section 3.2). Instead of evaluating whether the empirical results of neuroscience research are consistent with this computational-level idea, it often happens that the results are evaluated in terms of whether they support the idea that syntax is neuroanatomically autonomous and modular. While certainly interesting and important, that is a different question, related but not identical to the original thesis. A second caveat is that the arguments in our discussion are implementation-neutral, which means that we do not take a stance here on what the right neurobiological units or mechanisms are to describe or explain brain functioning. The fact that our discussion of the language-neuroscience literature contains mostly fMRI studies is simply because the arguments we identify as problematic are most prevalent in that literature. Nevertheless, we believe that our claims are applicable to the cognitive neuroscience of language at large, whatever turns out to be the correct or most useful way to describe brain activity. Even if it turns out that current views on the neural foundations of cognition are completely wrong, this will not fundamentally alter our analysis.

3. Three linguistic concepts

3.1. Competence and performance

Chomsky (1965) made a distinction between competence—our knowledge of language and linguistic structures—and performance, our use of language in concrete situations. In Marr's terms, competence is a computational-level notion, describing what we can do (our intensional capacity), while performance is defined at the algorithmic level, describing what we actually do and how we do it (Hornstein, 2015). As competence and performance refer to the same cognitive system (albeit at different levels of abstraction), their theories should ultimately constrain one another. Thus, distinguishing competence from performance neither entails that a certain linguistic behavior (described by performance models) cannot inform the competence theory, nor that aspects of the competence theory do not have to be incorporated in performance models (Marantz, 2005; Neeleman and van de Koot, 2010).

Being a capacity, competence cannot be observed directly and must be reconstructed or inferred from observations of situations in which the capacity is put to use (Francken et al., 2022). In language, competence is an idealization over a whole range of linguistic behaviors, observed through different measurement techniques (e.g., conversations, acceptability judgments, behavioral tests, brain recordings). Described as such, the distinction between competence and performance can be useful for cognitive neuroscientists, for at least two related reasons. First, observations about behavior are often misleading about the organizational principles of the underlying capacity. Like any cognitive task, linguistic behavior is guided by knowledge in its own domain, but it is not completely determined by it. Language use is fundamentally an interaction between linguistic competence and other properties of human cognition, i.e., non-linguistic factors that affect when, how, and which structure-building algorithms are applied. Performance data are therefore inherently noisy; they hide underlying consistencies and regularities and contain more information than can be explained by any theory of language. Describing the principles of the underlying capacity necessarily requires abstracting away from performance factors that are not considered inherent to that capacity.

A second, related reason is that the competence-performance distinction mirrors the way cognitive neuroscientists approach brain recordings collected in experimental settings. In the context of neurolinguistic experiments, brain states recorded on individual trials correspond to individual acts of performance (e.g., the neural correlates of individual speech acts), and the entire collection of trials within an experiment is the performance data (akin to a small language corpus). Before abstraction, these brain activations will be noisy, because they contain the neural correlates of the processes that build syntactic structure, of performance factors that affect the application of these processes (e.g., attention, memory, context), and random noise (e.g., participants' movements, artifacts, scanner noise).

To get closer to (the neural basis of) the underlying capacity, some sort of abstraction or idealization is necessary. Concretely, this can be performed through averaging (in univariate analyses) or through more complicated pattern detection techniques (in multivariate analyses), both of which might be seen as quantitative instantiations of abstraction. Analogous to the linguistic notion of competence as abstraction of performance, competence in neuroscience is an abstraction of brain states (see Adger, 2022). A further level of abstraction is provided by meta-analyses, which seek functional convergence across multiple experiments to remove contingent performance effects for a particular factor of interest. This approach has been used in recent meta-analytic studies on syntactic processing and modality independence, which aim to characterize linguistic competence in neural terms (Zaccarella et al., 2017b; Walenski et al., 2019; Trettenbrein et al., 2021).
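
As a minimal sketch of this logic (an illustration only; the signal shape, noise levels, and trial counts are hypothetical), the following simulation treats each trial as a shared underlying response plus trial-specific performance factors and random noise, and shows that averaging across trials recovers the shared component:

```python
# Abstraction-by-averaging: each simulated "trial" mixes a shared underlying
# response with trial-specific performance factors and random noise.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)                             # 1 s of "recording"
underlying = np.exp(-((t - 0.4) ** 2) / 0.005)         # shared evoked response

n_trials = 100
trials = np.array([
    underlying
    + 0.5 * np.sin(2 * np.pi * rng.uniform(5, 15) * t)  # trial-specific factor
    + rng.normal(0, 0.8, t.size)                         # random noise
    for _ in range(n_trials)
])

average = trials.mean(axis=0)  # the quantitative "abstraction" over trials

# The average correlates far more strongly with the underlying response
# than any single trial does.
single_r = np.corrcoef(trials[0], underlying)[0, 1]
avg_r = np.corrcoef(average, underlying)[0, 1]
print(f"single trial r = {single_r:.2f}, trial average r = {avg_r:.2f}")
```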

3.2. Autonomy of syntax

The autonomy-of-syntax thesis holds that syntax is computationally self-contained, meaning that its primitives and combinatorics are not completely derivable from or reducible to non-syntactic factors, such as meaning or frequency of occurrence (Chomsky, 1957). The autonomy of our syntactic system underlies our ability to judge a sentence like "colorless green ideas sleep furiously" as acceptable (and distinguish it from the reverse, and unacceptable, "furiously sleep ideas green colorless"), despite it being semantically anomalous and highly infrequent. As a statement about a capacity, the autonomy thesis makes no claims about how we arrive at this judgment. When someone judges the acceptability of a given sentence, its non-syntactic properties can and do modulate the processes underlying the person's judgment—they are likely to judge the acceptability of the semantically coherent "revolutionary new ideas appear infrequently" more quickly—but this is entirely consistent with the autonomy of the system qua computational properties. Thus, while the application of syntactic computations is affected by the properties of the systems with which syntax interfaces (e.g., semantics, phonology), the computations themselves (their form) are autonomous (i.e., different from semantic and phonological computations; Adger, 2018).
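
To see how a grammar can license or reject strings without any appeal to meaning or frequency, consider a deliberately minimal sketch (a toy fragment for illustration, not a serious grammar): it accepts "colorless green ideas sleep furiously" and rejects its reversal purely on the basis of category information:

```python
# A toy grammar that consults only syntactic categories: no meaning,
# no frequency. S -> NP V (Adv), NP -> Adj* N.

LEXICON = {
    "colorless": "Adj", "green": "Adj", "ideas": "N",
    "sleep": "V", "furiously": "Adv",
}

def parse_np(cats):
    """NP -> Adj* N: consume any adjectives, then require a noun."""
    i = 0
    while i < len(cats) and cats[i] == "Adj":
        i += 1
    return i + 1 if i < len(cats) and cats[i] == "N" else None

def is_sentence(words):
    """S -> NP V (Adv): purely categorial, blind to what the words mean."""
    cats = [LEXICON.get(w) for w in words]
    if None in cats:
        return False
    after_np = parse_np(cats)
    if after_np is None or after_np >= len(cats) or cats[after_np] != "V":
        return False
    rest = cats[after_np + 1:]
    return rest in ([], ["Adv"])

print(is_sentence("colorless green ideas sleep furiously".split()))  # True
print(is_sentence("furiously sleep ideas green colorless".split()))  # False
```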

As the autonomy thesis is stated at the computational level of analysis, it makes no direct claims about the role of syntax in sentence processing. This is relevant to emphasize because even when a cognitive system is representationally and computationally modular, in actual comprehension all sources of information will have to be integrated. The fact that syntactic rules are autonomous does not mean that syntactic constructions are processed fully autonomously (Sprouse and Lau, 2013). Likewise, by defining the properties of the syntactic system at the computational level, no claims are made about the neurobiological implementation of that system. The autonomy thesis therefore does not predict that there is an area in the brain that is uniquely responsive to syntax. Recent neuroimaging studies have shown that syntactic combinatorics are subserved by (the interaction between) specific regions in the left inferior frontal and posterior temporal lobe, perhaps partially segregated from semantics (Pallier et al., 2011; Goucha and Friederici, 2015; Zaccarella and Friederici, 2015; Zaccarella et al., 2017a; Campbell and Tyler, 2018; Zhu et al., 2022). It is important to realize, however, that if such a neural syntax-semantics dissociation were not observed, the autonomy thesis would not have been falsified (see Mehler et al., 1984; Poeppel and Embick, 2005).

That the autonomy of syntax is a computational-level claim without straightforward implications for neurobiology sometimes appears to be misunderstood. For instance, Zhu and colleagues state that the autonomy (or modularity) of syntax as a computational system is challenged by recent observations that syntactic and semantic processing both activate a frontal-temporal network in the brain, and that none of the areas involved is specific for syntax or semantics (Zhu et al., 2022). Similarly, Fedorenko and colleagues have shown that all brain areas that are responsive to syntax are also responsive to words, which they claim to be inconsistent with the idea that syntactic computations are abstract and insensitive to the nature of the units being combined (Fedorenko et al., 2012, 2020; Blank et al., 2016). The idea is that the absence of a neurobiological dissociation between syntax and (lexico-)semantics in the language network suggests that there is no cognitive or functional segregation between syntax and (lexico-)semantics. Besides being challenged on empirical grounds (see e.g., the lesion data in Matchin et al., 2022), this argument is also inferentially problematic. First, it presumes that stimuli in experiments can successfully segregate syntax and semantics, such that the linguistic input to participants contains only syntax. However, this is never the case, as syntactic features clearly cannot be presented in isolation (see also Moro, 2015; Matchin, 2023). Rather, if overtly present, they are always embedded in the morphological structure of words (e.g., agreement), in the sequential structures of phrases and sentences (e.g., word order, displacement), or they are simply properties of the words themselves (e.g., word category). In other words, syntactic features can be computationally autonomous even if they are physically realized in the non-syntactic information that makes up an utterance. It is therefore not surprising that brain areas responsive to syntax are also sensitive to words. Second, the autonomy of a computational system does not necessarily imply the segregation of its neurobiological implementation. Absence of a dissociation in the brain is entirely compatible with abstract, autonomous computations.

3.3. Abstract units of representation

Syntactic generalizations are commonly defined over abstract structures, such as noun phrases (NPs) and verb phrases (VPs), or even just phrases (XPs). For a computational-level theory, this abstract level of description is necessary to reveal deep syntactic principles that amalgamate superficially disparate phenomena. To give an example, in classical X-bar theory it was proposed that all phrases (NPs, VPs, etc.) have the same asymmetric hierarchical format, in which the head of the phrase and its complement form a unit, which then combines with a specifier to form the phrasal unit (Chomsky, 1970). Only by stating this property in abstract terms, roughly corresponding to the bracket notation [XP YP [XP X ZP]] (XP, in short), is it possible to define an overarching generalization. To the extent that it captures empirical observations (e.g., about distributional patterns within and across languages), the X-bar generalization is explanatorily valuable for the computational-level theory. During language processing, however, syntactic information never appears in isolation: in externalized language, phrases are lexicalized entities, and they must be processed as such (see Section 3.2). Though the brain has to recognize that sequences like "very fond of syntax" and "totally understand the argument" are, at some level, structurally the same and therefore subject to the same restrictions, this does not mean that they are mentally or neurally represented as fully abstract XPs. Instead, they could be represented as phrase structures whose lexical and semantic information is retained, and in which syntactic information is realized through features carried by the specific lexical items. Indeed, this type of representation is consistent with the results of neuroimaging studies that have been taken to support a constructionist view of grammatical knowledge. That is, studies have found that certain syntactic constructions are neurally distinguishable by virtue of their semantic content, which would be in line with the view that these constructions are represented as pairings between form and meaning (Allen et al., 2012; Pulvermüller et al., 2013; van Dam and Desai, 2016; Gonering and Corina, 2023). However, these findings are equally compatible with linguistic theories that postulate syntactic generalizations over abstract structures devoid of meaning. To appreciate this point, consider, as an analogy, the interpretation of structural priming effects in the psycholinguistic literature. It is well-known that structural priming effects are sensitive to lexical overlap between the prime and the target (Branigan and Pickering, 2017). This is expected on the view that phrases are mentally represented and processed in the form of lexicalized structures rather than fully abstract templates (algorithmic level), but it does not mean that the abstract syntactic generalization, in which phrase structures are underlyingly identical, is empirically incorrect (computational level).
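
For illustration (a sketch with an arbitrary tuple encoding of trees, and a bar label for the inner node of the schema), the following snippet instantiates one X-bar template with the two phrases mentioned above, showing that they share a single structural format while differing completely in lexical content:

```python
# One abstract X-bar template, two phrases with different lexical content.

def xbar(spec, head, comp, label="XP"):
    """[XP Spec [X' X Comp]]: head and complement form a unit, which
    combines with a specifier to form the phrase."""
    return (label, spec, (label + "'", head, comp))

# "very fond of syntax": adjectival head with a PP complement
ap = xbar("very", "fond", ("PP", "of", "syntax"), label="AP")
# "totally understand the argument": verbal head with a DP complement
vp = xbar("totally", "understand", ("DP", "the", "argument"), label="VP")

def shape(tree):
    """Erase labels and words, keeping only the structural skeleton."""
    return tuple(shape(t) if isinstance(t, tuple) else "*" for t in tree)

# Same asymmetric format at the computational level ...
assert shape(ap) == shape(vp)
# ... but nothing forces the processor to represent them as bare XPs:
# the lexical material is still there in ap and vp.
```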

Similarly, syntactic operations are commonly defined over categorial types. During language processing, however, they necessarily apply to tokens that are instantiations of those types. It is therefore possible that, in neurobiological terms, combining "the" and "cat" is different from combining "a" and "dog", even though in computational terms, both involve the composition of a noun phrase. Both combinatorial operations are constrained by the fact that determiners combine with nouns, and that this operation is hierarchical, binary, and compositional. On this view, the abstractness of the combinatorial operation lies not in its symbolic realization, but in the fact that the constraints on the operation are independent of the specific lexical items to which it applies. Note that this does not deny the possibility that averaging over a sufficiently large number of instances of minimal combinations between, for example, determiner and noun, adjective and noun, or pronoun and verb will yield a reasonably specific neural activation pattern initially suggestive of abstract syntactic combinatorics (e.g., Goucha and Friederici, 2015; Zaccarella and Friederici, 2015; Zaccarella et al., 2017a; Segaert et al., 2018; Matar et al., 2021; Murphy et al., 2022). Rather, it indicates that the underlying neural populations are responsible for the compositional combinations of specific lexical items (instead of abstract variables), and that these compositions are constrained by the syntactic properties of those lexical items. We speculate that such context dependence might be the reason that it has proven difficult to isolate syntactic combinatorics in neural data—that is, because syntax is to be found in the constraints on the combinatorial operations, not in the operations themselves (see also Pylkkänen, 2019; Baggio, 2020).
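
The type-token distinction at issue here can be made explicit in a small sketch (the category labels and feature checks are simplified for illustration): the combinatorial operation applies to specific lexical tokens, but the constraint that licenses it mentions only their syntactic categories:

```python
# Combining "the"+"cat" and "a"+"dog" are distinct token-level events,
# licensed by one and the same category-level constraint.
from dataclasses import dataclass

@dataclass
class Word:
    form: str       # the specific lexical item (token-level information)
    category: str   # its syntactic category (type-level information)

def merge_dp(det: Word, noun: Word) -> tuple:
    """Binary, hierarchical composition of a determiner phrase.

    The licensing condition mentions only categories, never the particular
    words: that is where the abstractness of the operation resides here.
    """
    if (det.category, noun.category) != ("Det", "N"):
        raise ValueError("merge not licensed for these categories")
    # The output retains the lexical material; it is not an abstract XP.
    return ("DP", det, noun)

the_cat = merge_dp(Word("the", "Det"), Word("cat", "N"))
a_dog = merge_dp(Word("a", "Det"), Word("dog", "N"))
# Different tokens, possibly different neural realizations, same constraint:
assert the_cat[0] == a_dog[0] == "DP"
```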

4. Toward a multi-level theory of syntax in the brain

To integrate computational-level descriptions provided by linguistics with implementational-level theories in neuroscience, Marr (1982) suggested an intermediate level of description which specifies how the processing system can solve its computational problems—the domain of psycholinguistics. In general, mapping linguistic theories to psycholinguistic models is non-trivial, because the computational (grammatical) analysis alone underdetermines the possible (parsing) algorithms. However, the principles of computational-level theories do act as boundary conditions for models at the algorithmic level, so they should constrain our theories of language processing (Gallistel and King, 2009). That is, algorithmic theories of syntactic processing must be such that they respect the grammatical constraints defined at the computational level of syntactic competence, including constraints on representations and constraints on computations. As an example of the former, it is well-known that the semantic interpretation of phrases and sentences is derived from hierarchical structure. The implication of this result for language processing is that we can regard as deficient those algorithmic models that are unable to derive structure-dependent meanings (Coopmans et al., 2022). Regarding computations, it has been argued that structure-dependent syntactic operations, like long-distance displacement, obey locality constraints. Using this computational-level result as a boundary condition on algorithmic theories, we can conclude that models of the comprehension of filler-gap dependencies that posit gaps in island configurations are inadequate (Phillips, 2006; Chesi, 2015). Roughly corresponding to these two types of constraints (on computations and representations, respectively), we envision two ways in which the different levels of analysis could become more strongly connected in an integrated, multi-level theory of syntax in the brain.

One way to integrate linguistics and psycho-/neurolinguistics is to devise computationally explicit linking hypotheses between their levels of analysis. This is quite challenging, not only because the “parts lists” of linguistics and neuroscience are ontologically incommensurable (Poeppel and Embick, 2005; Poeppel, 2012), but also because the notion of competence is usually not formulated in a way that aligns with the requirements of real-time processing. Syntactic competence is often described as a static body of knowledge. While sentence structures are derived procedurally, and the derivations are logically ordered, the entire derivational analysis is atemporal. Hierarchical syntactic structures are commonly derived bottom-to-top, starting with the most-embedded element in the structure. Syntactic processing, instead, does take place in time, starting with the first element in the sentence regardless of its position in the hierarchical structure. To illustrate the apparent misalignment, consider a phrase like “eat the cookies”, which is derived by first combining “the” and “cookies”, and then combining “eat” with the phrase “the cookies”. The claim that “the” and “cookies” are combined first should not be interpreted temporally; it is not inconsistent with the incremental interpretation of “eat the cookies”. Thus, the logical order of syntactic derivations bears no relation to the temporal order of processing. As processing must take place in time (and derivations need not necessarily be bottom-to-top, see Phillips and Lewis, 2013 and Chesi, 2015), one way to resolve the tension would be to reformulate competence into algorithmic procedures that can be applied incrementally, in a roughly left-to-right order (Phillips, 2003; Poeppel and Embick, 2005; Sprouse and Hornstein, 2016). In this way, competence directly interacts with performance, in the sense that the former constrains the algorithmic steps that are applied. And beyond facilitating the mapping between competence and performance, there are empirical benefits of this approach as well (e.g., it can explain conflicting outcomes of certain constituency tests; Phillips, 2003).
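
The contrast between the two orders can be schematized as follows (an illustration only; the placeholder mechanism is a crude stand-in for real left-to-right parsing). Both routes terminate in the same structure for "eat the cookies", but only the second unfolds in the temporal order of the input:

```python
# Logical order of a bottom-to-top derivation vs. temporal order of
# incremental processing, for the phrase "eat the cookies".

def bottom_up_derivation():
    """Logical (atemporal) order: combine 'the'+'cookies' first."""
    dp = ("DP", "the", "cookies")   # step 1: most-embedded unit first
    vp = ("VP", "eat", dp)          # step 2: verb combines with the phrase
    return vp

def incremental_derivation(words):
    """Temporal order: integrate words left to right, as a listener must.

    'eat' is integrated before the DP it combines with even exists; the
    partial structure is extended as each new word arrives.
    """
    structure = None
    for word in words:              # "eat", then "the", then "cookies"
        if structure is None:
            structure = ("VP", word, "?DP")        # predict an object
        elif structure[2] == "?DP":
            structure = ("VP", structure[1], ("DP", word, "?N"))
        else:
            vp, verb, (dp, det, _) = structure
            structure = (vp, verb, (dp, det, word))
    return structure

# Different orders of operations, same licensed structure in the end.
assert bottom_up_derivation() == incremental_derivation(["eat", "the", "cookies"])
```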

An alternative strategy for linking levels of analysis does not involve reformulating syntactic competence but involves using the structures computed at the computational level as the ultimate goal of structure-building algorithms. In this view, syntactic derivations remain atemporal, so derivational theories of syntax should not be interpreted as theories of how mental representations are actually derived in the mind of a speaker-listener. Rather, they merely describe, in computational-level terms, the logical properties that the syntactic system must have, including constraints on the form of syntactic representations. Algorithmic theories of syntactic structure building then build only those (partial) structures that are licensed by the competence theory (in whatever way works, as there are no constraints on computations), but they do not proceed in the way dictated by the derivational analysis (see also Neeleman and van de Koot, 2010; this approach aligns with the notion of "weak competence" in Baggio, 2020). As such, there is no need for alignment between the temporal order of events in psycholinguistics and the logical order of events in syntax. The competence theory contains an abstract description of what the performance model does, but stays silent on how it does it.

In either case, the output of algorithmic procedures must be mapped onto neural data through implementational linking hypotheses. In neuroscience work adopting naturalistic paradigms, it is common to use surprisal or node count as the linking hypothesis (Brennan, 2016; Hale et al., 2022), but neither metric reflects an algorithmic operation (Stanojević et al., 2021; Coopmans, 2023). One type of approach that we find more promising involves Combinatory Categorial Grammars (CCGs), which have the right level of grammatical expressivity to model natural language syntax (i.e., slightly beyond context-free power). Moreover, as CCGs have flexible constituency, they afford multiple ways of algorithmically deriving structures for the same sentence (Steedman, 2000). Each of these derivations has the same compositional semantic interpretation, which is assigned and updated incrementally, making this model suited for modeling language processing. Indeed, the use of CCGs is promising on both predictive and explanatory measures of empirical success. In terms of prediction, recent naturalistic fMRI studies have shown that complexity metrics directly derived from CCG derivations improve predictive accuracy in regions of the language network above and beyond predictors derived from context-free phrase structure (Stanojević et al., 2021, 2023). With respect to explanation, a clear benefit of this algorithm-centered approach is that it yields explicit theories about the computations that must be implemented in the identified brain regions. It commonly still relies on localization of functions, but at least the functions are made explicit (Mehler et al., 1984; Poeppel, 2012; Martin, 2020; Westlin et al., 2023).
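
To convey the flavor of such derivations, the following hand-rolled sketch uses simplified categories (a real CCG assigns a transitive verb the category (S\NP)/NP, and actual parsers are far richer). With forward composition, "eat the" becomes an interim constituent, so the phrase can be derived strictly left to right while remaining a single constituent at every point, something a rigid VP -> V DP phrase-structure rule does not allow:

```python
# Minimal CCG-style combinators. A category 'VP/DP' seeks a DP to its right.
LEX = {"eat": "VP/DP", "the": "DP/N", "cookies": "N"}

def forward_apply(left, right):
    """X/Y  Y  =>  X   (forward application, >)"""
    result, _, arg = left.partition("/")
    return result if arg == right else None

def forward_compose(left, right):
    """X/Y  Y/Z  =>  X/Z   (forward composition, >B)"""
    x, _, y1 = left.partition("/")
    y2, _, z = right.partition("/")
    return f"{x}/{z}" if y1 == y2 and z else None

def parse_incremental(words):
    """Combine each new word with the single category built so far."""
    cat = LEX[words[0]]
    for w in words[1:]:
        cat = forward_compose(cat, LEX[w]) or forward_apply(cat, LEX[w])
    return cat

# "eat" (VP/DP) composes with "the" (DP/N) into the interim constituent
# "eat the" (VP/N), which then applies to "cookies" (N), yielding a VP.
assert parse_incremental(["eat", "the", "cookies"]) == "VP"
```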

To summarize, the goal of this paper was to clarify the interpretation of three key ideas in linguistics (competence vs. performance, autonomy of syntax, and the nature of syntactic representations), in order to advance the integration of linguistic theory with the neurobiology of language. Taking a levels-of-analysis perspective, we suggest that one source of misunderstanding about these ideas is that they are interpreted at the wrong level of analysis. A multi-level approach to syntax, in which different concepts are explicitly defined and interpreted at the appropriate level, can give rise to a fruitful exploration of the linking hypotheses across levels, at the interface between linguistics and neuroscience.

Data availability statement

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.

Author contributions

Conceptualization, writing—original draft, and writing—review and editing: CC and EZ. All authors contributed to the article and approved the submitted version.

Funding

CC was supported by the Netherlands Organization for Scientific Research grant 016.Vidi.188.029, awarded to CC's supervisor Andrea E. Martin.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Adger, D. (2018). "The autonomy of syntax," in Syntactic Structures after 60 Years: The Impact of the Chomskyan Revolution in Linguistics, eds. N. Hornstein, H. Lasnik, P. Patel-Grosz, and C. Yang. The Hague: De Gruyter Mouton, 153–175.

Adger, D. (2022). What are linguistic representations? Mind Lang. 37, 248–260. doi: 10.1111/mila.12407

Allen, K., Pereira, F., Botvinick, M., and Goldberg, A. E. (2012). Distinguishing grammatical constructions with fMRI pattern analysis. Brain Lang. 123, 174–182. doi: 10.1016/j.bandl.2012.08.005

Baggio, G. (2020). “Epistemic transfer between linguistics and neuroscience: problems and prospects,” in The Philosophy and Science of Language: Interdisciplinary Perspectives, eds. R. M. Nefdt, C. Klippi, and B. Karstens. Cham: Springer International Publishing, 275–308.

Blank, I., Balewski, Z., Mahowald, K., and Fedorenko, E. (2016). Syntactic processing is distributed across the language system. Neuroimage 127, 307–323. doi: 10.1016/j.neuroimage.2015.11.069

Branigan, H. P., and Pickering, M. J. (2017). An experimental approach to linguistic representation. Behav. Brain Sci. 40, e282. doi: 10.1017/S0140525X16002028

Brennan, J. (2016). Naturalistic sentence comprehension in the brain. Lang. Linguist. Compass 10, 299–313. doi: 10.1111/lnc3.12198

Campbell, K. L., and Tyler, L. K. (2018). Language-related domain-specific and domain-general systems in the human brain. Curr. Opin. Behav. Sci. 21, 132–137. doi: 10.1016/j.cobeha.2018.04.008

Chesi, C. (2015). On directionality of phrase structure building. J. Psycholinguist. Res. 44, 65–89. doi: 10.1007/s10936-014-9330-6

Chomsky, N. (1957). Syntactic Structures. Berlin: Mouton de Gruyter.

Chomsky, N. (1965). Aspects of the Theory of Syntax. Cambridge: MIT Press.

Chomsky, N. (1970). “Remarks on nominalization,” in Readings in English Transformational Grammar, eds. R. Jacobs, and P. Rosenbaum. Boston: Ginn, 184–221.

Coopmans, C. W. (2023). Triangles in the Brain: The Role of Hierarchical Structure in Language Use (PhD Thesis). Nijmegen: Radboud University Nijmegen.

Coopmans, C. W., de Hoop, H., Kaushik, K., Hagoort, P., and Martin, A. E. (2022). Hierarchy in language interpretation: Evidence from behavioural experiments and computational modelling. Lang. Cogn. Neurosci. 37, 420–439. doi: 10.1080/23273798.2021.1980595

Embick, D., and Poeppel, D. (2015). Towards a computational(ist) neurobiology of language: Correlational, integrated and explanatory neurolinguistics. Lang. Cogn. Neurosci. 30, 357–366. doi: 10.1080/23273798.2014.980750

Fedorenko, E., Blank, I. A., Siegelman, M., and Mineroff, Z. (2020). Lack of selectivity for syntax relative to word meanings throughout the language network. Cognition 203, 104348. doi: 10.1016/j.cognition.2020.104348

Fedorenko, E., Nieto-Castañon, A., and Kanwisher, N. (2012). Lexical and syntactic representations in the brain: An fMRI investigation with multi-voxel pattern analyses. Neuropsychologia 50, 499–513. doi: 10.1016/j.neuropsychologia.2011.09.014

Francken, J. C., Slors, M., and Craver, C. F. (2022). Cognitive ontology and the search for neural mechanisms: Three foundational problems. Synthese 200, 378. doi: 10.1007/s11229-022-03701-2

Gallistel, C. R., and King, A. P. (2009). Memory and the Computational Brain. Hoboken, NJ: Wiley-Blackwell.

Gonering, B., and Corina, D. P. (2023). The neurofunctional network of syntactic processing: Cognitive systematicity and representational specializations of objects, actions, and events. Front. Lang. Sci. 2, 1176233. doi: 10.3389/flang.2023.1176233

Goucha, T., and Friederici, A. D. (2015). The language skeleton after dissecting meaning: a functional segregation within Broca's Area. Neuroimage 114, 294–302. doi: 10.1016/j.neuroimage.2015.04.011

Grimaldi, M. (2012). Toward a neural theory of language: Old issues and new perspectives. J. Neurolinguistics 25, 304–327. doi: 10.1016/j.jneuroling.2011.12.002

Hale, J. T., Campanelli, L., Li, J., Bhattasali, S., Pallier, C., and Brennan, J. R. (2022). Neurocomputational models of language processing. Annu. Rev. Linguist. 8, 427–446. doi: 10.1146/annurev-linguistics-051421-020803

Hornstein, N. (2015). "The rationalism of Generative Grammar," in MIT Working Papers in Linguistics, eds. Á. J. Gallego, and D. Ott, 147–157.

Johnson, M. (2017). Marr's levels and the minimalist program. Psychon. Bull. Rev. 24, 171–174. doi: 10.3758/s13423-016-1062-1

Marantz, A. (2005). Generative linguistics within the cognitive neuroscience of language. Linguist. Rev. 22, 429–445. doi: 10.1515/tlir.2005.22.2-4.429

Marr, D. (1982). Vision. New York: W. H. Freeman and Co.

Martin, A. E. (2020). A compositional neural architecture for language. J. Cogn. Neurosci. 32, 1407–1427. doi: 10.1162/jocn_a_01552

Matar, S., Dirani, J., Marantz, A., and Pylkkänen, L. (2021). Left posterior temporal cortex is sensitive to syntax within conceptually matched Arabic expressions. Sci. Rep. 11, 7181. doi: 10.1038/s41598-021-86474-x

Matchin, W. (2023). Lexico-semantics obscures lexical syntax. Front. Lang. Sci. 2, 1217837. doi: 10.3389/flang.2023.1217837

Matchin, W., Basilakos, A., den Ouden, D.-B., Stark, B. C., Hickok, G., and Fridriksson, J. (2022). Functional differentiation in the language network revealed by lesion-symptom mapping. Neuroimage 247, 118778. doi: 10.1016/j.neuroimage.2021.118778

Mehler, J., Morton, J., and Jusczyk, P. W. (1984). On reducing language to biology. Cognit. Neuropsychol. 1, 83–116. doi: 10.1080/02643298408252017

Moro, A. (2015). The Boundaries of Babel: The Brain and the Enigma of Impossible Languages. Cambridge, MA: MIT Press.

Murphy, E., Woolnough, O., Rollo, P. S., Roccaforte, Z. J., Segaert, K., Hagoort, P., et al. (2022). Minimal phrase composition revealed by intracranial recordings. J. Neurosci. 42, 3216–3227. doi: 10.1523/JNEUROSCI.1575-21.2022

Neeleman, A., and van de Koot, H. (2010). “Theoretical validity and psychological reality of the grammatical code,” in The Linguistics Enterprise: From Knowledge of Language to Knowledge in Linguistics, eds. M. B. H. Everaert, T. Lentz, H. N. M. De Mulder, Ø. Nilsen, and A. Zondervan. Amsterdam: John Benjamins Publishing Company, 183–212.

Pallier, C., Devauchelle, A.-D., and Dehaene, S. (2011). Cortical representation of the constituent structure of sentences. Proc. Nat. Acad. Sci. 108, 2522–2527. doi: 10.1073/pnas.1018711108

Phillips, C. (2003). Linear order and constituency. Lingu. Inquiry 34, 37–90. doi: 10.1162/002438903763255922

Phillips, C. (2006). The real-time status of island phenomena. Language 82, 795–823. doi: 10.1353/lan.2006.0217

Phillips, C., and Lewis, S. (2013). Derivational order in syntax: evidence and architectural consequences. Stud. Linguist. 6, 11–47.

Poeppel, D. (2012). The maps problem and the mapping problem: Two challenges for a cognitive neuroscience of speech and language. Cognit. Neuropsychol. 29, 34–55. doi: 10.1080/02643294.2012.710600

Poeppel, D., and Embick, D. (2005). “Defining the relation between linguistics and neuroscience,” in Twenty-First Century Psycholinguistics: Four Cornerstones, ed. A. Cutler. London: Routledge, 103–118.

Poeppel, D., Idsardi, W. J., and van Wassenhove, V. (2008). Speech perception at the interface of neurobiology and linguistics. Philos. Trans. R. Soc. Lond. B. Biol. Sci. 363, 1071–1086. doi: 10.1098/rstb.2007.2160

Pulvermüller, F., Cappelle, B., and Shtyrov, Y. (2013). “Brain basis of meaning, words, constructions, and grammar,” in The Oxford Handbook of Construction Grammar, eds. T. Hoffmann, and G. Trousdale. Oxford: Oxford University Press, 397–416.

Pylkkänen, L. (2019). The neural basis of combinatory syntax and semantics. Science 366, 62–66. doi: 10.1126/science.aax0050

Segaert, K., Mazaheri, A., and Hagoort, P. (2018). Binding language: structuring sentences through precisely timed oscillatory mechanisms. Eur. J. Neurosci. 48, 2651–2662. doi: 10.1111/ejn.13816

Sprouse, J., and Hornstein, N. (2016). “Syntax and the cognitive neuroscience of syntactic structure building,” in Neurobiology of Language, eds. G. Hickok, and S. L. Small. London: Elsevier, 165–174.

Sprouse, J., and Lau, E. F. (2013). “Syntax and the brain,” in The Cambridge Handbook of Generative Syntax, ed. M. Den Dikken. Cambridge: Cambridge University Press, 971–1005.

Stanojević, M., Bhattasali, S., Dunagan, D., Campanelli, L., Steedman, M., Brennan, J., et al. (2021). “Modeling incremental language comprehension in the brain with Combinatory Categorial Grammar,” in Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics (Association for Computational Linguistics), 23–38.

Stanojević, M., Brennan, J. R., Dunagan, D., Steedman, M., and Hale, J. T. (2023). Modeling structure-building in the brain with CCG parsing and large language models. Cogn. Sci. 47, e13312. doi: 10.1111/cogs.13312

Steedman, M. (2000). The Syntactic Process. Cambridge, MA: MIT Press. doi: 10.7551/mitpress/6591.001.0001

Trettenbrein, P. C., Papitto, G., Friederici, A. D., and Zaccarella, E. (2021). Functional neuroanatomy of language without speech: An ALE meta-analysis of sign language. Hum. Brain Mapp. 42, 699–712. doi: 10.1002/hbm.25254

van Dam, W. O., and Desai, R. H. (2016). The semantics of syntax: the grounding of transitive and intransitive constructions. J. Cogn. Neurosci. 28, 693–709. doi: 10.1162/jocn_a_00926

Walenski, M., Europa, E., Caplan, D., and Thompson, C. K. (2019). Neural networks for sentence comprehension and production: an ALE-based meta-analysis of neuroimaging studies. Hum. Brain Mapp. 40, 2275–2304. doi: 10.1002/hbm.24523

Westlin, C., Theriault, J. E., Katsumi, Y., Nieto-Castanon, A., Kucyi, A., Ruf, S. F., et al. (2023). Improving the study of brain-behavior relationships by revisiting basic assumptions. Trends Cogn. Sci. 27, 3. doi: 10.1016/j.tics.2022.12.015

Zaccarella, E., and Friederici, A. D. (2015). Merge in the human brain: a sub-region based functional investigation in the left pars opercularis. Front. Psychol. 6, 1818. doi: 10.3389/fpsyg.2015.01818

Zaccarella, E., Meyer, L., Makuuchi, M., and Friederici, A. D. (2017a). Building by syntax: The neural basis of minimal linguistic structures. Cerebral Cortex 27, 411–421.

Zaccarella, E., Schell, M., and Friederici, A. D. (2017b). Reviewing the functional basis of the syntactic Merge mechanism for language: a coordinate-based activation likelihood estimation meta-analysis. Neurosci. Biobehav. Rev. 80, 646–656. doi: 10.1016/j.neubiorev.2017.06.011

Zhu, Y., Xu, M., Lu, J., Hu, J., Kwok, V. P. Y., Zhou, Y., et al. (2022). Distinct spatiotemporal patterns of syntactic and semantic processing in human inferior frontal gyrus. Nat. Hum. Behav. 6, 1104–1111. doi: 10.1038/s41562-022-01334-6

Keywords: computation, algorithm, implementation, linking hypothesis, autonomy of syntax

Citation: Coopmans CW and Zaccarella E (2023) Three conceptual clarifications about syntax and the brain. Front. Lang. Sci. 2:1218123. doi: 10.3389/flang.2023.1218123

Received: 06 May 2023; Accepted: 13 October 2023;
Published: 01 November 2023.

Edited by:

Valery Solovyev, Kazan Federal University, Russia

Reviewed by:

Mirko Grimaldi, University of Salento, Italy

Copyright © 2023 Coopmans and Zaccarella. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Cas W. Coopmans, cas.coopmans@mpi.nl

ORCID: Cas W. Coopmans orcid.org/0000-0001-7622-3161
Emiliano Zaccarella orcid.org/0000-0002-5703-1778
