OPINION article

Front. Psychol., 28 November 2018
Sec. Psychology of Language

Merging Generative Linguistics and Psycholinguistics

Jordi Martorell*

  • Basque Center on Cognition, Brain and Language, Donostia-San Sebastián, Spain

Introduction

There has been ongoing debate about the connection between the theoretical postulates of generative linguistics (Chomsky, 1965, 1995) and the experimental research carried out in psycho-/neurolinguistics (see Poeppel and Embick, 2005; Embick and Poeppel, 2015). This cross-disciplinary relationship has been approached from noticeably distinct positions, including some views that take generative accounts as well-suited on their own for experimental investigation (e.g., Marantz, 2005; Sprouse and Hornstein, 2016), others that propose a reconsideration of certain generative assumptions about processing issues (e.g., Lewis and Phillips, 2015; Mancini, 2018), and others that openly advocate adopting alternative linguistic frameworks (e.g., Townsend and Bever, 2001; Ferreira, 2005; Jackendoff, 2007; Christiansen and Chater, 2017). Although real-time processing data are generally consistent with theoretical considerations (see Phillips and Wagers, 2007; Lewis and Phillips, 2015), it is noteworthy that experimental results from psycholinguistics are rarely incorporated into generative accounts (cf. Chomsky, 2005). This limited interaction between disciplines is motivated by the competence-performance distinction (Chomsky, 1965), also instantiated as the computational and algorithmic levels of analysis (Marr, 1982), with linguistics developing formal accounts of grammatical phenomena (i.e., competence/computation) independently from the psycholinguistic evidence about how they manifest in real-time processing (i.e., performance/algorithm).

One way to facilitate the cross-talk between linguistics and psycholinguistics is to assume a transparent computation-to-algorithm mapping in which these two levels represent distinct timescales of the same cognitive system: the computation denotes its offline properties and the algorithm its online execution (Lewis and Phillips, 2015). As a result, the algorithm might involve the processing-sensitive internal stages of the computation (Lewis and Phillips, 2015; see also Mancini, 2018). In this view, the two disciplines could directly inform each other to establish a unified computational-algorithmic account of language—a (psycho)linguistic theory (Embick and Poeppel, 2015)—where their respective insights provide mutual constraints for explaining fundamental aspects of the linguistic system. Accounts along these lines have recently been proposed to capture phenomena such as grammaticality illusions (see Lewis and Phillips, 2015) and agreement (Mancini, 2018). This opinion article aims to show how this cross-disciplinary approach can be effectively extended to a new area: the generation of syntactic structures.

Building Syntactic Structures in Real Time

A critical requirement for advancing the collaboration between generative linguistics and psycholinguistics is explaining how sentential representations are incrementally constructed in real time (i.e., roughly on a word-by-word basis). While the surface form of sentences comprises words ordered linearly, the sentence-level meaning is established by the abstract hierarchical structure arising from certain syntactic relations (see Everaert et al., 2015). Given that syntactic structure is not obvious from the external input, it must be generated internally, presumably in accordance with the rules and constraints of some sort of real-time structure-building mechanism. In this respect, any computational-algorithmic account of structure generation would require specifying the formal properties of such a mechanism and, in addition, determining how its time-dependent steps are executed during online sentence processing1.

In generative linguistics, this structure-building mechanism is formally instantiated by the computational operation Merge (Chomsky, 1995), which recursively selects and combines syntactic units to create new structures that are hierarchically organized (see also Chomsky, 2013)2. Importantly, Merge is strictly conceived within the competence domain as an offline computation without real-time implications. This is evidenced by the fact that Merge generates sentences from right to left, namely in the opposite direction to that required by language processing, and consequently remains uncommitted on how words are hierarchically combined in real time (see Chesi, 2012, 2015 for a detailed discussion). Phillips (1996) addresses precisely these timing concerns by proposing that Merge proceeds in reverse order (i.e., from left to right) during processing, thus providing a better characterization of Merge in computational-algorithmic terms (see also Phillips, 2003).
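
To make the contrast concrete, the following sketch is a purely illustrative rendering of Merge as a binary combination of syntactic objects, together with a rough left-to-right variant in the spirit of Phillips (1996, 2003); the function names, the use of ordered pairs instead of unordered sets, and the omission of any restructuring of earlier constituents are simplifying assumptions of the example, not part of the formal definition.

```python
# Illustrative sketch only: Merge as binary combination of syntactic objects
# into hierarchically nested structures (ordered pairs stand in for the
# unordered sets of the formal definition).

def merge(a, b):
    """Combine two syntactic objects into a new, hierarchically nested object."""
    return (a, b)

# Bottom-up (right-to-left) derivation of "children ate cookies":
# the verb first combines with its object, then the subject combines with that phrase.
vp = merge("ate", "cookies")          # ('ate', 'cookies')
sentence = merge("children", vp)      # ('children', ('ate', 'cookies'))

# Rough left-to-right approximation: each incoming word is combined with the
# structure built so far; how earlier constituency would be restructured as
# new material arrives (Phillips, 1996, 2003) is not modeled here.
def merge_left_to_right(words):
    structure = words[0]
    for word in words[1:]:
        structure = merge(structure, word)
    return structure

print(sentence)                                             # ('children', ('ate', 'cookies'))
print(merge_left_to_right(["children", "ate", "cookies"]))  # (('children', 'ate'), 'cookies')
```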

Interestingly, recent neurolinguistic research has shown that neural oscillations exhibit subtle sensitivity to abstract syntactic structure, either tracking the hierarchical levels of speech sequences without prosodic cues (Ding et al., 2016) or showing spectro-temporal modulations driven by underlying hierarchical structure (Nelson et al., 2017). This reflection of hierarchical structure in oscillatory patterns suggests that the structure-building mechanism could incrementally group words together in a Merge-like manner during sentence comprehension (cf. Frank and Christiansen, 2018). Note that both Nelson et al.'s and Ding et al.'s results seem to index the bottom-up integration of sensory information after its presentation (i.e., post-stimulus evoked activity, see Ding and Simon, 2014; Ding et al., 2017). Likewise, Merge is currently formalized as initially selecting and subsequently combining syntactic units (see Chomsky, 2013; Murphy, 2015) and, in processing terms, this would imply the integration (i.e., selection and combination) of syntactic information that has already occurred as external input. Thus, in line with the proposal that Merge could be operationalized in real time (Phillips, 1996, 2003), the reported oscillatory patterns might reflect the output of a Merge-like mechanism that integrates chunks of bottom-up information quickly and in a hierarchically-structured way.

However, sentence processing is not limited to bottom-up integrative analyses. Indeed, recent proposals and findings have underscored the role of top-down predictive processes (Altmann and Mirković, 2009; DeLong et al., 2014; Staub, 2015; Kuperberg and Jaeger, 2016; cf. Huettig and Mani, 2016). In other words, besides the incremental integration of bottom-up information from the sensory input, sentence processing also involves top-down expectations regarding upcoming information before its actual occurrence, which may subsequently manifest as facilitated integration of the predicted material. This raises the question of whether the incremental Merge-like mechanism that generates syntactic structures also incorporates a predictive component, with top-down syntactic predictions preceding and also modulating the bottom-up processing of sentences.

A Predictive Structure-Building Mechanism?

As noted, the strictly integrative formalization of Merge limits its suitability for properly capturing how syntactic structures could be built predictively. Thus, computational-algorithmic adequacy requires that the Merge-like mechanism involve two computational substeps: a top-down stage for predicting syntactic structure and a bottom-up stage for syntactically integrating the actual input (cf. Chesi, 2012, 2015 for an alternative predictive account). Note that the latter (but not the former) stage is reflected in the left-to-right Merge proposed by Phillips (1996, 2003). Moreover, regarding which specific type of syntactic information is preactivated, one possibility is that syntactic prediction takes place probabilistically for a broad range of syntactic units (Levy, 2008). However, since sentences can be extended indefinitely with optional material, a more parsimonious strategy might be to avoid predicting optional syntactic information (i.e., adjuncts), as in the comprehension model proposed by Hale (2011). Therefore, a plausible hypothesis could be that, under normal circumstances, top-down syntactic expectations mainly concern the core structural elements required in sentences, namely verbs and their arguments (e.g., subjects and objects), while optional information such as adjuncts (e.g., adverbs) primarily involves bottom-up processing without prior activation. For instance, in the sentence “children ate cookies yesterday,” the processing of the subject “children” would preactivate the syntactic position of the mandatory sentence-level verb (e.g., “ate”). Since the processed verb is transitive, it would trigger the prediction of its syntactically-required object (e.g., “cookies”) but not of any optional adverb (e.g., “yesterday”)3.
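
This division of labor can be made concrete with a toy sketch: a hypothetical lexicon (invented for the example) marks verbs as (in)transitive, top-down predictions are generated only for obligatory positions (the verb and its required arguments), and adjuncts are handled purely bottom-up. This is a minimal illustration of the hypothesis, not an implemented parsing model.

```python
# Toy sketch of the hypothesized two-stage mechanism: top-down prediction of
# obligatory syntactic positions plus bottom-up integration of the input.
# The lexicon, category labels, and control logic are invented for illustration.

LEXICON = {
    "children":  {"category": "N"},
    "ate":       {"category": "V", "transitive": True},
    "slept":     {"category": "V", "transitive": False},
    "cookies":   {"category": "N"},
    "yesterday": {"category": "Adv"},   # adjunct: never predicted top-down
}

def process(sentence):
    predicted = []        # top-down expectations for obligatory positions
    verb_seen = False
    for word in sentence.split():
        entry = LEXICON[word]
        category = entry["category"]

        # Bottom-up integration: does the word fill a predicted position?
        if category in predicted:
            predicted.remove(category)
            status = "fills a predicted position (facilitated integration)"
        else:
            status = "integrated bottom-up, no prior prediction"
        print(f"{word:10} [{category:3}] {status}")

        # Top-down prediction: only obligatory structure is preactivated.
        if category == "N" and not verb_seen:
            predicted.append("V")        # a subject predicts the mandatory verb
        elif category == "V":
            verb_seen = True
            if entry.get("transitive"):
                predicted.append("N")    # a transitive verb predicts its object
        # Adjuncts (e.g., "yesterday") trigger no predictions.

process("children ate cookies yesterday")
# The verb and its object are predicted before they appear;
# the adverb is only integrated bottom-up.
```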

Crucially, this hypothesis is consistent with behavioral evidence, mostly from eye-tracking experiments. First, in verb-final structures in languages such as Japanese or German, syntactic violations associated with arguments show reading disruptions before the verb appears, indicating that some type of verb-related structural information is preactivated (see Phillips and Lau, 2004). In addition, the presence of arguments (relative to adjuncts) results in facilitated integration of clause-final verbs (Levy and Keller, 2013), which suggests that mandatory information strongly contributes to the prediction of syntactic units such as verbs (although this facilitation might be lexical in nature, see Husain et al., 2014). Second, using the visual-world paradigm, anticipatory eye movements toward images referring to object nouns are selectively triggered in specific syntactic contexts (Kamide et al., 2003), and are modulated by lexico-syntactic factors such as verb (in)transitivity (Arai and Keller, 2013), thereby providing compelling evidence for the prior activation of these obligatory syntactic units under particular structural conditions. Furthermore, syntactic-category violations (such as subject/object nouns instead of verbs and vice versa) presented in the parafovea seem to reduce skipping rates (Brothers and Traxler, 2016; Snell et al., 2017), suggesting that syntactic contexts generate expectations for upcoming words with such syntactic specifications before they are fixated.

Interestingly, previous reading findings could be reinterpreted in terms of the current hypothesis that syntactic prediction is largely confined to verbs and their syntactically-required elements. Specifically, Staub (2007) found that a postverbal noun like “the vet” produced an early reading disruption when it could be the subject of the upcoming main clause, i.e., after an intransitive verb (“When the dog arrived the vet …”), relative to being the object of the embedded clause, i.e., after a transitive verb (“When the dog scratched the vet …”). While Staub (2007) attributed this reading slowdown to a general processing cost for a noun starting a main clause, such effects could reflect the combination of two predictive aspects: (i) a processing cost for an unexpected postverbal noun in an intransitive context (and, in turn, a processing advantage for the predicted object noun following a transitive verb), and (ii) increased processing demands for a subject generating the prediction for the upcoming main-clause verb4. Moreover, a predictive explanation along these lines also seems to account for the observation that arguments lead to faster reading times compared to adjuncts (see Tutunjian and Boland, 2008), reflecting facilitated integration for expected arguments. Therefore, converging evidence from psycholinguistic experiments suggests that the Merge-like mechanism should indeed include a predictive component that selectively generates expectations for structurally-required elements.

Conclusion

In summary, previous behavioral evidence seems to support the possibility that syntactic prediction is a significant property of the Merge-like structure-building mechanism and, more concretely, the hypothesis that such expectations are primarily generated for core syntactic units (i.e., verbs and their arguments). However, more research is needed to narrow down the computational-algorithmic explanation of predictive structure generation, which should be extendable to a broader variety of structural contexts (see Gibson, 2006; Staub and Clifton, 2006; Linzen and Jaeger, 2015; Omaki et al., 2015) and typologically different languages (e.g., head-final). For example, do top-down predictions underlie the projection of single syntactic nodes (as shown in the example sentence above) or the retrieval of structural templates (as in alternative proposals, e.g., Townsend and Bever, 2001; Jackendoff, 2007; Bever and Poeppel, 2010)? Also, what is the representational format of the predicted content: high-level abstract properties of syntactic types like “verb” (as assumed here) and/or low-level sensory (e.g., orthographic) forms associated with syntactic categories (e.g., Dikker et al., 2010; Farmer et al., 2015)? Moreover, regarding neural implementation, the details of a predictive structure-building mechanism should be explicitly linked to the oscillatory dynamics of language comprehension and production (Lewis and Bastiaansen, 2015; Molinaro et al., 2016; Meyer, 2017; Segaert et al., 2018). In line with the (psycho)linguistic theory explored here, reciprocal contributions from theoretical and experimental (both behavioral and neurophysiological) work are fundamental to address such issues and shed further light on the multilevel nature of syntactic structures.

In conclusion, the above argumentation about syntactic structure generation shows that, assuming a transparent mapping between computation and algorithm, psycholinguistic findings on predictive processing can be effectively combined with formalizations of computational operations such as Merge from generative linguistics to refine our understanding of the structure-building mechanism. Therefore, such a cross-disciplinary approach constitutes a promising strategy toward a comprehensive multilevel theory of language integrated within cognitive neuroscience.

Author Contributions

The author confirms being the sole contributor of this work and has approved it for publication.

Funding

The author acknowledges financial support from the Spanish Ministry of Economy and Competitiveness through the Severo Ochoa Programme for Centres/Units of Excellence in R&D (SEV-2015-490).

Conflict of Interest Statement

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The reviewer CC and handling Editor declared their shared affiliation.

Acknowledgments

I thank Simona Mancini for insightful remarks on earlier versions of this article, and the reviewer for constructive comments and suggestions. Thanks also to Iria de Dios-Flores, Nicola Molinaro, Magda Altman, and Arella Gussow for helpful discussion and comments.

Footnotes

1. ^This article focuses on sentence comprehension, but the same concerns apply to language production (see Chesi, 2012; Momma and Phillips, 2018).

2. ^Other frameworks hypothesize alternative operations with different formal characteristics and processing implications (e.g., Unification, Jackendoff, 2007). Due to space limitations, such alternatives cannot be explored here (see Phillips and Lau, 2004, for discussion).

3. ^As a reviewer points out, this distinction between arguments and adjuncts could be confounded by co-occurrence factors. Indeed, empirical research (e.g., corpus-based probabilistic measures, see Hale, 2016) will be needed to assess such confounds.

4. ^These two effects would correspond to surprisal (Levy, 2008) and entropy reduction (Hale, 2003, 2011), respectively, under corpus-based probabilistic accounts of syntax (see Hale, 2016).
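
For concreteness, the toy calculation below illustrates these two metrics; the function names and all probabilities are invented for the example and are not corpus-based estimates.

```python
import math

# Toy illustration of surprisal (Levy, 2008) and entropy reduction
# (Hale, 2003, 2011); all probabilities are invented, not corpus estimates.

def surprisal(p_word_given_context):
    """Surprisal in bits of a word given its context: -log2 P(word | context)."""
    return -math.log2(p_word_given_context)

def entropy(distribution):
    """Shannon entropy (bits) over the candidate structural analyses."""
    return -sum(p * math.log2(p) for p in distribution.values() if p > 0)

def entropy_reduction(before, after):
    """Uncertainty about the analysis removed by processing a word."""
    return max(0.0, entropy(before) - entropy(after))

# (i) A postverbal noun is less expected after an intransitive verb, so its
#     surprisal is higher there than after a transitive verb.
print(surprisal(0.10))   # hypothetical P(noun | "When the dog arrived ...")
print(surprisal(0.60))   # hypothetical P(noun | "When the dog scratched ...")

# (ii) Encountering the main-clause subject sharpens the expectation for the
#      upcoming main-clause verb, reducing uncertainty over the analyses.
before = {"analysis_a": 0.4, "analysis_b": 0.4, "analysis_c": 0.2}
after  = {"analysis_a": 0.9, "analysis_b": 0.1}
print(entropy_reduction(before, after))
```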

References

Altmann, G. T., and Mirković, J. (2009). Incrementality and prediction in human sentence processing. Cogn. Sci. 33, 583–609. doi: 10.1111/j.1551-6709.2009.01022.x

Arai, M., and Keller, F. (2013). The use of verb-specific information for prediction in sentence processing. Lang. Cogn. Process. 28, 525–560. doi: 10.1080/01690965.2012.658072

Bever, T. G., and Poeppel, D. (2010). Analysis by synthesis: a (re-)emerging program of research for language and vision. Biolinguistics 4, 174–200. Available online at: https://www.biolinguistics.eu/index.php/biolinguistics/article/view/162

Brothers, T., and Traxler, M. J. (2016). Anticipating syntax during reading: evidence from the boundary change paradigm. J. Exp. Psychol. Learn. Mem. Cogn. 42, 1894–1906. doi: 10.1037/xlm0000257

Chesi, C. (2012). Competence and Computation: Toward a Processing Friendly Minimalist Grammar. Padova: UNIPRESS.

Chesi, C. (2015). On directionality of phrase structure building. J. Psycholinguist. Res. 44, 65–89. doi: 10.1007/s10936-014-9330-6

Chomsky, N. (1965). Aspects of the Theory of Syntax. Cambridge, MA: MIT Press.

Chomsky, N. (1995). The Minimalist Program. Cambridge, MA: MIT Press.

Chomsky, N. (2005). Three factors in language design. Linguist. Inquiry 36, 1–22. doi: 10.1162/0024389052993655

Chomsky, N. (2013). Problems of projection. Lingua 130, 33–49. doi: 10.1016/j.lingua.2012.12.003

Christiansen, M. H., and Chater, N. (2017). Towards an integrated science of language. Nat. Hum. Behav. 1, 1–3. doi: 10.1038/s41562-017-0163

DeLong, K. A., Troyer, M., and Kutas, M. (2014). Pre-processing in sentence comprehension: sensitivity to likely upcoming meaning and structure. Linguist. Lang. Compass 8, 631–645. doi: 10.1111/lnc3.12093

Dikker, S., Rabagliati, H., Farmer, T., and Pylkkänen, L. (2010). Early occipital sensitivity to syntactic category is based on form typicality. Psychol. Sci. 21, 629–634. doi: 10.1177/0956797610367751

Ding, N., Melloni, L., Yang, A., Wang, Y., Zhang, W., and Poeppel, D. (2017). Characterizing neural entrainment to hierarchical linguistic units using electroencephalography (EEG). Front. Hum. Neurosci. 11:481. doi: 10.3389/fnhum.2017.00481

Ding, N., Melloni, L., Zhang, H., Tian, X., and Poeppel, D. (2016). Cortical tracking of hierarchical linguistic structures in connected speech. Nat. Neurosci. 19, 158–164. doi: 10.1038/nn.4186

Ding, N., and Simon, J. Z. (2014). Cortical entrainment to continuous speech: functional roles and interpretations. Front. Hum. Neurosci. 8:311. doi: 10.3389/fnhum.2014.00311

Embick, D., and Poeppel, D. (2015). Towards a computational(ist) neurobiology of language: correlational, integrated and explanatory neurolinguistics. Lang. Cogn. Neurosci. 30, 357–366. doi: 10.1080/23273798.2014.980750

Everaert, M. B., Huybregts, M. A., Chomsky, N., Berwick, R. C., and Bolhuis, J. J. (2015). Structures, not strings: linguistics as part of the cognitive sciences. Trends Cogn. Sci. 19, 729–743. doi: 10.1016/j.tics.2015.09.008

Farmer, T. A., Yan, S., Bicknell, K., and Tanenhaus, M. K. (2015). Form-to-expectation matching effects on first-pass eye movement measures during reading. J. Exp. Psychol. Hum. Percept. Perform. 41, 958–976. doi: 10.1037/xhp0000054

Ferreira, F. (2005). Psycholinguistics, formal grammars, and cognitive science. Linguist. Rev. 22, 365–380. doi: 10.1515/tlir.2005.22.2-4.365

Frank, S. L., and Christiansen, M. H. (2018). Hierarchical and sequential processing of language. Lang. Cogn. Neurosci. 33, 1213–1218. doi: 10.1080/23273798.2018.1424347

Gibson, E. (2006). The interaction of top-down and bottom-up statistics in the resolution of syntactic category ambiguity. J. Mem. Lang. 54, 363–388. doi: 10.1016/j.jml.2005.12.005

Hale, J. (2003). The information conveyed by words in sentences. J. Psycholinguist. Res. 32, 101–123. doi: 10.1023/A:1022492123056

Hale, J. (2011). What a rational parser would do. Cogn. Sci. 35, 399–443. doi: 10.1111/j.1551-6709.2010.01145.x

Hale, J. (2016). Information-theoretical complexity metrics. Linguist. Lang. Compass 10, 397–412. doi: 10.1111/lnc3.12196

Huettig, F., and Mani, N. (2016). Is prediction necessary to understand language? Probably not. Lang. Cogn. Neurosci. 31, 19–31. doi: 10.1080/23273798.2015.1072223

Husain, S., Vasishth, S., and Srinivasan, N. (2014). Strong expectations cancel locality effects: evidence from Hindi. PLoS ONE 9:e100986. doi: 10.1371/journal.pone.0100986

Jackendoff, R. (2007). A parallel architecture perspective on language processing. Brain Res. 1146, 2–22. doi: 10.1016/j.brainres.2006.08.111

Kamide, Y., Altmann, G. T., and Haywood, S. L. (2003). The time-course of prediction in incremental sentence processing: evidence from anticipatory eye movements. J. Mem. Lang. 49, 133–156. doi: 10.1016/S0749-596X(03)00023-8

Kuperberg, G. R., and Jaeger, T. F. (2016). What do we mean by prediction in language comprehension? Lang. Cogn. Neurosci. 31, 32–59. doi: 10.1080/23273798.2015.1102299

Levy, R. (2008). Expectation-based syntactic comprehension. Cognition 106, 1126–1177. doi: 10.1016/j.cognition.2007.05.006

Levy, R., and Keller, F. (2013). Expectation and locality effects in German verb-final structures. J. Mem. Lang. 68, 199–222. doi: 10.1016/j.jml.2012.02.005

Lewis, A. G., and Bastiaansen, M. (2015). A predictive coding framework for rapid neural dynamics during sentence-level language comprehension. Cortex 68, 155–168. doi: 10.1016/j.cortex.2015.02.014

Lewis, S., and Phillips, C. (2015). Aligning grammatical theories and language processing models. J. Psycholinguist. Res. 44, 27–46. doi: 10.1007/s10936-014-9329-z

Linzen, T., and Jaeger, T. F. (2015). Uncertainty and expectation in sentence processing: evidence from subcategorization distributions. Cogn. Sci. 40, 1382–1411. doi: 10.1111/cogs.12274

Mancini, S. (2018). When grammar and parsing agree. Front. Psychol. 9:336. doi: 10.3389/fpsyg.2018.00336

Marantz, A. (2005). Generative linguistics within the cognitive neuroscience of language. Linguist. Rev. 22, 429–445. doi: 10.1515/tlir.2005.22.2-4.429

Marr, D. (1982). Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. San Francisco, CA: WH Freeman.

Meyer, L. (2017). The neural oscillations of speech processing and language comprehension: state of the art and emerging mechanisms. Eur. J. Neurosci. 48, 2609–2621. doi: 10.1111/ejn.13748

Molinaro, N., Monsalve, I. F., and Lizarazu, M. (2016). Is there a common oscillatory brain mechanism for producing and predicting language? Lang. Cogn. Neurosci. 31, 145–158. doi: 10.1080/23273798.2015.1077978

Momma, S., and Phillips, C. (2018). The relationship between parsing and generation. Annu. Rev. Linguist. 4, 233–256. doi: 10.1146/annurev-linguistics-011817-045719

Murphy, E. (2015). Labels, cognomes, and cyclic computation: an ethological perspective. Front. Psychol. 6:715. doi: 10.3389/fpsyg.2015.00715

Nelson, M. J., El Karoui, I., Giber, K., Yang, X., Cohen, L., Koopman, H., et al. (2017). Neurophysiological dynamics of phrase-structure building during sentence processing. Proc. Natl. Acad. Sci. U.S.A. 114, E3669–E3678. doi: 10.1073/pnas.1701590114

Omaki, A., Lau, E. F., Davidson White, I., Dakan, M. L., Apple, A., and Phillips, C. (2015). Hyper-active gap filling. Front. Psychol. 6:384. doi: 10.3389/fpsyg.2015.00384

Phillips, C. (1996). Order and Structure. Ph.D. thesis, MIT.

Phillips, C. (2003). Linear order and constituency. Linguist. Inquiry 34, 37–90. doi: 10.1162/002438903763255922

Phillips, C., and Lau, E. (2004). Foundational issues. J. Linguist. 40, 571–591. doi: 10.1017/S0022226704002774

Phillips, C., and Wagers, M. W. (2007). “Relating structure and time in linguistics and psycholinguistics,” in Oxford Handbook of Psycholinguistics, ed G. M. Gaskell (Oxford, UK: Oxford University Press), 739–756.

Poeppel, D., and Embick, D. (2005). “Defining the relation between linguistics and neuroscience,” in Twenty-First Century Psycholinguistics: Four Cornerstones, ed A. Cutler (Mahwah, NJ: Lawrence Erlbaum Associates), 1–16.

Segaert, K., Mazaheri, A., and Hagoort, P. (2018). Binding language: structuring sentences through precisely timed oscillatory mechanisms. Eur. J. Neurosci. 48, 2651–2662. doi: 10.1111/ejn.13816

Snell, J., Meeter, M., and Grainger, J. (2017). Evidence for simultaneous syntactic processing of multiple words during reading. PLoS ONE 12:e0173720. doi: 10.1371/journal.pone.0173720

Sprouse, J., and Hornstein, N. (2016). “Syntax and the cognitive neuroscience of syntactic structure building,” in Neurobiology of Language, eds G. Hickok and S. Small (Amsterdam: Academic Press; Elsevier), 165–174.

Staub, A. (2007). The parser doesn't ignore intransitivity, after all. J. Exp. Psychol. Learn. Mem. Cogn. 33, 550–569. doi: 10.1037/0278-7393.33.3.550

Staub, A. (2015). The effect of lexical predictability on eye movements in reading: critical review and theoretical interpretation. Lang. Linguist. Compass 9, 311–327. doi: 10.1111/lnc3.12151

Staub, A., and Clifton, C. (2006). Syntactic prediction in language comprehension: evidence from either…or. J. Exp. Psychol. Learn. Mem. Cogn. 32, 425–436. doi: 10.1037/0278-7393.32.2.425

Townsend, D. J., and Bever, T. G. (2001). Sentence Comprehension: The Interpretation of Habits and Rules. Cambridge, MA: MIT Press.

Tutunjian, D., and Boland, J. E. (2008). Do we need a distinction between arguments and adjuncts? Evidence from psycholinguistic studies of comprehension. Linguist. Lang. Compass 2, 631–646. doi: 10.1111/j.1749-818X.2008.00071.x

Keywords: merge, syntax, sentence processing, prediction, oscillations, eye tracking

Citation: Martorell J (2018) Merging Generative Linguistics and Psycholinguistics. Front. Psychol. 9:2283. doi: 10.3389/fpsyg.2018.02283

Received: 18 May 2018; Accepted: 02 November 2018;
Published: 28 November 2018.

Edited by:

Andrea Moro, Istituto Universitario di Studi Superiori di Pavia (IUSS), Italy

Reviewed by:

Cristiano Chesi, Istituto Universitario di Studi Superiori di Pavia (IUSS), Italy

Copyright © 2018 Martorell. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Jordi Martorell, j.martorell@bcbl.eu
