AUTHOR=Cohen Clara, Higham Catherine F., Nabi Syed Waqar TITLE=Deep Learnability: Using Neural Networks to Quantify Language Similarity and Learnability JOURNAL=Frontiers in Artificial Intelligence VOLUME=3 YEAR=2020 URL=https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2020.00043 DOI=10.3389/frai.2020.00043 ISSN=2624-8212 ABSTRACT=
Learning a second language (L2) usually progresses faster if a learner's L2 is similar to their first language (L1). Yet global similarity between languages is difficult to quantify, obscuring its precise effect on learnability. Further, the combinatorial explosion of possible L1 and L2 language pairs, combined with the difficulty of controlling for idiosyncratic differences across language pairs and language learners, limits the generalizability of the experimental approach. In this study, we present a different approach, employing artificial languages and artificial learners. We built a set of five artificial languages whose underlying grammars and vocabulary were manipulated to ensure a known degree of similarity between each pair of languages. We next built a series of neural network models for each language and sequentially trained them on pairs of languages. These models thus represented L1 speakers learning L2s. By observing the change in the activity of the cells between the L1-speaker model and the L2-learner model, we estimated how much change was needed for the model to learn the new language. We then compared the change for each L1/L2 bilingual model to the underlying similarity across each language pair. The results showed that this approach can not only recover the facilitative effect of similarity on L2 acquisition but also offer new insights into the differential effects across different domains of similarity. These findings serve as a proof of concept for a generalizable approach that can be applied to natural languages.
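As a rough illustration of the pipeline the abstract describes (not the authors' code or data), the sketch below trains a small LSTM language model on a toy "L1" corpus, continues training the same network on a toy "L2", and measures how much its internal activity shifts. The corpus generator, the PyTorch architecture, and the activation-change norm are all illustrative assumptions; hidden-state activations are used here as a stand-in for the paper's cell-activity measure.

```python
# Minimal sketch, assuming PyTorch and toy rule-based "artificial languages".
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, HIDDEN, SEQ_LEN = 20, 64, 12

class TinyLM(nn.Module):
    """Next-token LSTM language model standing in for an artificial-language learner."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, 32)
        self.lstm = nn.LSTM(32, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, VOCAB)

    def forward(self, x):
        h, _ = self.lstm(self.emb(x))
        return self.out(h), h  # logits and per-step hidden activity

def make_corpus(rule_offset, n=512):
    """Toy 'language': each next token is (previous + rule_offset) mod VOCAB."""
    starts = torch.randint(0, VOCAB, (n, 1))
    steps = torch.arange(SEQ_LEN + 1) * rule_offset
    return (starts + steps) % VOCAB

def train(model, corpus, epochs=5):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        logits, _ = model(corpus[:, :-1])
        loss = F.cross_entropy(logits.reshape(-1, VOCAB), corpus[:, 1:].reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

def mean_activity(model, corpus):
    """Average hidden activity per unit on a fixed probe corpus."""
    with torch.no_grad():
        _, h = model(corpus[:, :-1])
    return h.mean(dim=(0, 1))

torch.manual_seed(0)
l1 = make_corpus(rule_offset=1)  # "L1": one sequencing rule
l2 = make_corpus(rule_offset=3)  # "L2": same vocabulary, different rule
model = train(TinyLM(), l1)           # model as L1 speaker
act_before = mean_activity(model, l1)
model = train(model, l2)              # same network sequentially learns L2
act_after = mean_activity(model, l1)
change = (act_after - act_before).norm().item()
print(f"Activity change after L2 learning: {change:.4f}")
```

Under the abstract's framing, one would repeat this for each ordered language pair and relate the measured change to the known similarity between the two artificial languages: a smaller change for more similar pairs would reflect the facilitative effect of L1-L2 similarity.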