
EDITORIAL article

Front. Commun., 01 August 2024
Sec. Psychology of Language
This article is part of the Research Topic Remote Online Language Assessment: Eliciting Discourse from Children and Adults

Editorial: Remote online language assessment: eliciting discourse from children and adults

  • 1School of Foreign Studies, Xi'an Jiaotong University, Xi'an, China
  • 2Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
  • 3Research Centre for Language, Cognition, and Neuroscience, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
  • 4The Hong Kong Polytechnic University—Peking University Research Centre on Chinese Linguistics, Kowloon, Hong Kong SAR, China
  • 5Leibniz-Centre General Linguistics (ZAS), Berlin, Germany

1 Introduction

Being able to collect valid data is crucial for empirical science disciplines such as linguistics, developmental psycholinguistics, clinical psycholinguistics, and speech and hearing sciences. In recent years, there has been increasing use of digital devices for remote language assessment, such as online elicitation of language samples, apps for eliciting expressive lexical abilities, online questionnaires, and other digital platforms.

The COVID-19 pandemic has affected, and continues to affect, many lives globally. It disrupted face-to-face, in-person language assessments and led many researchers to conduct their language assessments online. This shift is evident across disciplines and settings, with online methods of elicitation increasingly used not only in linguistics and related research fields but also in clinical and educational settings.

Discourse involves spoken or written narration and conversational exchange; linguistically, it not only goes beyond the sentence level but also draws on language skills at different levels and their integration. Assessing an individual's competence at the discourse level is an informative indicator of general communication and social skills as well as language and cognitive development, and can serve as an index of educational outcomes. Analyzing samples of narrative discourse also allows one to examine the effects of cultural practices and properties. Given the significance of assessing an individual's discourse competence, being able to administer assessments remotely online allows researchers to collect these informative data when in-person administration is restricted.

Despite the necessity of remote language assessments and the convenience they may bring to both assessors and assessees, the potential merits, limits, and problems of remote testing have not yet been systematically explored and understood. This timely Research Topic seeks contributions that mobilize new evidence and/or insightful and nuanced discussion to address questions such as: can we control online testing so that it is as good as face-to-face, in-person testing, and, if so, how? Do we have evaluative evidence of such practices, and, if so, how robust is that evidence? What adaptations and concerns can and cannot be accommodated at present? What opportunities are offered by recent technological advances? Are there conditions under which online testing works better or worse? Finally, how do differences between offline, in-person language assessments and online, remote assessments affect the results of testing?

The current Research Topic has two main foci. The first addresses the feasibility of assessing abilities at the discourse level (narrative or conversational) in both children and adults using remote online testing. Communicative competence at the discourse level has been considered an essential and ecologically valid component of language assessment in children and adults for three key reasons: (1) this competence is crucial for an individual's everyday functioning and academic and social life, (2) it provides information about an individual's socio-cognitive and linguistic abilities, and (3) it is a versatile test of language skills at the levels of content, form, and use, and of their integration. The second focus addresses the reliability of remote online testing, comparing results elicited via remote online assessments with those from in-person assessments.

We first give a general summary, including an overview of the participants, languages, and methods featured in the papers of this Research Topic, and then highlight the key results or significance of each paper. A short conclusion closes our introduction to this Research Topic.

This Research Topic, “Remote online language assessment: eliciting discourse from children and adults,” covers empirical articles presenting new evidence, perspective and opinion papers on issues at the conceptual-methodological interface, and methods articles presenting approaches that can offer opportunities for remote testing of discourse supported by recent technological advances. Ten papers were accepted for publication, each of which went through the usual rigorous peer-review process; these papers include one perspective paper, two methods papers, and seven original research articles.

The age of participants ranged from 3 to 70 years, and the number of participants per study ranged from 25 to 4,517 participants/profiles. Five of the seven research articles reported on bilinguals (e.g., Bosnian in the context of German, Irish-English, Mandarin-English, and French-English), and two studies were dedicated to monolingual Greek and English speakers.

Four studies featured individuals with communication disorders, for example, children with Autism Spectrum Disorder (Butler et al.), children with acquired reading and writing impairments (Jaecks and Jonas), adults with language impairment (Stamouli et al.), and adolescents and adults with Down syndrome (Mattiauda et al.). A total of seven languages were featured: Bosnian, Canadian French, English, Greek, Irish, German, and Mandarin.

2 This volume

We introduce each paper in this Research Topic in turn. The perspective paper (Jaecks and Jonas) advocated the importance of assessing written discourse via digital means to improve social and digital participation for individuals with acquired reading and writing impairments, and argued that remote assessment of written discourse abilities in functional communicative activities can be incorporated into teletherapy.

The two methods papers, by Stamouli et al. and Bright et al., reported on the use of digital methods to elicit narratives from adults with(out) language impairment and from children. Stamouli et al. compared two modes of narrative elicitation, remote online and in-person, in 10 healthy adults using a within-participants design, and reported largely no significant differences in the narrative measures between the two elicitation modes. Bright et al. designed an app to collect story-retelling samples from children. A citizen science approach was adopted to collect large samples of data, and a stratified sampling framework was used to further screen participants. A total of 4,517 profiles from 599 children were collected and analyzed. Their paper demonstrated that a citizen science approach using the app is an efficient way to collect large amounts of informative research data.

The seven research papers reported on oral discourse produced by typical and atypical children, adolescents, and adults from various language backgrounds. Yang et al. examined the story-retelling skills of Mandarin-English bilingual children aged 3–6 years (N = 25) using a remote method. They examined the effects of age and language experience on the children's production of narrative macrostructure (the global organization of a story) and microstructure (the use of linguistic forms in the target language). The children showed comparable performance in macro- and microstructure across the two languages. Age was significantly positively correlated with macrostructure in both languages, but no significant correlations were found between language experience and narrative macrostructure or microstructure elements.

Burchell et al. compared narrative and vocabulary measures collected via online and in-person assessments in two groups of children aged 7–12 years: 127 English monolinguals and 78 French-English bilinguals. Neither group showed differences between the two testing modes on narrative discourse or receptive vocabulary measures. However, the authors reported some differences between testing modes for the conversational and expository discourse measures.

Butler et al. examined the effect of remote natural language sampling on the interactions between parents and children with Autism Spectrum Disorder (ASD) at home. Naturalistic language samples from 90 parent-child dyads, with ASD children aged 4–7 years, were collected remotely as the interactions took place in the home. The range of activities and the relationship between activities and children's language levels were analyzed. The authors found no effect of activity type on the richness of the language elicited, and there was an association between the number of different activities and the child's language level.

Jažić et al. investigated the relationship between case-marking accuracy and language acquisition history, current language use, and socio-economic status (SES) in 20 monolingual and 20 heritage Bosnian speakers aged 18–30 years. They used the Multilingual Assessment Instrument for Narratives (LITMUS-MAIN) to elicit narrative discourse online. Heritage speakers showed significantly lower case-marking accuracy than the monolingual group. The use of Bosnian and the frequency of its current usage, but not SES, were significant predictors of participants' case accuracy.

Mattiauda et al. made a first attempt to assess narrative retelling online in individuals with Down syndrome using LITMUS-MAIN, comparing the performance of 13 adolescents and adults with Down syndrome aged 15–33 years with that of a typically developing control group aged 4–10 years. Participants with Down syndrome were outperformed by the control group on measures of story structure, story comprehension, and lexical diversity, whereas there was no difference between the two groups in the total number of words. The authors concluded that remote online assessment of individuals with Down syndrome is feasible.

Zhou et al. reported the effects of structural similarities and differences between the two languages, language input, and working memory on reference production in 4–6-year-old Mandarin-English bilingual preschoolers. They administered two stories from LITMUS-MAIN online and analyzed character introduction and reintroduction in the elicited oral discourse. These bilingual children showed prolonged development of felicitous reference expressions and over-reliance on overt marking of definiteness in narratives. The frequency of felicitous reference expressions in the input was a significant predictor of their production, and there was a modulating effect of working memory.

Antonijevic et al. assessed production and comprehension of narrative macrostructure in 30 adult Irish-English bilinguals online using LITMUS-MAIN. The authors found no differences in story structure, comprehension scores, or the overall number of Internal State Terms across languages. They highlighted that online assessment increases accessibility for participants, in particular those in rural areas with low population density, whereas an unstable internet connection could limit the applicability of remote online assessment.

All contributions in this volume demonstrated that remote online language assessment of oral discourse is feasible for the children, adolescents, and adults examined, both with and without language impairments. One benefit of online assessment is increased accessibility for participants, which helps researchers collect large amounts of language samples. Compared to in-person assessment, however, remote online testing requires an environment well equipped to support remote data collection, such as a stable internet connection.

3 Future directions

With our Research Topic, we hope to document new data featuring the assessment of discourse competence in children and adults in remote and in-person experimental settings, and to suggest some directions for future research. These directions might center on investigations of the properties of child and adult (narrative) discourse, looking for similarities across and differences within developmental trajectories. Cross-cultural research might shed light on how specific cultural background factors shape discourse production and comprehension. One specific direction concerns methodological issues, e.g., the development of new methods for remote elicitation of the production and comprehension of discourse and its components, the collection of longitudinal and naturalistic data, and multimodal data integration. Another direction is the development and validation of assessment instruments. This could include the integration of technology, for instance artificial intelligence, into the analysis of elicited discourse and the development of intervention practices. Digital tools for adaptive and personalized testing and automated scoring of assessment materials targeting discourse could also be future Research Topics. Last but not least are studies on practical issues, dealing with the comparison of offline and online assessment tools for the elicitation and analysis of discourse. As remote online testing becomes increasingly prevalent, ethical and privacy considerations grow in importance. Clear and robust protocols and guidelines are necessary to ensure the responsible conduct of research on remote online language assessment.

Author contributions

WY: Writing – review & editing, Writing – original draft, Funding acquisition, Conceptualization. AC: Writing – review & editing, Writing – original draft, Conceptualization, Funding acquisition. NG: Writing – review & editing, Writing – original draft, Conceptualization, Funding acquisition.

Funding

The author(s) declare financial support was received for the research, authorship, and/or publication of this article. This study was supported by the Fundamental Research Funds for the Central Universities (SK2024025) and a research stipend by the Fritz Thyssen Foundation (40.20.0.002SL) to WY, and a research grant (P0014049; G-YW4G; Chief supervisor: AC; Co-supervisor: NG), awarded by the Research Grants Council General Research Fund, Hong Kong, to AC and NG.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Keywords: remote online, language assessment, discourse context, narratives, children, adults

Citation: Yang W, Chan A and Gagarina N (2024) Editorial: Remote online language assessment: eliciting discourse from children and adults. Front. Commun. 9:1463182. doi: 10.3389/fcomm.2024.1463182

Received: 11 July 2024; Accepted: 16 July 2024;
Published: 01 August 2024.

Edited and reviewed by: Xiaolin Zhou, Peking University, China

Copyright © 2024 Yang, Chan and Gagarina. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Wenchun Yang, wenchunyang@xjtu.edu.cn; Angel Chan, angel.ws.chan@polyu.edu.hk; Natalia Gagarina, gagarina@leibniz-zas.de