EDITORIAL article

Front. Educ., 30 June 2023
Sec. Assessment, Testing and Applied Measurement
This article is part of the Research Topic Learning Analytics for Supporting Individualization: Data-informed Adaptation of Learning

Editorial: Learning analytics for supporting individualization: data-informed adaptation of learning

Carrie Demmans Epp*, Ben Kei Daniel and Kasia Muldner

  • 1Department of Computing Science, EdTeKLA Research Group, University of Alberta, Edmonton, AB, Canada
  • 2Centre for Research in Applied Measurement and Evaluation (CRAME), University of Alberta, Edmonton, AB, Canada
  • 3Higher Education and Development Centre, University of Otago, Dunedin, New Zealand
  • 4Department of Cognitive Science, Carleton University, Ottawa, ON, Canada

Introduction

Recent trends toward integrating digital learning environments with face-to-face classroom approaches and the emergency transition to online learning during the COVID-19 pandemic have enabled the collection of increasingly large educational data sets. These data take various forms (structured, semi-structured, and unstructured) and contain digital artifacts that provide evidence of how people learn or what they know. They provide an opportunity to implement an individualized approach across a broader set of technologies and instructional domains. For this opportunity to be realized, we must appropriately automate the analysis of these data and use those analyses to adapt learning by supporting machine or human decision-making.

Digital learning environments increasingly apply artificial intelligence (AI) techniques, such as machine learning and learner modeling, to analyze these data and offer opportunities for understanding complex human learning. Outside of a specific type of adaptive learning environment, the intelligent tutoring system (ITS; VanLehn, 2006), the application of machine learning and learning analytics to student data has not progressed to offer students individual, self-directed, and adaptive learning. Within intelligent tutoring systems, these methods have generally supported adapting experiences and activities to foster learning (VanLehn, 2006). These methods also provide faculty with new ways of assessing student work. Technologies usually perform adaptation in well-defined instructional domains where correct answers to a problem can be predefined and where response correctness is unambiguous (e.g., mathematics, physics, verb conjugations). In such settings, input formats have been relatively constrained, and systems measured student knowledge only on tasks that had correct answers. The technology then adapted task support or question selection based on these assessments. This type of adaptivity has led to learning gains similar to those obtained in classroom settings (VanLehn, 2011) when the system could support student needs (Beck and Gong, 2013).

Papers in this special issue reflect the field's move toward providing adaptivity and automated assessment. Some papers are focused on adaptation in more constrained environments, which has historically been the focus of adaptive educational systems and computer-adaptive testing (Bulut et al.). Other papers have focused on more open-ended tasks, including automated essay scoring (Kumar and Boulanger), student scientific thinking (Cloude et al.), and student dispositions toward knowledge and their ability to learn (Tempelaar et al.). The papers in this special issue highlight various approaches, from spacing mechanisms and scheduling content to multimodal analytics.

Bulut et al. implemented and evaluated a recommender that aims to optimize formative testing schedules. Formative tests are typically administered on a fixed schedule for all students, which is suboptimal since students learn at different rates. However, expecting teachers to adapt testing schedules to individual students is impractical. Finding ways to use data analytics to support teachers is critical given that they are notoriously overextended. Bulut et al. attempt to do so by training a test scheduling recommender using a sizeable data set (N = 745,414). This data set was obtained from a mathematics assessment system that had been used to provide formative tests to middle schoolers on a fixed schedule. The recommender balanced two competing objectives: reducing the number of tests students take and maximizing potential score gains between tests. The offline evaluation of the recommender demonstrated that these objectives could be achieved. The improved testing schedules could help avoid wasting students' time by not asking them to write tests that will not produce informative outcomes. Thus, this work provides an example of how data analytics can support both teachers and students in educational settings.
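To make this trade-off concrete, the sketch below shows one way such a scheduling rule could be expressed. It is a minimal illustration under stated assumptions, not Bulut et al.'s recommender: the predict_score helper and the min_gain threshold are hypothetical.

```python
# A minimal sketch of the scheduling trade-off described above, NOT Bulut et al.'s
# actual recommender. It assumes a hypothetical predict_score(student, week) helper
# that estimates a student's score if tested in a given week.

def recommend_test_weeks(student, weeks, predict_score, min_gain=5.0):
    """Skip tests that are unlikely to show meaningful growth.

    Keeps a test week only when the predicted score gain since the last
    administered test exceeds `min_gain`, which balances the two objectives:
    fewer tests overall and larger score gains between consecutive tests.
    """
    schedule = []
    last_score = None
    for week in weeks:
        expected = predict_score(student, week)
        if last_score is None or expected - last_score >= min_gain:
            schedule.append(week)   # administer a test this week
            last_score = expected   # measure future gains from this point
    return schedule
```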

As with the previous paper, Kumar and Boulanger consider assessment. However, they were working with less structured data (i.e., essays), so applying the types of methods used for computer-based testing (e.g., item response theory; Ramesh and Sanampudi, 2021) or those used to create the above recommender is non-trivial. The challenge of scoring essays follows from the less structured nature of the task and the more open-ended nature of essay prompts, which can introduce challenges for human and computational assessors alike (Chan et al., 2022). Automated essay scoring (AES) uses computational methods to assess student writing; it was introduced in the 1960s but has yet to be widely used to support improvement processes for student writing. More recent research on AES can be attributed to advances in natural language processing (NLP) and machine learning approaches such as deep learning (Ramesh and Sanampudi, 2021). However, these approaches are usually opaque to both the learner and the teacher, and they ignore construct validity (Rahimi et al., 2017), so the people who need to use the output of an AES system cannot understand how it works or use the scores to help students improve their writing.

Kumar and Boulanger address the complexity of AES by developing rubrics and assessment items and combining these with AI techniques. The authors examined the role of explainable AI (EAI) algorithms in AES when deep learning is used. This was done to support the later goal of facilitating human trust in the AI-based system so that people and AI can work together to produce appropriate outcomes. Kumar and Boulanger did this work in the context of predicting the quality of the writing style of Grade-7 essays from the Automated Student Assessment Prize's essay data set. The authors also analyzed data on the decision-making process behind predicting rubric scores in AES, and they performed a comparative analysis of deep learning prediction models. Overall, this work shows how understanding AES at the rubric level can shed light on the functionality of the explainable model and how such knowledge can help improve the accuracy and utility of AES.
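To illustrate what rubric-level explanation can look like, the toy sketch below fits a transparent linear model to hypothetical rubric-aligned essay features and reports per-feature contributions. Kumar and Boulanger worked with deep learning models and explainable-AI techniques; the feature names and data here are invented solely to make the idea of interpretable rubric scoring concrete.

```python
# A simplified illustration of rubric-level interpretability, assuming hand-crafted,
# rubric-aligned essay features (hypothetical names and toy data). This is not the
# authors' deep learning pipeline; a transparent linear model is used only to show
# how per-feature contributions can be surfaced to teachers and students.

import numpy as np
from sklearn.linear_model import Ridge

feature_names = ["avg_sentence_length", "type_token_ratio", "spelling_error_rate"]

# X: one row of rubric-aligned features per essay; y: human rubric scores (toy data).
X = np.array([[14.2, 0.61, 0.03],
              [22.8, 0.74, 0.01],
              [11.5, 0.48, 0.07]])
y = np.array([2.0, 4.0, 1.0])

model = Ridge(alpha=1.0).fit(X, y)

# Each coefficient shows how a feature pushes the predicted rubric score up or down,
# which is the kind of explanation a person could act on.
for name, coef in zip(feature_names, model.coef_):
    print(f"{name}: {coef:+.3f}")
print("predicted score:", model.predict([[16.0, 0.65, 0.02]])[0])
```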

The value of AES research is likely to continue to grow as researchers examine what explainable predictive models can contribute to the decision-making process that drives AES and how predictive models can be fine-tuned to improve generalizability and interpretability. Predictive models should be able to provide teachers and students with personalized, formative, and fine-grained feedback during the writing process. Future research needs to address how AES can provide just-in-time formative feedback to students instead of relying on summative assessment. As educational systems continue to migrate learning and teaching online, the field of AES combined with AI techniques can contribute to how we address online examinations and tackle academic integrity issues (e.g., cheating). It could also help inform how teachers can better design assessment regimes relevant to evaluating learning in online settings.

Leveraging a complex digital learning environment, Cloude et al. used multimodal learning analytics to quantify scientific thinking in a game. As is the case for essays, the data contained considerable noise. In this paper, the challenge of measuring the intended construct is more pronounced because there has been less work on using sensor data to infer latent learner processes. Typically, the modeling of processes such as scientific thinking depends on behavioral logs alone. Cloude et al. go beyond behavioral logs by including data from sensors to augment the detection of scientific reasoning among 138 university students. This work provides insight into how scientific thinking might be captured in complex learning environments, such as games, and how modeling scientific thinking might provide more nuanced information when it involves data from sensors that indicate what learners are attending to (e.g., eye trackers). Like other recent work in less constrained environments (e.g., Cai et al., 2022), Cloude et al. found that eye-gaze data helped to assess learner performance. Their work also explains how different behavioral indicators interact with student prior knowledge and learning, creating possibilities for fine-grained adaptation when multiple sensor channels are combined to model learner processes.
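The sketch below illustrates, in simplified form, how gaze and log channels might be combined into a single learner representation. The fixation records, area-of-interest labels, and action names are hypothetical and do not reflect Cloude et al.'s pipeline.

```python
# A minimal sketch of combining eye-gaze and interaction-log features into one
# learner representation, assuming hypothetical fixation and log records; this is
# not Cloude et al.'s modeling approach.

from collections import Counter

def gaze_features(fixations, areas_of_interest):
    """fixations: list of (aoi_label, duration_ms) pairs from an eye tracker."""
    time_on = Counter()
    for aoi, duration in fixations:
        time_on[aoi] += duration
    total = sum(time_on.values()) or 1
    # Proportion of viewing time spent on each area of interest.
    return {f"gaze_prop_{aoi}": time_on[aoi] / total for aoi in areas_of_interest}

def log_features(events):
    """events: list of action labels from the game's behavioral log."""
    counts = Counter(events)
    return {f"action_count_{action}": n for action, n in counts.items()}

# Merge both channels into a single feature vector for a downstream model
# (e.g., one that estimates scientific-thinking scores or task performance).
student_features = {
    **gaze_features([("hypothesis_panel", 1200), ("data_table", 800)],
                    ["hypothesis_panel", "data_table"]),
    **log_features(["run_experiment", "run_experiment", "revise_hypothesis"]),
}
print(student_features)
```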

Similar to the work by Cloude et al., Tempelaar et al. used analytics to characterize student engagement with instructional activities. In this work, Tempelaar et al. applied analytics to data on student dispositions about the nature of intelligence (whether it is fixed or malleable, referred to as an entity mindset or an incremental mindset, respectively) and the value of effort invested in instructional activities. Data from a university course in mathematics and statistics (N = 1,146) were used to identify students' dispositional profiles based on the mindset and effort constructs. Two key results emerged. First, the clusters revealed that student mindset and effort beliefs do not always align with the patterns predicted by the original theories, such as the expectation that individuals with fixed mindsets do not believe in the value of effort. This finding has implications for designing interventions targeting student mindset or effort beliefs, since such interventions may need to account for more nuanced student dispositions than current theories predict. The second key result emerged from a subsequent analysis comparing target outcomes between the dispositions characterized by the clusters. This analysis showed that some outcomes were not aligned with predictions from mindset theory. For instance, theory predicts that students with an entity mindset will not learn effectively. However, descriptively, the entity cluster (the cluster with the strongest belief that ability is fixed) had one of the highest scores on quizzes and exams, possibly because students in this cluster also believed in the value of effort. Thus, it may be premature to assume that students with a particular mindset will have academic outcomes related to that mindset.
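A minimal sketch of this kind of dispositional profiling appears below; it clusters synthetic questionnaire scores for mindset and effort beliefs and is only an illustration of the general approach, not Tempelaar et al.'s analysis.

```python
# A toy sketch of profiling students by clustering disposition scores, assuming
# questionnaire scales for entity mindset, incremental mindset, and effort beliefs
# (synthetic data); Tempelaar et al.'s actual analysis is richer than this.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Columns: entity mindset, incremental mindset, effort belief (toy survey scores).
scores = rng.normal(loc=[3.0, 5.0, 4.0], scale=0.8, size=(200, 3))

scaler = StandardScaler().fit(scores)
profiles = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scaler.transform(scores))

# Cluster centroids (converted back to raw scale units) describe each dispositional
# profile, which can then be related to outcomes such as quiz or exam performance.
centroids = scaler.inverse_transform(profiles.cluster_centers_)
print(np.round(centroids, 2))
```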

The work of the Cloude and Tempelaar teams highlights the potential for using analytics to represent aspects of student behaviors and characteristics that have yet to be widely modeled or used to support adaptation. Their work creates a foundation for using data to treat complex constructs in a more nuanced manner than has previously been done. In one case (Tempelaar et al.), student beliefs were measured directly by asking students to respond to questionnaires; this kind of direct assessment may be more appropriate when the construct is expected to be relatively stable over time. In the other case (Cloude et al.), student attention and scientific thinking were inferred from sensor data. This approach provides an example of stealth assessment, which refers to the unobtrusive measurement of learner traits without explicitly asking learners to complete an assessment (Shute and Ventura, 2013). Stealth assessment may be more appropriate for measuring dynamic learner states or when direct measurement could interfere with the learning task or process, as would be expected in an educational game.

The availability of data generated in digital learning environments presents opportunities to develop new forms of learning and assessment. Learning analytics can leverage data from various sources, including online learning platforms, to provide insights into student progress, performance, and processes. Educators can use learning analytics to identify at-risk students, monitor learning outcomes, and make data-driven decisions to enhance assessment strategies and instructional design. Learning analytics and automated assessment are areas of research that leverage digital data to help teachers better understand student learning and provide support while maintaining high-quality learning outcomes.

The application of machine learning and learning analytics to mine patterns in data can contribute to the development of adaptive learning systems. These systems enable personalized feedback; data-driven adaptation; adaptive content delivery, intervention, and support; and continuous improvement. Adaptive learning environments can respond to learners' needs, enhance engagement, and promote effective learning outcomes. Automated assessment systems can identify areas of strength and weakness, allowing adaptive learning systems to tailor the learning experience to individual needs. Such systems use data, analytics, and models to dynamically adjust the content, pace, and difficulty level of learning materials, ultimately providing better support for students.
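As a simple illustration of such dynamic adjustment, the sketch below raises or lowers item difficulty based on a running accuracy estimate. Real systems rely on richer learner models (e.g., knowledge tracing); the thresholds and names used here are illustrative assumptions.

```python
# A deliberately simple sketch of difficulty adjustment driven by recent performance.
# The difficulty levels, thresholds, and function name are hypothetical; production
# adaptive systems use more sophisticated learner models.

def next_difficulty(recent_outcomes, current_level, levels=("easy", "medium", "hard")):
    """recent_outcomes: list of booleans for the learner's last few attempts."""
    if not recent_outcomes:
        return current_level
    accuracy = sum(recent_outcomes) / len(recent_outcomes)
    idx = levels.index(current_level)
    if accuracy > 0.8 and idx < len(levels) - 1:
        return levels[idx + 1]   # learner is succeeding: increase challenge
    if accuracy < 0.4 and idx > 0:
        return levels[idx - 1]   # learner is struggling: ease off and add support
    return current_level

print(next_difficulty([True, True, True, False, True], "medium"))  # -> "hard"
```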

This special issue brings together papers focused on various aspects of automated assessment and learning analytics within adaptive and personalized learning environments. With the growing abundance of data in digital learning environments, it is essential for researchers to leverage this data to examine and understand the challenges students face and recommend appropriate interventions.

While automated assessments and adaptive learning are being increasingly applied, human involvement and expertise in the process remain crucial. Instructors play a vital role in interpreting and validating the outcomes of automated assessments, providing contextual feedback, and ensuring fairness and equity in the assessment process. In sum, the use of predictive models augments and enhances assessment practices instead of replacing human judgment and expertise.

Author contributions

All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Beck, J. E., and Gong, Y. (2013). “Wheel-spinning: students who fail to master a skill,” in Artificial Intelligence in Education, eds H. C. Lane, K. Yacef, J. Mostow, and P. Pavlik (Barcelona: Springer), 431–440.

Cai, M., Zheng, B., and Demmans Epp, C. (2022). Towards supporting adaptive training of injection procedures: detecting differences in the visual attention of nursing students and experts. Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization, 286–294. doi: 10.1145/3503252.3531302

Chan, K. K. Y., Bond, T., and Yan, Z. (2022). Application of an automated essay scoring engine to English writing assessment using many-facet Rasch measurement. Language Test. 8, 02655322221076025. doi: 10.1177/02655322221076025

Rahimi, Z., Litman, D., Correnti, R., Wang, E., and Matsumura, L. C. (2017). Assessing students' use of evidence and organization in response-to-text writing: using natural language processing for rubric-based automated scoring. Int. J. Artif. Intell. Edu. 27, 1–35. doi: 10.1007/s40593-017-0143-2

Ramesh, D., and Sanampudi, S. K. (2021). An automated essay scoring systems: a systematic literature review. Artif. Intell. Rev. 55, 1–33. doi: 10.1007/s10462-021-10068-2

Shute, V. J., and Ventura, M. (2013). Stealth Assessment: Measuring and Supporting Learning in Video Games. Cambridge, MA: The MIT Press.

VanLehn, K. (2006). The behavior of tutoring systems. Int. J. Artif. Intell. Ed. 16, 227–265.

VanLehn, K. (2011). The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems. Edu. Psychol. 46, 197–221. doi: 10.1080/00461520.2011.611369

Keywords: learning analytics, individualization, measurement, technology-enhanced learning, artificial intelligence, automated assessment, adaptive assessment, machine learning

Citation: Demmans Epp C, Daniel BK and Muldner K (2023) Editorial: Learning analytics for supporting individualization: data-informed adaptation of learning. Front. Educ. 8:1240377. doi: 10.3389/feduc.2023.1240377

Received: 14 June 2023; Accepted: 16 June 2023;
Published: 30 June 2023.

Edited and reviewed by: Gavin T. L. Brown, The University of Auckland, New Zealand

Copyright © 2023 Demmans Epp, Daniel and Muldner. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Carrie Demmans Epp, cdemmansepp@ualberta.ca
