AUTHOR=Ebenbeck Nikola, Gebhardt Markus TITLE=Simulating computerized adaptive testing in special education based on inclusive progress monitoring data JOURNAL=Frontiers in Education VOLUME=7 YEAR=2022 URL=https://www.frontiersin.org/journals/education/articles/10.3389/feduc.2022.945733 DOI=10.3389/feduc.2022.945733 ISSN=2504-284X ABSTRACT=Introduction

Adaptive tests offer particular advantages for children with special needs but are rarely used in practice. We therefore investigated how to build adaptive tests from existing item pools using computerized adaptive testing (CAT) for our web-based progress-monitoring platform www.levumi.de. In this study, we explore the item-pool requirements and the CAT settings needed in special education and inclusion to achieve both short test length and good test accuracy.

Methods

We used existing items fitted to the Rasch model and data samples from progress-monitoring tests (N = 681) in mathematics and reading to create two item pools for adaptive testing. In a simulation study (N = 4,000), we compared different test lengths and test accuracies as stopping rules with regard to the inclusive use of adaptive testing.
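The simulation procedure described above can be sketched in a few lines. The following is a minimal illustration, not the authors' actual implementation: it assumes a Rasch (1PL) item pool, maximum-information item selection, grid-based EAP ability estimation, and the two stopping rules named in the abstract (a maximum test length and a target standard error). All function and parameter names (`simulate_cat`, `max_items`, `target_se`, the hypothetical difficulty pool) are illustrative.

```python
import math
import random

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch (1PL) model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_info(theta, b):
    """Fisher information of a Rasch item at ability theta."""
    p = rasch_p(theta, b)
    return p * (1.0 - p)

def simulate_cat(true_theta, difficulties, max_items=24, target_se=0.5, rng=None):
    """Simulate one adaptive test run with two stopping rules:
    stop when the standard error of the ability estimate falls below
    target_se, or when max_items items have been administered."""
    rng = rng or random.Random(0)
    grid = [g / 10.0 for g in range(-40, 41)]        # ability grid for EAP
    prior = [math.exp(-0.5 * g * g) for g in grid]  # standard-normal prior
    remaining = list(range(len(difficulties)))
    administered, responses = [], []
    theta_hat, se = 0.0, float("inf")

    while remaining and len(administered) < max_items and se > target_se:
        # pick the unused item with maximum information at the current estimate
        best = max(remaining, key=lambda i: item_info(theta_hat, difficulties[i]))
        remaining.remove(best)
        administered.append(best)
        # simulate the examinee's response from the true ability
        responses.append(rng.random() < rasch_p(true_theta, difficulties[best]))
        # EAP update: posterior over the ability grid
        post = []
        for g, pr in zip(grid, prior):
            like = pr
            for item, resp in zip(administered, responses):
                p = rasch_p(g, difficulties[item])
                like *= p if resp else (1.0 - p)
            post.append(like)
        total = sum(post)
        theta_hat = sum(g * w for g, w in zip(grid, post)) / total
        var = sum((g - theta_hat) ** 2 * w for g, w in zip(grid, post)) / total
        se = math.sqrt(var)

    return theta_hat, se, len(administered)

# Hypothetical pool of 31 Rasch difficulties spread over [-3, 3]
pool = [-3.0 + 0.2 * i for i in range(31)]
theta_hat, se, n_items = simulate_cat(true_theta=0.5, difficulties=pool)
```

Repeating such runs over many simulated examinees (here, N = 4,000 in the study) and varying `max_items` and `target_se` is how the trade-off between test length and accuracy can be compared.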

Results

The results show optimal maximum test lengths of 37 and 24 items for the two item pools, with a target standard error of 0.5 as the accuracy criterion. These settings correspond to an average administration time of about 3 min per test.

Discussion

The results are discussed with respect to the use of adaptive testing in inclusive settings and the applicability of such adaptive tests as screenings, focusing mainly on students with special needs in learning, language, or behavior.