
EDITORIAL article

Front. Educ., 08 September 2022
Sec. Assessment, Testing and Applied Measurement
This article is part of the Research Topic "The Use of Organized Learning Models in Assessment."

Editorial: The use of organized learning models in assessment

  • 1Department of Educational Psychology, University of Kansas, Lawrence, KS, United States
  • 2Achievement and Assessment Institute, University of Kansas, Lawrence, KS, United States
  • 3Department of Teacher Education, Michigan State University, East Lansing, MI, United States

Editorial on the Research Topic
The use of organized learning models in assessment

In this editorial we posit that there is growing recognition that educational achievement assessments can and must support student learning, that the assessment process best supports learning when it is based on an organized learning model, and that the organized learning models most useful for learning will combine models of learning with psychometric modeling. We end the editorial with a call for more research.

For most of its history, the focus of large-scale educational achievement testing was the assessment of learning. Starting with Scriven's (1967) differentiation between formative and summative evaluation, educational researchers and measurement experts started paying attention to formative assessment, more recently referred to as assessment for learning. This assessment purpose had long been the focus of classroom teachers and curriculum specialists but with little or none of the quantitative trappings of large-scale psychometrics.

The development of organized learning models—the ordered relationships among precursor, target, and successor skills—began at about the same time as the formalization of assessment for learning. For example, 60 years ago, Gagné et al. (1962) suggested that

a class of human tasks to be learned (like solving linear equations, adding rational numbers) can be analyzed into a hierarchy of subordinate learning sets, which mediate positive transfer of learning in a unidirectional fashion from one to another, and ultimately to the final performance. (p. 1)

These ordered relationships can be displayed as a graphical model. These graphical models of learning structure have had many names: learning set hierarchies (Gagné and Paradise, 1961), cumulative learning sequences (Gagne, 1968), learning trajectories (Simon, 1995), learning progressions (Alonzo and Steedle, 2009), progress maps (Masters and Forster, 1996), and learning maps (Kingston et al., 2016). Herein, we use the term organized learning model to refer to any and all of these models.
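
To make these ordered relationships concrete, an organized learning model can be treated as a directed acyclic graph whose edges run from precursor to successor skills. The following minimal Python sketch (the skill names and edges are invented for illustration, not drawn from any published model) stores such a graph and recovers every direct or indirect precursor of a target skill:

```python
from collections import defaultdict

# A tiny organized learning model as a directed graph:
# each edge points from a precursor skill to its successor.
edges = [
    ("count objects", "add whole numbers"),
    ("add whole numbers", "add rational numbers"),
    ("understand fractions", "add rational numbers"),
    ("add rational numbers", "solve linear equations"),
]

precursors = defaultdict(list)
for pre, succ in edges:
    precursors[succ].append(pre)

def all_precursors(skill):
    """Return every skill that directly or indirectly precedes `skill`."""
    found = set()
    stack = list(precursors[skill])
    while stack:
        s = stack.pop()
        if s not in found:
            found.add(s)
            stack.extend(precursors[s])
    return found

print(sorted(all_precursors("solve linear equations")))
```

A traversal like this is what lets an assessment system identify, for any target skill, the chain of subordinate skills that Gagné et al. hypothesized must transfer upward to it.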

Thorndike (1918, p. 16) said, “Whatever exists at all exists in some amount. To know it thoroughly involves knowing its quantity as well as its quality.” In line with this, we believe that the structure of the most useful models supporting learning should have both qualitative aspects (descriptions of learning targets and the pathways that indicate precursor, target, and successor relationships) and quantitative aspects (statistical parameters that describe the conditional probability of mastery). Without the combination of these aspects in a single model, knowledge, and thus usefulness, will be limited. For example, it is useful to know that students have a lower probability of success along some pathways than along others, and even more useful to know how those probabilities are conditioned on specific prior learning (or other variables). Such combined models allow assessments to support high-quality inferences about what students know and can do and thus can help teachers personalize and optimize learning for individual students. This is because the graphical structure of the model, combined with statistical models such as Bayesian network analysis (Almond et al., 2007) or diagnostic classification models (Rupp et al., 2010), improves the precision of measurement.
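
As a toy illustration of the quantitative aspect, consider a two-node network in which mastery of a target skill depends on mastery of a single precursor. All probabilities below are invented for illustration, not estimated from data; in practice a fitted Bayesian network or diagnostic classification model would supply them:

```python
# Conditional probability of mastering a target skill given its
# precursor, as in a two-node Bayesian network. Numbers are invented.
p_precursor = 0.70            # P(precursor mastered)
p_target_given_pre = 0.80     # P(target mastered | precursor mastered)
p_target_given_not = 0.15     # P(target mastered | precursor not mastered)

# Marginal probability of target mastery (law of total probability).
p_target = (p_target_given_pre * p_precursor
            + p_target_given_not * (1 - p_precursor))

# Diagnostic inference: observing target mastery, update belief about
# the precursor using Bayes' rule.
p_pre_given_target = p_target_given_pre * p_precursor / p_target

print(f"P(target mastered) = {p_target:.3f}")
print(f"P(precursor mastered | target mastered) = {p_pre_given_target:.3f}")
```

Even this two-node case shows the payoff: evidence about one skill sharpens inferences about related skills, which is the mechanism by which graphical structure improves measurement precision.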

The National Council on Measurement in Education, a professional organization whose membership consists primarily of people focused on psychometrics and large-scale assessment, recognized that there was a need for enhanced dialog among assessment specialists focused on large-scale and classroom assessment and formed a committee and conference series to this end. The theme of the first conference was “Classroom assessment and large-scale psychometrics: Shall the twain meet?” (Heritage and Kingston, 2019). Many of the talks at that conference reflected recent advances in merging models of learning and psychometric models.

Despite a growing literature about organized learning models over the past two decades, many research questions remain regarding the use of organized learning models. A sampling of such questions follows.

• Are organized learning models useful for teachers and, if so, in what ways?

• Are some forms of organized learning models, or some ways of presenting them, more useful to teachers than others?

• Is there an optimal grain size for organized learning models and, if so, does it vary with purpose?

• Are intermediate structures—local neighborhoods of closely related nodes—useful to either the understanding or use of these models?

• How can models be constructed that represent the diversity of learners?

• Under what circumstances might the parameterization of these models be invariant within relevant populations?

• What are the best approaches to validating hypothesized learning models?

In addition, empirical evidence is needed that organized learning models can be used to help students learn better. Evidence accumulated from empirical work may also push forward theoretical development around organized learning models, and challenges in their use may generate innovative approaches to analyzing them. We encourage more of our colleagues to take up these issues, and especially encourage experts in curriculum, instruction, and student learning to collaborate with experts in psychometric modeling in addressing these questions and formulating others.

Author contributions

NK drafted the editorial and resolved the comments of the other authors; however, all authors made a significant intellectual contribution to this editorial.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Almond, R. G., DiBello, L. V., Moulder, B., and Zapata-Rivera, J.-D. (2007). Modeling diagnostic assessments with Bayesian networks. J. Educ. Meas. 44, 341–359. doi: 10.1111/j.1745-3984.2007.00043.x


Alonzo, A. C., and Steedle, J. T. (2009). Developing and assessing a force and motion learning progression. Sci. Educ. 93, 389. doi: 10.1002/sce.20303


Gagne, R. M. (1968). Contributions of learning to human development. Psychol. Rev. 75, 177–191. doi: 10.1037/h0025664


Gagné, R. M., Mayor, J. R., Garstens, H. L., and Paradise, N. E. (1962). Factors in acquiring knowledge of a mathematical task. Psychol. Monogr. Gen. Appl. 76, 1. doi: 10.1037/h0093824


Gagné, R. M., and Paradise, N. E. (1961). Abilities and learning sets in knowledge acquisition. Psychol. Monogr. 75, 14. doi: 10.1037/h0093826


Heritage, M., and Kingston, N. M. (2019). Classroom assessment and large-scale psychometrics: shall the twain meet? (a conversation with Margaret Heritage and Neal Kingston). J. Educ. Meas. 56, 670–685. doi: 10.1111/jedm.12232


Kingston, N. M., Karvonen, M., Bechard, S., and Erickson, K. (2016). The Philosophical Underpinnings and Key Features of the Dynamic Learning Maps Alternate Assessment. Teachers College Record (Yearbook), 118. Available online at: http://www.tcrecord.org ID Number: 140311.


Masters, G., and Forster, M. (1996). Progress Maps. Melbourne, VIC: The Australian Council for Educational Research; Part of the Assessment Resource Kit.


Rupp, A. A., Templin, J., and Henson, R. A. (2010). Diagnostic Measurement: Theory, Methods, and Applications. New York, NY: Guilford Press.


Scriven, M. (1967). “The methodology of evaluation,” in Perspectives of Curriculum Evaluation, Vol. 1, eds R. W. Tyler, R. M. Gagné and M. Scriven (Chicago, IL: Rand McNally), 39–83.


Simon, M. A. (1995). Reconstructing mathematics pedagogy from a constructivist perspective. J. Res. Math. Educ. 26, 114–145. doi: 10.2307/749205


Thorndike, E. L. (1918). “The nature, purposes, and general methods of measurements of educational products,” in The Seventeenth Yearbook of the National Society for the Study of Education. Part II: The Measurement of Educational Products, ed G. M. Whipple (Bloomington, IL: Public School Publishing Co.), 16.


Keywords: organized learning models, learning trajectories, learning maps, psychometric models, graphical models, learning progressions, Bayesian network analysis, diagnostic classification models

Citation: Kingston NM, Alonzo AC, Long H and Swinburne Romine R (2022) Editorial: The use of organized learning models in assessment. Front. Educ. 7:1009446. doi: 10.3389/feduc.2022.1009446

Received: 01 August 2022; Accepted: 15 August 2022;
Published: 08 September 2022.

Edited and reviewed by: Gavin T. L. Brown, The University of Auckland, New Zealand

Copyright © 2022 Kingston, Alonzo, Long and Swinburne Romine. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Neal M. Kingston, nkingsto@ku.edu
