Organized learning models (OLMs) are research-based content or cognitive frameworks that explicitly consider the relationships among precursor and successor nodes and that can be integrated with psychometric models to provide a powerful next generation of assessments. The term nodes refers to the competencies, content, practices, skills, or aspects of cognition that constitute the latent dimensions of the model. Organized learning models have also been referred to as cognitive learning models, learning progressions, learning ontologies, learning trajectories, developmental continua, and learning maps. The psychometric approaches applied to organized learning models have been referred to as cognitive diagnostic models, diagnostic classification models, and Bayesian network analysis.
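As a loose illustration (not drawn from any specific OLM), the precursor/successor structure can be thought of as a directed graph whose nodes are latent competencies. The sketch below, in Python with entirely hypothetical node names, shows one minimal way such a fragment might be represented; a psychometric model layered on top would treat each node as a latent variable whose relationships are estimated from item responses.

from collections import defaultdict

# Hypothetical OLM fragment: each edge points from a precursor node
# to its successor. Node names are illustrative only.
edges = [
    ("count objects", "compare quantities"),
    ("compare quantities", "add within 10"),
    ("count objects", "add within 10"),
]

successors = defaultdict(list)
precursors = defaultdict(list)
for pre, succ in edges:
    successors[pre].append(succ)
    precursors[succ].append(pre)

# e.g., the precursors of a target node
print(precursors["add within 10"])  # ['compare quantities', 'count objects']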
Organized learning models have been used for, and studied as components of, formative and summative assessment in many research projects. In the last decade, they have been increasingly applied in operational testing programs, with explicit integration of the learning and psychometric models (e.g., the Dynamic Learning Maps Alternate Assessment) or without it (e.g., the Smarter Balanced assessment system). Organized learning models hold the promise of providing results that better support teaching and learning.
Despite the increased use of organized learning models, there are many unanswered research questions, such as:
- What are the best methods for validating OLMs?
- What validation evidence exists for specific OLMs?
- Is there an optimal grain size in an OLM, and if so, does it vary by assessment purpose?
- What are the limits (and trade-offs) on the number of nodes per assessment?
- How many items are needed per node?
- Should items measure one or multiple nodes?
- What are the trade-offs of using one-to-one versus many-to-many relationships among OLM nodes?
- Which psychometric models work best with OLMs and under what circumstances?
- How does data sparseness impact estimation of psychometric model parameters used with OLMs?
The goal of this special issue is to move the field of assessment forward by providing a broad view of the progress made to date and identifying unresolved issues. To this end, we welcome theoretical, empirical, and policy research related to the use of OLMs in assessment. Review papers will be considered. We also welcome submissions that analyze the use of OLMs with underserved learners.