International Large-Scale Assessments (ILSAs) monitor and benchmark educational achievement, providing accurate, valid, and reliable data that participating countries can use to govern the development of their education systems. The standardized operating procedures and highly standardized practices used to administer these studies are well documented. So too are the achievement reports produced both nationally and internationally, which provide a track record of countries’ educational attainment.
What is less pronounced in the results is how the intention behind standardized processes manifests for each participating country during data collection. Each country’s data are subject to the particular conditions of their collection: the feasibility of collecting data from nationally representative samples, which populations were sampled and by which strata, which exclusions were made at the system and school level, and in how many languages the assessment was administered. These are only some of the questions that must be asked to understand how accurately ILSA data, such as PIRLS, reflect reality. In short, the data collection process tells an important story about a country’s education system, less about “deficits” and much more about its specific culture, changes over time, and the embedding of education in its society.
For this proposed Research Topic, we aim to bring the process of data collection to the stage. In this way, data collection in and of itself could tell us something about the context of the education system in which ILSAs are administered in terms of:
- Sampling stratification decisions and how these give shape to assessing the population;
- Decisions about system and school-level exclusion;
- Response rates and efforts to increase these within and across study cycles;
- Choices in modes of participation (e.g., electronic vs. paper-and-pencil);
- Translation of assessment materials and efforts to quality assure these processes;
- Efforts to score open-ended responses.
Thus, this Research Topic aims to discuss the efforts, challenges, and considerations involved in country-level data collection. It seeks to unpack country-level accounts of data collection as a resource, as a discovery, and as an assumption underlying the operational side of ILSAs, all of which are crucial in creating, curating, and compiling ongoing systems of educational monitoring and benchmarking. In doing so, attention shifts from what is collected to how it is collected: decisions about design and implementation challenges provide a glimpse into an underrepresented area of scholarly work, which tends to focus on outcomes and results.
Keywords:
ILSA, Large Scale Assessment, Data Collection, Benchmark Education
Important Note:
All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.