
SYSTEMATIC REVIEW article
Front. Educ., 31 March 2025
Sec. Teacher Education
Volume 10 - 2025 | https://doi.org/10.3389/feduc.2025.1485821
This systematic review examines the impact of data literacy training on teachers’ and pre-service teachers’ decision-making skills. Data literacy, essential for effective teaching, involves formulating questions, collecting relevant data, and making informed decisions. The review analyzed 16 studies published between 2014 and 2023, selected from Web of Science, Scopus, and EBSCOhost. Findings indicate that structured decision-making models, authentic contexts, collaborative learning, and long-term follow-up are the most effective strategies, leading to significant improvements in self-efficacy and knowledge. Programs emphasizing real-world data applications and structured instructional support had the greatest impact on teachers’ confidence and ability to use data for pedagogical decision-making. However, a gap remains in explicitly fostering decision-making skills, as many studies prioritized data analysis over instructional application. These results highlight the need for training programs that integrate structured decision-making into data literacy frameworks, ensuring teachers not only analyze data but also apply it effectively. Future research should focus on developing standardized assessment tools and conducting long-term studies to evaluate the lasting impact of data literacy training on teaching practices.
This article presents a systematic review of the role of data literacy training for teachers and future teachers in the development of their decision-making skills. In general terms, data can be understood in the educational context as the foundation of the information that educational institutions use to support their decisions. This information may include aspects related to students, institutions, and teachers, among others (Schildkamp and Ehren, 2013). Some examples of information used for decision-making include “state achievement tests, locally developed periodic assessments, exams, questionnaires, disciplinary records, parent information, and teacher observations” (Jimerson and Wayman, 2015).
However, access to data does not guarantee its effective use in educational decision-making. The increasing generation of information in the school environment presents the challenge of transforming this data into useful knowledge to improve teaching and educational management. Currently, one of the major challenges faced by educational institutions is the high availability of data and the need to process it properly so that it contributes to improving the system, benefiting students, teachers, and administrators alike (Romero and Ventura, 2020).
Proper data management and use in education is not only an institutional challenge but has also drawn the attention of international organizations due to its impact on equity and learning quality. In this regard, the United Nations Children’s Fund (UNICEF) has promoted technical support in different countries to strengthen the efficient and transparent use of data, enabling better decision-making that helps reduce inequity gaps and improve learning processes (Jarousse et al., 2019). At the same time, technological advancements have accelerated the digitalization of education, leading to an increasing volume of data that teachers must manage to improve teaching and learning (Mandinach and Gummer, 2016; Mandinach and Schildkamp, 2021). Access to large volumes of data represents a significant challenge, as not all teachers have the necessary training to interpret and integrate this data effectively into pedagogical decision-making (Espin et al., 2017; Reeves and Chiang, 2018).
The ability to effectively use data in educational decision-making is not only a challenge but also an opportunity to enhance teaching and student learning outcomes. Studies suggest that when teachers receive adequate data literacy training, they are better prepared to analyze student achievement data, adjust instructional strategies, and provide targeted instruction that improves learning (Filderman et al., 2021; Oslund et al., 2020).
Despite the proliferation of digital platforms and learning management systems (LMS) that generate detailed information on student performance, many teachers struggle to interpret this data, which can lead to misinterpretations and ineffective pedagogical decisions (Barragán-Giraldo et al., 2024; Vanlommel and Schildkamp, 2018). Furthermore, the increasing use of artificial intelligence and machine learning tools in education has added complexity to data-driven decision-making, requiring a higher level of digital and statistical literacy among teachers (Bit et al., 2024; Harry and Sayudin, 2023). Moreover, the lack of specific training in data literacy in teacher education programs and ongoing professional development has created a skills gap that hinders the effective integration of data into educational practice (Lores Gómez et al., 2019; Raffaghelli, 2020). Jimerson (2014) observes that some teachers perceive data more as a control mechanism than as a tool for improvement, generating resistance and distrust, especially when data is used to evaluate performance without sufficient context or support.
In response to these challenges, various previous reviews have examined the use of data in education to understand its impact and improve its integration into teaching practice. However, most of these reviews have taken a broad approach, focusing on the conceptualization of the data literacy construct or the overall effectiveness of training. Henderson and Corry (2021) identified that teacher training in data literacy has prioritized assessment literacy over a comprehensive approach to decision-making. Meanwhile, Ansyari et al. (2020) examined the impact of professional development training in data literacy, highlighting improvements in teacher self-efficacy but without specifically evaluating their influence on decision-making processes in pedagogical practice.
This review advances the understanding of the impact of data literacy training by taking a more focused approach to its relationship with teachers’ and future teachers’ ability to make informed decisions. Building on previous studies that have explored data literacy in a broader sense, this review examines how data literacy training has been designed to enhance decision-making skills, considering factors such as attitudes, self-efficacy, and knowledge of data use. In doing so, it not only expands knowledge on the effects of data literacy on educational practice but also contributes to a more precise characterization of the mechanisms that facilitate or hinder its incorporation into teachers’ decision-making processes.
To achieve this objective, a research question has been formulated following the PICOS strategy, which facilitates the selection of terms for searching articles in academic databases (Gallagher and Melnyk, 2019). The research question guiding this study is: What are the reported effects of data literacy training on the decision-making capacity of teachers and future teachers in response to existing challenges in the use of educational data?
The international reality regarding data use in the educational system varies greatly between countries, closely linked to the educational policies developed at the local level. Schildkamp and Ehren (2013) analyze the reality of various nations and their experiences in using data within educational systems.
One of the recognized experiences is that of the USA with the enactment of the No Child Left Behind Act (NCLB) in 2001, which focused on providing various types of data to determine whether schools achieve “Adequate Yearly Progress” (AYP) in order to identify problems and seek solutions (Mandinach and Jackson, 2012). On the other hand, the Netherlands has incorporated the Pupil Monitoring System, which aims to support schools in the individual monitoring and support of students. This system is considered a promising means to improve student performance (Schildkamp and Ehren, 2013). Canada has policies that require schools to use data while providing support, such as in Ontario, where the Managing Information for Student Achievement/Professional Network Centers (MISA/PNC) program is in place to develop competencies for data use (Campbell and Levin, 2009). In the case of South Africa, there is the South African Monitoring System for Primary Schools (SAMP) since 2002, which seeks to provide feedback to schools to improve their educational processes. In this case, research continues on the best strategies to optimize the feedback provided (Laher et al., 2019). Other studies, such as those from England, New Zealand, and Belgium, continue to investigate their support systems for data use and the necessary improvements for the benefit of the educational system (Ozga, 2016; Shewbridge et al., 2011; Sinnema et al., 2020).
As previously noted, technological advancements and the increasing complexity of educational environments have highlighted the need for teachers to make informed decisions based on data. Schildkamp and Kuiper (2010) have argued that data use in education involves a systematic analysis of available information within schools, serving as a foundation for improving teaching, curriculum design, and institutional performance. This process not only includes data interpretation but also the application of findings to develop pedagogical innovations, the implementation of concrete improvement actions, and the subsequent evaluation of their impact.
Data-driven decision-making (DDDM), in turn, allows teachers to manage student heterogeneity in order to tailor instruction and intensify support, fostering greater achievement when data is used formatively and comes from multiple sources, both quantitative and qualitative (Mandinach et al., 2006; Marsh, 2012; Hoogland et al., 2016; Peters et al., 2021; Filderman and Toste, 2022). Although this concept has been widely promoted, its implementation in schools remains challenging. Dodman et al. (2021) point out that one of the main challenges in implementing DDDM is its technocratic use, where data is often interpreted without considering the structural and sociocultural factors influencing academic outcomes. To ensure a meaningful impact on teaching, they propose a more critical approach, in which data is not only used to measure performance but also to analyze and improve educational conditions.
Additionally, operational and cultural barriers persist. In an analysis of teachers’ experiences with data use, Jimerson (2014) found that educators often associate data with external control rather than instructional improvement. Teachers in his study reported feelings of distrust and resistance, particularly when data were used for performance evaluations without sufficient context or support. Furthermore, participants highlighted constraints such as inaccessible systems and limited time for collaborative data analysis, which hindered their ability to integrate data into pedagogical decision-making effectively. In this context, it is crucial to strengthen teacher training in data literacy and promote spaces for collaborative reflection, ensuring that data is interpreted in a way that supports more critical and contextualized decision-making.
The Data Quality Campaign states that a teacher with data literacy possesses the knowledge and skills to access, interpret, act on, and communicate data (Conn et al., 2022), enabling improvements in their instruction (Gambell, 2004; Gearhart and Osmundson, 2008). Within this field, the Mandinach and Gummer (2016) model is one of the most widely recognized frameworks for data literacy in teaching (Data Literacy for Teaching, DLFT). The authors define data literacy as the ability to “transform information into actionable knowledge and instructional strategies,” structuring it into five interrelated phases, from problem identification to evaluating the impact of data-based decisions.
A key aspect of this model is the transformation of information into a decision, as it involves converting data analysis into concrete pedagogical actions. Teachers must interpret information, adjust instruction, and define strategies tailored to students’ needs. Additionally, they must consider the context in which decisions are made, integrating curricular, sociocultural, and resource-related factors.
While the Mandinach and Gummer model has been a key reference, recent perspectives have expanded its scope. Kennedy-Clark and Reimann (2022) argue that data literacy in teachers does not follow a linear process but evolves through experience and prior knowledge. Based on the TPACK (Technological Pedagogical Content Knowledge) framework, they suggest that data literacy emerges from the interplay between technology, pedagogy, and disciplinary knowledge. They also incorporate Knowledge in Pieces (KiP) theory, which holds that teachers acquire data competencies in fragmented ways, and Rhizomes of Knowledge theory, which emphasizes its dynamic and non-hierarchical development.
On the other hand, Yang et al. (2022) present a more structured approach focused on assessing teachers’ data literacy, identifying three key dimensions: data awareness, data knowledge and culture, and technological data literacy. Unlike Mandinach and Gummer (2016), who describe a step-by-step process for teaching, Yang et al. (2022) propose an assessment framework that diagnoses teachers’ competency levels in data use and their integration into educational practice.
Despite the growing recognition of data literacy in teaching, its integration into teacher training remains limited. Mandinach and Friedman (2015) emphasize that most teachers do not receive systematic training in data use, despite its importance in evidence-based education. Additionally, studies such as Ansyari et al. (2020) have identified both benefits and limitations in data literacy training programs. These programs can enhance teachers’ ability to use data, boost confidence in pedagogical decision-making, and strengthen technological tool usage in the classroom. However, challenges persist, including teacher resistance to data use, lack of integration with the school curriculum, and difficulties in applying theoretical knowledge to practice. Furthermore, training programs are often short-term and lack continuous support, reducing their long-term impact.
Although many studies refer to interventions in data literacy, this review will use the concept of training programs, as defined by Kristjansdottir et al. (2021), who describe training as structured learning aimed at developing specific skills, while interventions pursue broader, long-term changes. This distinction allows for the inclusion of various studies related to data literacy training, including workshops, courses, and capacity-building initiatives.
Given its importance, this review will examine the reported effects of data literacy training on both in-service and pre-service teachers, particularly in their capacity to make instructional decisions.
For this systematic review, the protocol was registered with the International Platform of Registered Systematic Review and Meta-Analysis Protocols (INPLASY) in February 2023, and the steps outlined by the PRISMA protocol were followed. The flow diagram, presented in Figure 1, includes both the initial search and the updated search conducted in 2024.
The inclusion criteria required that articles focus on teachers and pre-service teachers at any educational level, specifically investigate the effect of a data literacy training program on decision-making, and explicitly assess its impact through a validated or structured instrument that measures the effect of the training quantitatively. The selected studies had to be quantitative or mixed-method; in the case of mixed-method studies, only quantitative information was extracted and analyzed.
Conversely, studies on data mining, predictive modeling, learning analytics, or big data in education were excluded, as were studies that used solely qualitative approaches to examine the effect of data literacy training on decision-making or that did not examine the impact of the training in a measurable way. Additionally, theoretical articles, literature reviews, and conference abstracts without empirical data were excluded.
No restrictions were applied based on year of publication, language, or country of origin to increase the chances of finding articles with relevant information for this search.
The search was first conducted in September 2022 in the Web of Science, Scopus, and EBSCOhost databases. The search included broad terms for the training (data literacy) and for the outcomes (decision making), as well as incorporating education as the context in which the training takes place. These terms were expanded using the ERIC thesaurus and with the help of a librarian. The concepts were combined using the Boolean operators “OR” and “AND”. In line with the PICOS framework, the population was broadly defined as educators, which was reflected in the inclusion of education-related terms, while no specific comparator was used to avoid restricting the search and to allow for diverse methodological approaches. Additionally, only quantitative and mixed-methods studies were included to ensure methodological consistency in analyzing the effects of data literacy on decision-making. The search strategy was developed for Scopus and was modified according to the specificities of each database: [TITLE-ABS-KEY (“data literacy” OR “data literacy training” OR “data literacy for teaching” OR “data literacy program”)] AND [TITLE-ABS-KEY (“decision making” OR “data driven decision making” OR “data based decision making” OR “data use” OR “decision-making” OR “decision making skills” OR “information utilization”)] AND [TITLE-ABS-KEY (“education” OR “higher education” OR “college*” OR “universit*” OR “tertiary education” OR “preschool education” OR “preschool” OR “primary education” OR “secondary education” OR “secondary schools” OR “high school”)].
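To illustrate how the three concept blocks combine, the following minimal sketch (ours, not part of the original search protocol) assembles the Scopus query string from the term lists above, joining synonyms with OR inside each TITLE-ABS-KEY clause and linking the blocks with AND:

```python
# Illustrative sketch (not the authors' code): assembling the Scopus query
# from the three concept blocks with Boolean operators.
intervention_terms = ["data literacy", "data literacy training",
                      "data literacy for teaching", "data literacy program"]
outcome_terms = ["decision making", "data driven decision making",
                 "data based decision making", "data use", "decision-making",
                 "decision making skills", "information utilization"]
context_terms = ["education", "higher education", "college*", "universit*",
                 "tertiary education", "preschool education", "preschool",
                 "primary education", "secondary education",
                 "secondary schools", "high school"]

def block(terms):
    """Join one concept block's synonyms with OR inside a TITLE-ABS-KEY clause."""
    joined = " OR ".join(f'"{t}"' for t in terms)
    return f"TITLE-ABS-KEY ({joined})"

# The three blocks are linked with AND, mirroring the PICOS structure.
query = " AND ".join(block(t) for t in
                     (intervention_terms, outcome_terms, context_terms))
print(query)
```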
The researchers carried out the screening process using the previously indicated terms with the support of the free Rayyan platform. From this initial search, 298 articles were obtained across the three databases. After removing 132 duplicates, which were reviewed by the authors, 166 articles remained for title and abstract screening. Based on the inclusion and exclusion criteria, 148 articles were excluded, and 18 were considered for full-text reading. The inclusion/exclusion criteria were applied again, and 5 articles were eliminated. Finally, 13 articles were selected for analysis in this review.
An updated search, following the same strategy used in 2022 and utilizing the same databases (Scopus, Web of Science, and EBSCOhost), was conducted in April 2024. This additional search identified records in Scopus (n = 18), Web of Science (n = 35), and EBSCOhost (n = 33), totaling 51 records after removing 35 duplicates. These 51 records were screened, resulting in the exclusion of 45 records. Six reports were retrieved for eligibility assessment, of which two were excluded for being qualitative studies and one for not directly addressing a training program. Finally, three new studies were included in the review, bringing the total to 16 studies in this updated review.
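As a quick consistency check, the updated-search flow can be traced step by step; the sketch below uses only the counts reported above:

```python
# Simple tally (from the counts reported above) of the 2024 updated search:
# identification -> deduplication -> screening -> eligibility -> inclusion.
identified = {"Scopus": 18, "Web of Science": 35, "EBSCOhost": 33}
duplicates_removed = 35
after_dedup = sum(identified.values()) - duplicates_removed      # 86 - 35 = 51
excluded_at_screening = 45
retrieved_full_text = after_dedup - excluded_at_screening        # 6 reports
excluded_full_text = 2 + 1           # qualitative designs + no training focus
newly_included = retrieved_full_text - excluded_full_text        # 3 studies
total_in_review = 13 + newly_included                            # 16 studies
assert (after_dedup, retrieved_full_text,
        newly_included, total_in_review) == (51, 6, 3, 16)
```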
To avoid bias and validate the selection process, two authors independently screened the titles and abstracts in a blinded manner using Rayyan, meaning that each reviewer was unaware of the other’s decisions during the initial screening phase. In contrast, the third author adjudicated on articles in which there was no agreement, mainly resolving discrepancies regarding whether the training could be classified as data literacy initiatives, particularly in cases where the focus on data literacy was not explicitly stated. The three authors reviewed the articles for full-text reading, and there was one hundred percent agreement on those that should be incorporated.
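Where agreement is less than total, screening studies often quantify inter-rater reliability with Cohen's kappa alongside raw agreement. The sketch below is purely illustrative, since the present review reports full agreement at the full-text stage rather than a kappa statistic; the decision lists shown are toy data:

```python
# Illustrative sketch (not reported by the review): raw agreement and
# Cohen's kappa for two independent screeners' include/exclude decisions.
def agreement_stats(r1, r2):
    """r1, r2: lists of 'include'/'exclude' decisions, one per record."""
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    # Chance agreement estimated from each rater's marginal 'include' rate.
    p1 = r1.count("include") / n
    p2 = r2.count("include") / n
    expected = p1 * p2 + (1 - p1) * (1 - p2)
    kappa = (observed - expected) / (1 - expected)
    return observed, kappa

rater1 = ["include", "exclude", "exclude", "include", "exclude"]  # toy data
rater2 = ["include", "exclude", "include", "include", "exclude"]
print(agreement_stats(rater1, rater2))  # (0.8, ~0.62)
```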
For data collection, the authors first defined the categories and elements to be extracted from the articles and developed a data extraction matrix using Excel spreadsheets. Since the population of this review corresponds to teachers and pre-service teachers, some categories were specifically assigned to a single group based on relevance for further analysis. The authors then divided their tasks: one author was in charge of extracting all the information associated with the general characterization of the population and the studies, the types of training, and the main results; another author was mainly responsible for the extraction and organization of the statistical data; and the third author conducted a general review of the collected data.
Regarding the data collected, general study information was retrieved first. The number of participants was considered for descriptive purposes, except for Ebbeler et al. (2016), who reported results at the school level. For the types of training included, duration, modality, main activities, content, and data types were considered. Regarding outcome measures, all instruments that sought to measure progress in aspects associated with participants’ data literacy were included, ideally those previously validated. Nevertheless, this was not considered an exclusion criterion, since this review includes an analysis of the quality of the instruments used in the following section.
For the risk of bias assessment of the included studies, the Mixed Methods Appraisal Tool (MMAT), developed at McGill University, was used, covering qualitative, quantitative, and mixed-methods empirical studies (Hong et al., 2018). First, two of the authors independently reviewed the quality of the articles; discrepancies were then resolved with the support of the third author. For each of these types of studies, five quality questions are included, which can be answered with “yes,” “no,” or “cannot tell,” and each study is rated individually. The analysis found that 6% (N = 1) of the studies scored 1 point, 12% (N = 2) scored 2 points, 31% (N = 5) scored 3 points, 44% (N = 7) scored 4 points, and 6% (N = 1) scored 5 points for methodological quality (Figure 2).
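As a sketch of how such a quality distribution can be tallied, the snippet below assumes the common convention of scoring each study by its number of “yes” answers across the five MMAT questions; this is a simplification for illustration (the MMAT authors discourage reducing ratings to a single score), and the per-study answers shown are hypothetical:

```python
# Minimal sketch (assumed scoring convention): score each study by counting
# "yes" answers to the five MMAT questions, then tally the distribution.
from collections import Counter

ratings = {  # hypothetical answers; the actual ratings are in Figure 2
    "Study A": ["yes", "yes", "yes", "no", "cannot tell"],
    "Study B": ["yes", "yes", "yes", "yes", "no"],
}

def mmat_score(answers):
    """Count criteria clearly met ('yes'); 'no'/'cannot tell' earn no point."""
    return sum(a == "yes" for a in answers)

distribution = Counter(mmat_score(a) for a in ratings.values())
print(distribution)  # e.g. Counter({3: 1, 4: 1})
```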
Regarding general aspects, the sixteen studies were published between 2014 and 2023 (see Table 1). 50% (N = 8) were conducted in the USA, 19% (N = 3) in the Netherlands, and 31% (N = 5) in Germany. A total of 636 pre-service teachers, 1,712 teachers, and 95 schools participated. The studies include training in primary, secondary, and higher education. In the case of pre-service teachers, participants were students in primary education, secondary education, and special education programs. Female participants predominated in all studies. 25% of the studies included a control group; the rest assessed the impact of the training through pre- and post-assessments. It should be noted in Table 1 that in the case of Ebbeler et al. (2017), the participants correspond to schools, and in the study by Wurster et al. (2023), only the main study is considered, not the pilot.
Regarding the training methods, a wide variety of methodologies was evident (see Table 2), ranging from a 15-min session to programs lasting 2 years. Among the training programs, 62% used only assessment data, while the remaining 38% used various types of data. 53% of the programs were conducted face to face, 13% were hybrid, and 13% were asynchronous online; the remaining studies did not report this information.
Overall, no single training program recurs most frequently across the studies; nevertheless, the Data Chat training (Piro et al., 2014; Piro and Hutchinson, 2014) and Data in Five by Four (D5 × 4) (Reeves and Chiang, 2018, 2019), each conducted by the same authors, appear twice within the studies.
Concerning the didactics of the training, in all cases the programs have multifactorial components, which can be seen in Figure 3. Most training programs incorporated teamwork, the formation of data analysis groups, the use of technological tools for information management, and the guidance of experts or facilitators. Additionally, many strategies included the analysis of real or simulated data, the development of hypotheses, the implementation of improvement measures based on findings, and the evaluation of their impact. Some training programs stood out for incorporating structured decision-making models, evidence-based strategies, and the use of student monitoring systems. Overall, these training programs not only fostered the development of analytical and statistical skills but also encouraged pedagogical reflection and the integration of findings into teaching practice.
On the other hand, regarding the contents considered within the training, some elements are interrelated, and others mark differences between one training and another. The selection of content was based on different theoretical approaches, one of them being the Eight Steps of the data use training (Schildkamp and Ehren, 2013), which posits a cyclical and iterative process for the implementation of improvement measures as a result of data use (Ebbeler et al., 2017; Kippers et al., 2018). Merk et al. (2020) adapted this approach by synthesizing it into four steps (data collection, transformation, reduction, and interpretation). Another of the most relevant theoretical bases within the studies analyzed is the conceptual framework Mandinach and Gummer (2016) raised about data literacy for teachers (DLFT). These authors provide definitions and classifications that guided the studies of Neugebauer et al. (2020), Reeves and Chiang (2018), and van Geel et al. (2017). For their part, Miller-Bains et al. (2022) used as the basis for their training an adaptation of the online course “Introduction to Data Wise” from Harvard University (Boudett et al., 2013). The remaining studies have less specific theoretical bases or do not describe them explicitly.
To measure the effects of the training, 16 instruments were applied, with varying characteristics. 25% of the studies used the Data-Driven Decision-Making Efficacy and Anxiety Inventory/Amended (3D-MEA). This 17-item scale assesses teachers’ confidence in their ability to implement data-driven decisions and includes four subscales: (1) efficacy for data identification and access (EDIA), (2) efficacy for data technology use (EDTU), (3) efficacy for data interpretation, evaluation, and application (EDIEA), and (4) data-driven decision-making anxiety (DDDMA). A later version divides EDIEA into analysis and interpretation (EDAI) and application to instruction (EADI).
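As an illustration of how a multidimensional instrument like the 3D-MEA yields subscale scores, the sketch below averages Likert responses within each subscale. The item-to-subscale allocation shown is hypothetical; only the 17-item total and the subscale names come from the instrument description above:

```python
# Illustrative sketch (item allocation hypothetical): computing mean
# subscale scores for the 3D-MEA from per-item Likert responses.
from statistics import mean

subscale_items = {           # hypothetical item indices, 17 items total
    "EDIA":  [1, 2, 3, 4],
    "EDTU":  [5, 6, 7, 8],
    "EDIEA": [9, 10, 11, 12, 13],
    "DDDMA": [14, 15, 16, 17],
}

def score(responses):
    """responses: dict item -> Likert rating; returns the mean per subscale."""
    return {name: mean(responses[i] for i in items)
            for name, items in subscale_items.items()}

responses = {i: 4 for i in range(1, 18)}   # toy data: every item rated 4 of 5
print(score(responses))                    # each subscale mean -> 4
```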
The Conceptions of Assessment (COA III) instrument was also used by 25% of the studies. This 27-item tool measures teachers’ perceptions of how assessments should be used in schools and classrooms. It focuses on four key constructs: (1) assessment makes schools accountable, (2) assessment makes students accountable, (3) assessment improves education, and (4) assessment is irrelevant. All other instruments were used much less frequently, with some appearing in only a single study.
Measured changes in training impact are generally positive (see Table 3). In this comparative review, the p-value indicates the significance of the differences between pre- and post-training measurements. Effect size was reported in 50% of the studies. Among these, one study showed a Very Large effect (Reeves and Chiang, 2019), three reported a Large effect (LaLonde et al., 2023; Reeves and Chiang, 2018; Reeves and Chiang, 2019), and another three presented a Medium to Large effect (Kippers et al., 2018; Merk et al., 2020; Reeves and Chiang, 2019). Notably, the studies reporting the largest effects primarily focused on knowledge and self-efficacy, while none showed large effects in the attitudes category.
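To make these effect size labels concrete, the following sketch (illustrative; the review itself reports values taken from the primary studies) computes Cohen's d from group means and standard deviations, applies the small-sample correction that yields Hedges' g, and maps the result onto conventional benchmarks of roughly 0.2 (small), 0.5 (medium), and 0.8 (large):

```python
# Illustrative sketch (not the authors' code): standardized mean differences
# and conventional magnitude labels.
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m2 - m1) / pooled_sd

def hedges_g(d, n1, n2):
    """Apply the small-sample correction factor J to Cohen's d."""
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)
    return j * d

def label(es):
    """Conventional thresholds: 0.2 small, 0.5 medium, 0.8 large."""
    es = abs(es)
    if es >= 0.8: return "large"
    if es >= 0.5: return "medium"
    if es >= 0.2: return "small"
    return "negligible"

d = cohens_d(m1=3.1, s1=0.9, n1=40, m2=3.9, s2=0.8, n2=40)  # hypothetical data
print(round(d, 2), label(d), round(hedges_g(d, 40, 40), 2))  # 0.94 large 0.93
```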
The primary results do not follow a clear temporal trend. The associated didactic strategies include theoretical reflection, collaborative work, authentic contexts, data analysis, structured decision-making models, external counseling, and interactive learning experiences with targeted feedback. Training programs with significant results differ from those with medium results by incorporating performance in authentic contexts, where participants engaged with real or simulated data closely resembling the information they would analyze in professional settings. For example, some programs used actual student performance reports, standardized test results, and data visualizations similar to those used in educational decision-making processes (Reeves and Chiang, 2019; Kippers et al., 2018).
Collaborative work was also a key feature of training programs with larger effects, fostering peer discussions and joint problem-solving. Studies implementing this approach often involved group activities in which participants collectively interpreted data, debated possible instructional responses, and refined their analytical skills through structured discussions (Kippers et al., 2018; Merk et al., 2020). Additionally, theoretical reflection played a crucial role in programs that achieved significant improvements. These studies encouraged participants to critically examine their own data literacy skills, assess the implications of data use in education, and connect their learning to broader theoretical frameworks of data-driven decision-making. Some programs integrated explicit discussions on the psychological and institutional barriers to data use, helping participants develop a more nuanced understanding of how data literacy translates into effective teaching practices (Reeves and Chiang, 2018; LaLonde et al., 2023).
To organize the results, three categories were established, as shown in Table 3: (1) Studies that measure attitudes, beliefs, and value regarding data use; (2) Studies that measure perceptions of self-efficacy and skills for data use; and (3) Studies that measure knowledge about data use. Additionally, the findings related to the skills needed to transform information into a decision are emphasized.
The analysis of the literature reveals that 63% of the reviewed studies use scales measuring attitudes, beliefs, and value regarding data use. In the study by Ebbeler et al. (2016), questionnaires were used to evaluate skills and attitudes toward data use, finding that the group with data teams showed significantly greater differences compared to the group without data teams, with a medium effect size. On the other hand, Miller-Bains et al. (2022) applied the COA III to pre-service teachers, who significantly reduced the belief that assessment is irrelevant.
In contrast, Neugebauer et al. (2020) assessed interest in data use using the amended 3D-MEA and found no significant differences post-training, attributing this to the already high pre-training averages. Meanwhile, Reeves and Chiang (2018) observed a significant decrease in anxiety toward data use in both in-service and pre-service teachers.
The study by Reeves and Honig (2015) applied the SEDU scale, which includes the subscales “data effectiveness for pedagogy” and “data attitudes,” and found no significant differences, also attributed to high pre-training scores. This study also used the abbreviated COA-III, where the scales of assessment validation and student accountability showed significant results.
In the case of Abrams et al. (2020), the training was a professional development program designed to enhance data literacy and efficacy. However, no significant improvements were observed in attitudes toward data use. The study highlights that school-level factors, such as administrative expectations and district policies, may have influenced the results. Similarly, Bolhuis (2019) examined the use of data teams in teacher education, finding improvements in data skills and use, but no significant changes in attitudes toward data use.
Among the studies that showed greater effects, Ebbeler et al. (2016) and Miller-Bains et al. (2022) implemented methodologies that included collaborative learning structures and reflective exercises. Ebbeler et al. (2016) structured the training around data teams, where participants engaged in guided discussions, collaborative data analysis, and iterative decision-making cycles. This model emphasized peer collaboration and problem-solving, which may have contributed to improved attitudes toward data use.
On the other hand, Miller-Bains et al. (2022) applied a low-cost, short-term training consisting of a workshop followed by structured reflection prompts, allowing pre-service teachers to connect assessment practices with instructional decision-making over time. Additionally, the follow-up period of several months provided opportunities for participants to apply their learning, which may have supported the observed changes in attitudes.
Another key effect measured was self-efficacy and skills in data use, with 69% of the reviewed studies reporting results in this category. The 3D-MEA was commonly used to assess self-efficacy, showing significant improvements in multiple studies. Abrams et al. (2020) found a sustained increase across all 3D-MEA subscales and in the Teacher Survey Data Literacy Scale, with effects persisting 1 year post-training. The training included structured professional development, collaborative data use activities, and ongoing coaching, which may have contributed to these long-term effects. Similarly, LaLonde et al. (2023) reported large improvements in both instructional decision accuracy and confidence, using a structured decision-making model for special education teacher candidates.
In contrast, Bolhuis (2019) observed gains only in data use for school development, but no significant changes in data use to improve instruction, suggesting a stronger institutional rather than instructional focus. Hebbecker et al. (2022) also showed moderate improvements in data use but found no significant differences in decision-making between the training and control groups. Their approach provided training and materials but lacked the structured decision-making protocols seen in more effective programs. In turn, Wurster et al. (2023) found that the training led to positive changes in self-efficacy across all dimensions of data-driven decision-making.
Among the studies showing the largest effects, Reeves and Chiang (2019) implemented an asynchronous online data literacy training, engaging participants in structured, interactive exercises using external, standardized assessment data. The study found large improvements in self-efficacy and data use practices in schools. Likewise, Reeves and Chiang (2018) examined a series of online training modules and found medium to large improvements in self-efficacy and reduced anxiety about data use.
Most studies showed positive results, particularly those that incorporated structured decision-making models, longitudinal follow-ups, and collaborative learning structures. For example, LaLonde et al. (2023) demonstrated that a well-defined decision-making framework enhanced both confidence and practical skills in applying data to instructional decisions. Similarly, training that included ongoing coaching or extended measurement periods, such as Abrams et al. (2020) and Reeves and Chiang (2019), resulted in more sustained improvements over time.
On the other hand, Bolhuis (2019) and Hebbecker et al. (2022) showed more limited effects compared to other studies. While Bolhuis’ training improved data use for school-wide decisions, it did not lead to significant changes in classroom-level instructional practices. This study focused on institutional decision-making rather than providing explicit training on instructional applications. Similarly, Hebbecker et al. (2022) provided general training and resources but lacked a structured framework guiding teachers in applying data-driven decision-making processes, which may have contributed to the lack of significant improvements in decision-making outcomes.
Finally, 50% of the studies measured knowledge about data use. Ebbeler et al. (2017) used the Data Literacy Skills Knowledge Test, finding significant improvements in participants from data teams, with a small to medium effect size. Kippers et al. (2018) also employed the knowledge test developed by Ebbeler, observing moderate to large improvements, particularly in the ‘take instructional action’ component, which refers to educators’ ability to translate data analysis into pedagogical decisions.
Merk et al. (2020) conducted a randomized controlled trial with pre-service teachers in Germany, using a data literacy test, and found a large and significant effect of the training. Similarly, LaLonde et al. (2023) evaluated knowledge development in special education teacher candidates through the use of curriculum-based measurement (CBM) charts, reporting significant improvements in the accuracy of instructional decisions and data use knowledge.
In contrast, van Geel et al. (2017) used the SMS data literacy test to assess knowledge gained from a 2-year intensive training. While significant increases were found in areas related to software use and data analysis, these were not directly linked to decision-making processes, suggesting that while participants improved their technical skills, the training may not have sufficiently emphasized their application to instructional practice.
Among the studies with the most positive effects, common methodological features included structured and interactive training that emphasized decision-making models (LaLonde et al., 2023), randomized controlled trials with direct application to classroom contexts (Merk et al., 2020), and team-based approaches with external support and iterative learning processes (Kippers et al., 2018; Ebbeler et al., 2017). These findings suggest that training programs incorporating collaborative learning, structured frameworks for data use, and long-term engagement are more likely to lead to significant improvements in knowledge about data use.
Focusing on the assessment of data-driven decision-making skills, this section examines the instruments that have incorporated the evaluation of this skill to a greater or lesser extent within the reviewed studies, highlighting those that have specifically focused their measurements on instructional decision-making skills.
In the category of attitudes, beliefs, and value in data use, the Teacher Survey Data Literacy Scale (Abrams et al., 2020) includes 50% of its items related to decision-making skills, while the Instructional Decisions Questionnaire by Hebbecker et al. (2022) is entirely dedicated (100%) to assessing this competence in instruction.
For self-efficacy and skills in data use, the 3D-MEA (Abrams et al., 2020; Reeves and Chiang, 2018; Reeves and Chiang, 2019) assigns 50% of its items to decision-making skills, making it a key tool for evaluating this competence. The study by LaLonde et al. (2023) also focuses entirely on decision-making skills, using a structured framework based on CBM (Curriculum-Based Measurement) charts to guide teachers in interpreting student performance data and planning instructional actions.
In knowledge assessments, the Data Literacy Skills Knowledge Test by Ebbeler et al. (2017) includes only 8% of its items on decision-making skills, while the Data Literacy Test by Kippers et al. (2018) covers 17%. On the other hand, the scales by Merk et al. (2020) and van Geel et al. (2017) contain no items related to decision-making skills, focusing instead on general data analysis and statistical comprehension.
The studies by Hebbecker et al. (2022) and LaLonde et al. (2023) stand out for exclusively assessing decision-making skills. Hebbecker et al. (2022) conducted a program in which teachers received training on data use, support materials, and instructional recommendations. As a result, the training explained 4.5% of the change in instructional decision-making skills and 8.3% of the total variability in this skill. Although some improvement was observed, no significant differences were found between teachers who participated in the training and those in the control group.
In contrast, LaLonde et al. (2023) and Reeves and Chiang (2018, 2019) implemented highly structured training that systematically guided participants through data interpretation and instructional decision-making. LaLonde’s model used CBM charts, yielding large effect sizes (Hedges’ g = 1.23 for decision accuracy, g = 1.65 for confidence in decision-making). Similarly, the Reeves and Chiang trainings incorporated structured protocols, interactive activities, and immediate feedback, showing very large (d = 1.39) and large (d = 0.88) effect sizes for self-efficacy and data use practices in the 2019 study, and medium to large effects (d = 0.55–0.89) in 2018.
On the other hand, Merk et al. (2020) and van Geel et al. (2017) did not assess decision-making as part of data literacy. Merk et al. (2020) focused on technical skills, conducting a 6-h training on data collection, transformation, and statistical analysis, without including its application to instructional decision-making. Meanwhile, van Geel et al. (2017) conducted a 2-year training that assessed educators’ ability to analyze and interpret student performance data, but without examining how these insights were applied in instructional settings. While these studies contributed to a better understanding of data literacy, they do not provide evidence on whether improved skills translated into better decision-making in educational contexts.
This systematic review provides a comprehensive overview of the different approaches to training teachers and pre-service teachers in data literacy, emphasizing the importance of such training in improving informed decision-making. The findings indicate that structured decision-making models, training in authentic contexts, promotion of collaborative work, and long-term follow-up programs are the most effective strategies, aligning with previous studies (Marsh, 2012; Reeves and Honig, 2015; Wayman and Jimerson, 2013). Among in-service teachers, studies incorporating collaborative work within data teams show positive outcomes, enhancing knowledge and awareness regarding data use in teaching practice (Schildkamp, 2019).
Most reviewed studies report significant improvements in self-efficacy and knowledge of data use, which is particularly relevant since, according to Bandura (1997), self-efficacy is a key predictor of teaching performance. However, not all studies calculated effect sizes, limiting direct comparisons of their impact. Therefore, those studies that did report effect sizes stand out in this analysis as they provide a more precise and quantifiable measurement of outcomes. Nonetheless, it is also important to consider studies that, while not reporting effect sizes, assessed their effectiveness through statistical significance tests, as these also provide valuable insights into the impact of data literacy training programs.
The methodologies that implemented clear and systematic decision-making structures demonstrated the largest and most sustained effects over time. Among these, the use of curriculum-based measurement (CBM) progress monitoring charts and structured asynchronous training protocols stood out for significantly improving the accuracy of pedagogical decisions and teachers’ confidence in using data.
The study by LaLonde et al. (2023) implemented a CBM-based model that allowed pre-service teachers to analyze real data and apply a structured decision-making model. The observed effects were large, indicating a significant improvement in instructional decision-making accuracy. Similarly, Reeves and Chiang (2019) used an asynchronous methodology with interactive exercises and standardized data analysis, yielding very large effects on self-efficacy and decision-making skills.
These findings align with Mandinach and Gummer (2016), who argue that data literacy must include a structured approach to decision-making, emphasizing that simply teaching data analysis techniques is insufficient without explicit training in data-driven decision-making skills. Similarly, Kennedy-Clark and Reimann (2022) emphasize that well-structured models are essential to ensure that teachers not only interpret data effectively but also transfer this knowledge into real educational decision-making. These authors warn that open-ended and less structured approaches can lead to inconsistent data interpretations, limiting their application in improving teaching practices.
However, studies such as Bolhuis (2019) and Hebbecker et al. (2022) did not achieve significant effects on decision-making, likely because they employed more flexible and less structured methodologies compared to those that showed stronger results.
Programs that included the analysis of real or simulated data in authentic settings were more effective in transferring knowledge to teaching practice. Reeves and Chiang (2018, 2019) implemented training with standardized data in interactive exercises, significantly improving confidence and accuracy in decision-making. Kippers et al. (2018) designed an intervention emphasizing the practical application of data use in instructional decision-making, achieving moderate to large effects.
These findings align with various studies suggesting that using authentic data in teacher training fosters a better integration of data analysis into professional practice, increasing motivation, knowledge retention, and critical thinking development (Elvianasti et al., 2021; Fredricks et al., 2004; Herrington et al., 2014; Wayman and Jimerson, 2013). However, other studies have found that simulated data may not have the same impact unless explicitly linked to real teaching scenarios (Neugebauer et al., 2020).
Collaborative work emerged as a key component in programs with the greatest effects on self-efficacy and knowledge application on data use. Kippers et al. (2018) included team-based data analysis and peer feedback, leading to significant improvements in knowledge and confidence in using data. Merk et al. (2020) integrated small-group collaborative learning, achieving medium to large effects.
However, Schildkamp and Poortman (2015) found that teamwork is only effective when teachers already have a basic level of data literacy; otherwise, collaboration may lead to misinterpretations of data.
Studies that incorporated post-training follow-up achieved more sustained effects over time. Abrams et al. (2020) implemented a multi-phase intervention with continuous coaching, resulting in long-term retention of data literacy skills.
These findings reinforce those of Mandinach and Gummer (2016), who argue that data literacy cannot be taught in isolated sessions but must be integrated into an ongoing learning and application process. Similarly, Kennedy-Clark and Reimann (2022) emphasize that the effectiveness of teacher training programs in data literacy largely depends on the continuity and structure of the learning process. According to these authors, a lack of follow-up and reinforcement after initial training can lead to a rapid decline in the application of acquired knowledge, reinforcing the need for training models with evaluation phases and sustained support over time.
The ability of teachers to make informed pedagogical decisions based on data is a central element in data literacy, as it enables collected and analyzed information to translate into effective improvements in teaching and learning. However, teacher training approaches vary significantly in how they integrate this competency. Some studies explicitly prioritized decision-making skill development, while others focused on teaching statistical and methodological concepts without directly evaluating their application in educational practice.
Two studies that specifically assessed instructional decision-making were those by LaLonde et al. (2023) and Hebbecker et al. (2022). Both examined how teachers use data to guide their instruction. However, while the structured decision-making model based on CBM progress monitoring charts in LaLonde et al. (2023) yielded large effects on decision accuracy and confidence, Hebbecker et al. (2022) showed more modest effects, with no significant differences between trained and untrained teachers.
This difference may be due to several factors. One key element is the design of the intervention. LaLonde et al. (2023) employed a highly structured approach, guiding participants systematically through data analysis and instructional decision-making. This ensured that teachers not only acquired knowledge but also gained confidence in applying data to instructional planning and execution.
In contrast, Hebbecker et al. (2022), although focused on decision-making, used a less structured approach, requiring teachers to independently transfer their learning to practice. This may have limited the training’s impact, as teachers lacked a clear framework for systematically integrating data into instructional decision-making. As suggested by Kennedy-Clark and Reimann (2022), training programs that lack structured models may lead to inconsistent data interpretations and hinder knowledge transfer to real-world educational contexts.
Furthermore, Mandinach and Gummer (2016) argue that data literacy should not be confined to analytical skills alone but must explicitly integrate decision-making as a fundamental component. However, in many teacher training programs, this component remains secondary to an emphasis on statistical tools and conceptual data comprehension. This leaves a gap in teachers’ preparation to effectively apply data in instruction, which may explain why some studies report significant knowledge improvements but not necessarily practical application in teaching.
The contrast between these studies highlights the need for continued research on how decision-making capacity is developed within data literacy and what conditions are necessary for teachers to integrate these skills into their instructional practice. While understanding key data concepts is essential, this alone is insufficient unless it translates into improved informed decision-making. Therefore, instructional decision-making must be central to data literacy training, rather than being secondary to statistical and conceptual skills without a clear link to educational practice.
This review underscores the need to broaden the concept of data literacy in teacher education by integrating instructional decision-making as a core component. Traditionally, data literacy has been defined as the ability to collect, analyze, and interpret information to improve teaching. However, the findings indicate that teaching data analysis alone does not ensure effective pedagogical decision-making. As noted by Mandinach and Schildkamp (2021), a major challenge is overcoming misconceptions that prioritize statistical and conceptual skills over practical application. For data literacy to have a real impact, decision-making must be a central element in teacher training models.
Studies that showed the strongest effects in developing instructional decision-making skills, such as LaLonde et al. (2023) and Reeves and Chiang (2019), employed structured methodologies that guided teachers through data analysis and its application in teaching. These programs helped participants build confidence in interpreting and using data for instruction, emphasizing the importance of structured learning. In contrast, Hebbecker et al. (2022) also focused on decision-making but yielded more modest results, possibly due to less structured training, reinforcing the need for progressive, active methodologies that allow teachers to apply their knowledge in real contexts.
From a practical perspective, training programs should prioritize structured experiences beyond data interpretation to strengthen decision-making skills. The most effective programs included well-defined decision-making models, case-based learning, and real-world applications. Additionally, teacher confidence in data use appears to be a crucial factor. While Reeves and Chiang (2019) achieved significant improvements through structured protocols and authentic data interaction, Hebbecker et al. (2022) did not find significant differences, likely due to less structured training.
These findings emphasize that teacher training in data literacy should not be limited to statistical knowledge but must incorporate methodologies that enhance decision-making skills. To maximize impact, training programs should foster confidence and autonomy in data use, ensuring that data literacy translates into real improvements in teaching practice.
A key limitation of this review is the variability in measurement instruments, making direct comparisons between studies difficult. Some focused on instructional decision-making, while others assessed data literacy more broadly, limiting precision in measuring training effects.
Another limitation is the lack of longitudinal studies. While many reported short-term improvements, there is little evidence on whether these effects persist over time or translate into sustained changes in teaching practice. Without follow-up, it remains unclear whether teachers continue applying data literacy skills in real educational settings.
Additionally, many studies did not assess decision-making in depth, making it difficult to isolate its specific impact. Given its importance, future research should prioritize standardized measurement instruments, long-term follow-ups, and a clearer focus on decision-making skills. Strengthening these areas will help design more effective training programs that integrate data literacy into teaching practice.
This review highlights that data literacy training for teachers must go beyond analytical skills to explicitly develop instructional decision-making. The most effective methodologies included structured decision-making models, real-world application, collaborative learning, and long-term follow-up, all of which enhanced teachers’ confidence and accuracy in using data for instructional planning.
Programs that incorporated authentic data use facilitated better integration into teaching practice, reinforcing the importance of data literacy as a continuous learning process rather than a one-time intervention. Additionally, self-efficacy emerged as a critical factor in effective data use, emphasizing the need for practical, structured experiences to build teachers’ confidence in making data-driven decisions.
The original contributions presented in this study are included in this article/supplementary material; further inquiries can be directed to the corresponding author.
FS-R: Conceptualization, Data curation, Formal Analysis, Investigation, Methodology, Visualization, Writing – original draft, Writing – review and editing. CG-P: Formal Analysis, Methodology, Writing – original draft, Writing – review and editing. JL-N: Conceptualization, Methodology, Supervision, Writing – original draft, Writing – review and editing.
The author(s) declare that no financial support was received for the research and/or publication of this article.
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Abrams, L. M., Varier, D., and Mehdi, T. (2020). The intersection of school context and teachers’ data use practice: Implications for an integrated approach to capacity building. Stud. Educ. Eval. 69:100868. doi: 10.1016/j.stueduc.2020.100868
Ansyari, M. F., Schildkamp, K., and Smit, M. (2020). Tracking the process of data use professional development interventions: A systematic review. Educ. Res. Rev. 31:100362. doi: 10.1016/j.edurev.2020.100362
Barragán-Giraldo, D. F., Pirela Morillo, J. E., Riaño-Díaz, J. A., and Munevar Vargas, S. L. (2024). Plataformas digitales y prácticas pedagógicas de docentes: Promesas no cumplidas. Edutec. Rev. Electrónica Tecnol. Educ. 87, 56–73. doi: 10.21556/edutec.2024.87.3067
Bit, D., Biswas, S., and Nag, M. (2024). The impact of artificial intelligence in educational system. Int. J. Sci. Res. Sci. Technol. 11, 419–427. doi: 10.32628/IJSRST2411424
Bolhuis, E. (2019). The development of data use, data skills, and positive attitude towards data use in a data team training for teacher educators. Stud. Educ. Eval. 60, 99–108. doi: 10.1016/j.stueduc.2018.12.002
Boudett, K., City, E., and Murnane, R. (2013). Data Wise, Revised and Expanded Edition: A Step-by-Step Guide to Using Assessment Results to Improve Teaching and Learning, 1st Edn. Cambridge, MA: Harvard Education Press.
Campbell, C., and Levin, B. (2009). Using data to support educational improvement. Educ. Assess. Eval. Accountability 21, 47–65. doi: 10.1007/s11092-008-9063-x
Conn, C. A., Bohan, K. J., Bies-Hernandez, N. J., Powell, P. J., Sweeny, S. P., Persinger, L. L., et al. (2022). Expected data literacy knowledge and skills for early career teachers: Perspectives from school and district personnel. Teach. Teach. Educ. 111:103607. doi: 10.1016/j.tate.2021.103607
Dodman, S. L., Swalwell, K., DeMulder, E. K., Stribling, S. M., and View, J. L. (2021). Critical data-driven decision making: A conceptual model of data use for equity. Teach. Teach. Educ. 99:103272. doi: 10.1016/j.tate.2020.103272
Ebbeler, J., Poortman, C. L., Schildkamp, K., and Pieters, J. M. (2016). Effects of a data use training on educators’ use of knowledge and skills. Stud. Educ. Eval. 48, 19–31. doi: 10.1016/j.stueduc.2015.11.002
Ebbeler, J., Poortman, C. L., Schildkamp, K., and Pieters, J. M. (2017). The effects of a data use training on educators’ satisfaction and data literacy. Educ. Assess. Eval. Accountability 29, 83–105. doi: 10.1007/s11092-016-9251-z
Elvianasti, M., Meitiyani, M., Maesaroh, M., Irdalisa, I., and Yarza, H. N. (2021). Building students’ critical thinking skills through authentic learning by designing eco-brick social campaigns. AL-ISHLAH: J. Pendidikan 13, 1841–1847. doi: 10.35445/alishlah.v13i3.389
Espin, C., Wayman, M. M., Deno, S. L., McMaster, K. L., and de Rooij, M. (2017). Data-based decision-making: Developing a method for capturing teachers’ understanding of CBM graphs. Learn. Disabil. Res. Pract. 32, 8–21. doi: 10.1111/ldrp.12123
Field, A. (2013). Discovering Statistics Using IBM SPSS Statistics, 4th Edn. Thousand Oaks, CA: Sage Publications.
Filderman, M. J., and Toste, J. R. (2022). Effects of varying levels of data use to intensify a multisyllabic word reading training for upper elementary students with or at risk for reading disabilities. J. Learn. Disabil. 55, 393–407. doi: 10.1177/00222194211048405
Filderman, M. J., Toste, J. R., Didion, L., and Peng, P. (2021). Data literacy training for K–12 teachers: A meta-analysis of the effects on teacher outcomes. Remedial Special Educ. 43, 328–343. doi: 10.1177/07419325211054208
Fredricks, J. A., Blumenfeld, P. C., and Paris, A. H. (2004). School engagement: Potential of the concept, state of the evidence. Rev. Educ. Res. 74, 59–109. doi: 10.3102/00346543074001059
Gallagher, F. L., and Melnyk, B. M. (2019). The underappreciated and misunderstood PICOT question: A critical step in the EBP process. Worldviews Evidence-Based Nurs. 16, 422–423. doi: 10.1111/wvn.12408
Gambell, T. (2004). Teachers working around large-scale assessment: Reconstructing professionalism and professional development. Engl. Teach. Pract. Critique 3, 48–73.
Gearhart, M., and Osmundson, E. (2008). Assessment Portfolios as Opportunities for Teacher Learning: Cresst Report 736. Available online at: https://files.eric.ed.gov/fulltext/ED502624.pdf (accessed August 21, 2024).
Harry, A., and Sayudin. (2023). Role of AI in education. Injurity Interdisciplinary J. Hum. 2, 260–268.
Hebbecker, K., Förster, N., Forthmann, B., and Souvignier, E. (2022). Data-based decision-making in schools: Examining the process and effects of teacher support. J. Educ. Psychol. 114, 1695–1721. doi: 10.1037/edu0000530
Hedges, L. V. (1981). Distribution theory for Glass’s estimator of effect size and related estimators. J. Educ. Stat. 6, 107–128. doi: 10.3102/10769986006002107
Henderson, J., and Corry, M. (2021). Data literacy training and use for educational professionals. J. Res. Innov. Teach. Learn. 14, 232–244. doi: 10.1108/JRIT-11-2019-0074
Herrington, J., Parker, J., and Boase-Jelinek, D. (2014). Connected authentic learning: Reflection and intentional learning. Australian J. Educ. 58, 23–35. doi: 10.1177/0004944113517830
Hoogland, I., Schildkamp, K., van der Kleij, F., Heitink, M., Kippers, W., Veldkamp, B., et al. (2016). Prerequisites for data-based decision making in the classroom: Research evidence and practical illustrations. Teach. Teach. Educ. 60, 377–386. doi: 10.1016/j.tate.2016.07.012
Jarousse, J.-P., Prouty, R., and Rooke, B. (2019). Data Must Speak Formative Evaluation Final Report. New York, NY: UNICEF.
Jimerson, J. (2014). Thinking about data: Exploring the development of mental models for “data use” among teachers and school leaders. Stud. Educ. Eval. 42, 5–14. doi: 10.1016/j.stueduc.2013.10.010
Jimerson, J., and Wayman, J. C. (2015). Professional learning for using data: Examining teacher needs & supports. Teach. Coll. Rec. 117:36. doi: 10.1177/016146811511700405
Kennedy-Clark, S., and Reimann, P. (2022). Knowledge types in initial teacher education: A multi-dimensional approach to developing data literacy and data fluency. Learn. Res. Pract. 8, 42–58. doi: 10.1080/23735082.2021.1957140
Kippers, W. B., Poortman, C. L., Schildkamp, K., and Visscher, A. J. (2018). Data literacy: What do educators learn and struggle with during a data use training? Stud. Educ. Eval. 56, 21–31. doi: 10.1016/j.stueduc.2017.11.001
Kristjansdottir, O. B., Vågan, A., Svavarsdottir, M. H., Børve, H. B., Hvinden, K., Duprez, V., et al. (2021). Training interventions for healthcare providers offering group-based patient education: A scoping review. Patient Educ. Counseling 104, 1030–1048. doi: 10.1016/j.pec.2020.12.006
Laher, S., Fynn, A., and Kramer, S. (2019). Transforming Research Methods in the Social Sciences: Case Studies from South Africa, 1st Edn. Johannesburg: Wits University Press.
LaLonde, K., VanDerwall, R., Truckenmiller, A. J., and Walsh, M. (2023). An evaluation of a decision-making model on preservice teachers’ instructional decision-making from curriculum-based measurement progress monitoring graphs. Psychol. Sch. 60, 2195–2208. doi: 10.1002/pits.22863
Lores Gómez, B., Sánchez Thevenet, P., and García Bellido, M. R. (2019). La formación de la competencia digital en los docentes. Profesorado. Rev. Currículum Formación Profesorado 23, 235–259. doi: 10.30827/profesorado.v23i4.11720
Mandinach, E. B., and Friedman, J. M. (2015). How can schools of education help to build educators’ capacity to use data? A systemic view of the issue. Teach. Coll. Rec. 117:50.
Mandinach, E. B., and Gummer, E. S. (2016). What does it mean for teachers to be data literate: Laying out the skills, knowledge, and dispositions. Teach. Teach. Educ. 60, 366–376. doi: 10.1016/j.tate.2016.07.011
Mandinach, E. B., Honey, M., and Light, D. (2006). A Theoretical Framework for Data-Driven Decision Making. Annual Meeting of AERA. Washington, DC: AERA.
Mandinach, E., and Jackson, S. (2012). Transforming Teaching and Learning Through Data-Driven Decision Making. Thousand Oaks, CA: Corwin.
Mandinach, E., and Schildkamp, K. (2021). Misconceptions about data-based decision making in education: An exploration of the literature. Stud. Educ. Eval. 69:100842. doi: 10.1016/j.stueduc.2020.100842
Marsh, J. A. (2012). Interventions promoting educators’ use of data: Research insights and gaps. Teach. Coll. Rec. 114:48. doi: 10.1177/016146811211401106
Merk, S., Poindl, S., Wurster, S., and Bohl, T. (2020). Fostering aspects of pre-service teachers’ data literacy: Results of a randomized controlled trial. Teach. Teach. Educ. 91:103043. doi: 10.1016/j.tate.2020.103043
Miller-Bains, K. L., Cohen, J., and Wong, V. C. (2022). Developing data literacy: Investigating the effects of a pre-service data use training. Teach. Teach. Educ. 109:103569. doi: 10.1016/j.tate.2021.103569
Neugebauer, S. R., Morrison, D., Karahalios, V., Harper, E., Jones, H., Lenihan, S., et al. (2020). A collaborative model to support K-12 pre-service teachers’ data-based decision making in schools: Integrating data discussions across stakeholders, spaces, and subjects. Action Teach. Educ. 43, 85–101. doi: 10.1080/01626620.2020.1842821
Nha Hong, Q., Pluye, P., Fàbregues, S., Bartlett, G., Boardman, F., Cargo, M., et al. (2018). Mixed Methods Appraisal Tool (MMAT) Version 2018 User Guide. Available online at: http://mixedmethodsappraisaltoolpublic.pbworks.com/ (accessed August 21, 2024).
Oslund, E. L., Elleman, A. M., and Wallace, K. (2020). Factors related to data-based decision-making: Examining experience, professional development, and the mediating effect of confidence on teacher graph literacy. J. Learn. Disabil. 54, 243–255. doi: 10.1177/0022219420972187
Ozga, J. (2016). Trust in numbers? Digital education governance and the inspection process. Eur. Educ. Res. J. 15, 69–81. doi: 10.1177/1474904115616629
Peters, M. T., Förster, N., Hebbecker, K., Forthmann, B., and Souvignier, E. (2021). Effects of data-based decision-making on low-performing readers in general education classrooms: Cumulative evidence from six training studies. J. Learn. Disabil. 54, 334–348. doi: 10.1177/00222194211011580
Piro, J. S., Dunlap, K., and Shutt, T. (2014). A collaborative Data Chat: Teaching summative assessment data use in pre-service teacher education. Cogent Educ. 1:968409. doi: 10.1080/2331186X.2014.968409
Piro, J. S., and Hutchinson, C. J. (2014). Using a data chat to teach instructional interventions: Student perceptions of data literacy in an assessment course. New Educ. 10, 95–111. doi: 10.1080/1547688X.2014.898479
Raffaghelli, J. E. (2020). Datificación y educación superior: Hacia la construcción de un marco para la alfabetización en datos del profesorado universitario. Rev. Int. Invest. Educ. Pedagogía 13, 177–202.
Reeves, T. D., and Chiang, J. L. (2018). Online training to promote teacher data-driven decision making: Optimizing design to maximize impact. Stud. Educ. Eval. 59, 256–269. doi: 10.1016/j.stueduc.2018.09.006
Reeves, T. D., and Chiang, J. L. (2019). Effects of an asynchronous online data literacy intervention on pre-service and in-service educators’ beliefs, self-efficacy, and practices. Comput. Educ. 136, 13–33. doi: 10.1016/j.compedu.2019.03.004
Reeves, T. D., and Honig, S. L. (2015). A classroom data literacy training for pre-service teachers. Teach. Teach. Educ. 50, 90–101. doi: 10.1016/j.tate.2015.05.007
Romero, C., and Ventura, S. (2020). Educational data mining and learning analytics: An updated survey. Wiley Interdisciplinary Rev. Data Mining Knowledge Discov. 10:e1355. doi: 10.1002/widm.1355
Schildkamp, K. (2019). Data-based decision-making for school improvement: Research insights and gaps. Educ. Res. 61, 257–273. doi: 10.1080/00131881.2019.1625716
Schildkamp, K., and Ehren, M. (2013). “From “intuition” to “data”-based decision making in Dutch secondary schools?,” in Data-based Decision Making in Education: Challenges and Opportunities, eds K. Schildkamp, M. K. Lai, and L. Earl (Berlin: Springer), 49–67. doi: 10.1007/978-94-007-4816-3_4
Schildkamp, K., and Kuiper, W. (2010). Data-informed curriculum reform: Which data, what purposes, and promoting and hindering factors. Teach. Teach. Educ. 26, 482–496. doi: 10.1016/j.tate.2009.06.007
Schildkamp, K., and Poortman, C. L. (2015). Factors influencing the functioning of data teams. Teach. Coll. Rec. 117, 1–42. doi: 10.1177/016146811511700403
Shewbridge, C., Hulshof, M., Nusche, D., and Stoll, L. (2011). School Evaluation in the Flemish Community of Belgium, OECD Reviews of Evaluation and Assessment in Education. Paris: OECD.
Sinnema, C., Daly, A. J., Liou, Y. H., and Rodway, J. (2020). Exploring the communities of learning policy in New Zealand using social network analysis: A case study of leadership, expertise, and networks. Int. J. Educ. Res. 99:101492. doi: 10.1016/j.ijer.2019.10.002
van Geel, M., Visscher, A. J., and Teunis, B. (2017). School characteristics influencing the implementation of a data-based decision making training. Sch. Eff. Sch. Improv. 28, 443–462. doi: 10.1080/09243453.2017.1314972
Vanlommel, K., and Schildkamp, K. (2018). How do teachers make sense of data in the context of high-stakes decision making? Am. Educ. Res. J. 56, 792–821. doi: 10.3102/0002831218803891
Wayman, J. C., and Jimerson, J. B. (2013). Teacher needs for data-related professional learning. Stud. Educ. Eval. 42, 25–34. doi: 10.1016/j.stueduc.2013.11.001
Wurster, S., Bez, S., and Merk, S. (2023). Does learning how to use data mean being motivated to use it? Effects of a data use training on data literacy and motivational beliefs of pre-service teachers. Learn. Instruct. 88:101806. doi: 10.1016/j.learninstruc.2023.101806
Keywords: data literacy, decision-making, teacher training, educational training, systematic review, instructional decisions, technology
Citation: Sandoval-Ríos F, Gajardo-Poblete C and López-Núñez JA (2025) Role of data literacy training for decision-making in teaching practice: a systematic review. Front. Educ. 10:1485821. doi: 10.3389/feduc.2025.1485821
Received: 24 August 2024; Accepted: 17 March 2025;
Published: 31 March 2025.
Edited by: Christine Beaudry, Nevada State College, United States
Reviewed by: Jane McIntosh Cooper, University of Houston-Clear Lake, United States
Copyright © 2025 Sandoval-Ríos, Gajardo-Poblete and López-Núñez. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Fabián Sandoval-Ríos, fabian.sandoval@unab.cl