- College of Social Science and Humanities, Arba Minch University, Arba Minch, Ethiopia
Assessment for learning practice and learning improvement are the two key variables in this study. This article explores primary school teachers’ assessment for learning practices for students’ learning improvement. The participants (n = 242) were selected through a cluster random sampling technique for a questionnaire survey in the target primary schools to meet the research objective of this study. From the total participants, 15 teachers were selected through purposive sampling for in-depth interviews and five for informal conversations. The data collected through the closed-ended questionnaire were analyzed using means, standard deviations, analysis of variance, and post-hoc methods. The qualitative data acquired through interviews and informal conversations were analyzed using thematic verbal descriptions. The findings indicated that the primary school teachers reported highly positive classroom environment practices and moderately positive practices of learning intentions and success, feedback in assessment, and self- and peer-assessment. Although the quantitative data showed that the primary school teachers had high and moderate assessment for learning practice, the interviews revealed that they had low confidence in assessment for learning practice owing to challenges faced during practice. These challenges relate to transparency, experience, training, school problems, and reliance on preferences. Finally, based on the study’s implications, recommendations are made for future directions of research that will allow better comprehension of assessment for learning practice in relation to students’ learning improvement.
1. Introduction
Teachers find it challenging to achieve student learning improvement without effective assessment for learning practice at all levels of educational settings. According to Hargreaves (2005) and Kangaslampi et al. (2022), assessment as part of classroom activities is fundamental to promoting learning and achievement. If this is the case, then assessment for learning is essential for enhancing students’ learning improvement. Improvement is the progress made when learners know and understand what they need to do to improve and are given time to take the necessary action in their learning process. Recent trends in educational assessment studies (Black and Wiliam, 2006, 2009; Bordoh et al., 2015; Sintayehu, 2016; Carless and Boud, 2018) indicate that assessment for learning has a significant impact on students’ learning progress, both positively and negatively. According to Lysaght et al. (2017), assessment for learning diagnoses and identifies students’ learning needs and assists throughout the educational system in a cycle of ongoing improvement. For this purpose, teachers’ assessment literacy is critical for advancing classroom teaching and learning (Pattalitan, 2016; Winarso, 2018; Gebremariam and Gedamu, 2022). Since students’ learning is measured by assessment tasks, teachers’ primary focus could be on investigating assessment for learning knowledge and practice (Brown and Abeywickrama, 2010). In addition, Sultana (2019) identified teachers’ assessment knowledge, assessment methods, and feedback provision on classroom assessment tasks as key elements of teachers’ assessment literacy. As a result, teachers are expected to be aware of the various facets of assessment, namely assessment of learning and assessment for learning, in order to facilitate appropriate assessment approaches that can improve future learning progress (Molloy et al., 2020; Hussain et al., 2021).
Although primary school teachers’ assessment for learning practice is believed to be important for instruction, learning, and teaching improvement, studies (Popham, 2017; Hussain et al., 2021; Yan and Pastore, 2022) have shown that teachers regard assessment for learning in daily classroom practice as critical for students’ learning improvement. From this perspective, the assessment for learning practices of the study area, particularly the targeted primary schools in the Gamo Zone, require intensive exploration to gain deeper insights. Such insights can serve as an input for improving educational quality and building resilience. Most teachers teaching in the primary schools who participated in this study were expected to hold at least a diploma from a public or private college. To this end, the current study aimed to explore whether primary school teachers practice assessment for learning. To attain this research objective, the following two basic research questions were set.
1. What do primary school teachers’ practices of assessment for learning for students’ learning improvement look like?
2. Are there any potential challenges that hinder the practice of assessment for learning?
Besides, the study findings will inform assessment for learning practices in actual learning progress among primary school teachers by identifying the sustainability of the practice and the underlying potential challenges at the grassroots level.
2. Conceptual frameworks on assessment for learning practice for learning improvement
Several theories, models, and conceptual frameworks have been employed to investigate assessment for learning within a student-centered learning approach. This study systematically presents the essential models and conceptual frameworks that serve as a base for exploring assessment for learning practice and its sustainability for students’ learning improvement, in line with the envisioned objective. The models and conceptual frameworks are critical in revealing the state of the art, where the basic research process is conceptually linked with findings that indicate its empirical insights.
2.1. Assessment for learning practice
Assessment for learning has been characterized not as a test but as a way of advancing learning progress (Serin, 2015; Subheesh and Sethy, 2020). Assessment is used to aid the attainment of educational goals, especially by supporting students’ learning progress during their school journey. Birenbaum et al. (2015), Black and Wiliam (1998), and Hailay (2017) described how students and teachers may use the information gained through assessment practice to determine their learning and teaching progress. In addition, teachers are expected to gain information about the learning progress of students as well as their own teaching process (Kangaslampi et al., 2022; Yan and Carless, 2022) and to ensure students’ learning improvement through students’ involvement in self- and peer-assessment, portfolios, conferencing, and student-centered activities in general (Lysaght et al., 2017; Boud and Dawson, 2021).
Besides, according to Boud and Dawson (2021), Gebremariam and Gedamu (2022), Hailay (2017), and Teshome (2016), teachers’ assessment for learning practices in the Ethiopian educational system seem questionable. Assessment for learning practice is expected to ensure students’ learning improvement through day-to-day classroom practice (Bordoh et al., 2015; Popham, 2017; Mussawy et al., 2021; Yan and Pastore, 2022). This implies that assessment is used as a means of learning progress for improvement (Molloy et al., 2020; Hussain et al., 2021). Similarly, assessment in general is a process for obtaining information to make decisions about students’ learning progress, teachers’ teaching approaches, and out-of-class educational matters; in particular, assessment for learning is considered a key ingredient in improving the quality of learning outcomes. To sum up, assessment for learning practice concerns the implementation of assessment for learning principles grounded in constructivist and social constructivist theories (Birenbaum et al., 2015; Yan and Pastore, 2022). It employs such assessments for the purpose of individual learning development, for example, self- and peer-assessment, formative and diagnostic assessment, and ongoing feedback, as well as pedagogic style, student-teacher interaction, self-reflection, internal motivation, and different ways of assessment practice (Maclellan, 2017). This type of assessment is conducted with mastery of educational goals in mind (Lysaght et al., 2017).
2.2. Theories of learning improvement
Teachers’ perspectives on assessment for learning practice may be on the right track for measuring long-term learning improvement. According to Maclellan (2017), learning is a new experience established by students’ own continuous efforts. It is impossible to assess students’ current competency unless the learning progress is accompanied by genuine practices based on theories of constructivism or social constructivism (Boud et al., 2018; Kangaslampi et al., 2022). According to Farrel (2017), meaningful assessment cannot be separated from learning and teaching. Teachers use these standards to weigh their own educational approaches and the teaching and learning implementation processes. Students evaluate their own and their peers’ learning abilities and provide feedback (Boud et al., 2018; Winarso, 2018; Brown et al., 2019; Capan et al., 2020). Students are expected to adopt a problem-solving approach to their learning (Crisp, 2012; Boud and Dawson, 2021; Monteiro et al., 2021). At the same time, teachers must have pedagogical, content-conception, and application skills in addition to evaluating learning activities (Desalegn, 2014; Hailay, 2017; Carless and Boud, 2018; Hussain et al., 2021). There is a need for training that allows teachers and students to compare their perspectives on the learning context with the aims and objectives of the lesson (Lysaght and O’Leary, 2013; Nieminen et al., 2023).
As Popham (2017) stated, “assessment is the instrument of learning improvement” (p. 3). In line with this, the Ethiopian educational policy also has a mission to develop productive citizens (Sintayehu, 2016; Gedamu and Shewangezaw, 2020). It encourages teachers to employ assessment for learning through continuous assessment practice at all levels of the educational sector in the country, although its implementation often resembles traditional assessment types (Black and Wiliam, 2006; Sintayehu, 2016; Teshome, 2016). As Sintayehu (2016) described, the Ethiopian educational system is designed to make students dynamic citizens capable of performing meaningful tasks in the context of learning and teaching. On the other side, teachers must help their students become proficient in performing the tasks they will encounter in their careers. On this basis, assessment for learning is the medium of students’ learning progress that supports their learning development (Earl, 2003). Beyond this, teachers’ teaching skills and assessment practices are important variables in learning improvement (Earl, 2003; Imran, 2012; Capan et al., 2020). As a result, assessment for learning is a continuous process that occurs between teaching instruction and learning progress (Schildkamp et al., 2020).
2.3. Assessment for learning and learning improvement
In the instructional process, effective assessment for learning practice results in sustainable improvement in students’ learning. This practice has its own objective: to assist students’ learning progress by improving their learning, updating their cognitive processes, and developing their abilities through assessment for learning concepts and practices (Teshome, 2016; Mussawy et al., 2021). Previous studies (e.g., Popham, 2008, 2017; Black and Wiliam, 2009; Pattalitan, 2016; Farrel, 2017; Lysaght and O’Leary, 2017; Sultana, 2019; Gedamu and Shewangezaw, 2020; Schildkamp et al., 2020) have confirmed that this type of assessment is an ongoing process of learning and teaching that safeguards the attainment of educational objectives and the specific goals of subject matter, respectively. Although assessment for learning has many objectives, its implementation is complex (Sultana, 2019; Nieminen et al., 2023). According to Lysaght et al. (2017) and Popham (2008), teachers’ perspectives on assessment for learning and classroom practices are as follows: (i) there is a lack of awareness of the concepts and practices of assessment for learning, which makes it difficult to implement assessment in large classes with high noise levels; and (ii) there is a mismatch between teachers’ assessment for learning beliefs and orientations (they assume it is time-consuming and difficult to implement in classrooms) and the assessment for learning principles that help students’ learning progress (Brown et al., 2019; Schildkamp et al., 2020). Other challenges include teachers’ preference for objective question tests to save time and reduce workload (Mussawy et al., 2021). The identified potential challenges influence teachers’ assessment for learning practices for long-term student learning improvement (Carless and Boud, 2018; Brown et al., 2019).
According to Black and Wiliam (2006), Mohamed et al. (2021), Yan and Carless (2022), and Yan and Pastore (2022), assessment for learning has two primary goals: (1) communicating students’ learning progress to the respective subject matter teachers and administrators, and (2) providing immediate feedback to students on their learning progress to help them identify gaps with respect to the intended learning outcomes. Black and Wiliam (2009) revealed that assessment for learning improves both academic performance and social development as students interact with their peers to share learning goals. Assessment for learning is the process of continuously collecting and analyzing data on the development of student learning (Popham, 2017; Molloy et al., 2020).
In the instructional process, teaching, learning, and assessment are inextricably linked (Brown et al., 2019; Sultana, 2019; Kangaslampi et al., 2022). According to Gebremariam and Gedamu (2022), there are three models in education: (1) curricula, syllabi, and student books or references; (2) teaching and learning instruction; and (3) assessment methods to implement the instruction of teaching and learning. Furthermore, assessment practice is at the center of the learning model, leading to learning improvement and forming the framework for classroom learning practice (Hussain et al., 2021). Consequently, there is a need for research on primary school teachers’ assessment for learning practice for long-term learning improvement, which may be influenced by personal and contextual factors and educational values (Lysaght et al., 2017; Boud and Dawson, 2021; Mohamed et al., 2021). Previous findings may not apply directly to Ethiopian primary school teachers, as their assessment for learning practice could differ from that of their peers worldwide; the concepts are similar but the contexts differ (Sintayehu, 2016; Lysaght et al., 2017; Mohamed et al., 2021; Monteiro et al., 2021). More research is needed to determine whether primary school teachers’ assessment for learning practices in Ethiopian schools reflect a learning environment consistent with previous research findings. As a result, this research aimed to learn more about primary school teachers’ assessment for learning to improve learning in the Gamo Zone.
Generally, the desired teacher assessment for learning practice for long-term learning improvement results from a positive interlink between these two focus points. Therefore, this study aims to address (i) primary school teachers’ assessment for learning practice and (ii) the challenges in assessment for learning practice.
3. Research methods
In line with the reviewed literature, this study used a convergent parallel mixed methods research design to answer the research questions. Combining both quantitative and qualitative methods provides a better understanding of the research problem than either approach alone and improves the validity and credibility of the findings. Furthermore, mixed approaches are appropriate for triangulating facts and opinions in response to the research questions and for developing insights into the basic exploration motivating the study.
3.1. Participants: Sampling procedures
The sample size was determined first because there were too many primary schools and teachers in the study area overall. At the beginning of the study, the sample drawn from the Wereda educational offices in the Gamo zone was determined using the Yamane (1967) formula, n = N / (1 + N e²), where n denotes the sample size, N the population size, and e the level of precision (0.05), indicating the maximum variability. Because no prior research was available to serve as a baseline for the study, this formula is preferred for applications with a 5% error margin and a 95% level of confidence (Rose et al., 2015). Furthermore, this formula is suitable because it assumes a normal distribution and maximum variability (a 50% response distribution), which yields a conservative sample size when no previous research data exist concerning either the compositional or the locational focus of the study. There were 72 primary schools in the catchment area. Because of the large number of schools in the Gamo zone, which includes 14 Wereda districts and four city administrations, sampling was necessary. Accordingly, after receiving official approval from the relevant university administration to carry out this research, twenty-one (21) of the seventy-two (72) primary schools (grades 1–8) in the Gamo zone, South Ethiopia regional state, were selected using random cluster sampling. Then, 242 respondent teachers were selected from a total of approximately 840 primary school teachers in the different Gamo zone districts, as presented in Table 1.
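For readers who wish to reproduce the sample size calculation, the following is a minimal Python sketch of the Yamane (1967) formula as stated above. The population figure of 840 teachers and the 5% precision level are taken from the text for illustration only; the realized sample of 242 also reflects the subsequent cluster selection and response, so the computed figure need not coincide exactly with the reported sample.

```python
import math

def yamane_sample_size(population: int, precision: float = 0.05) -> int:
    """Yamane (1967) formula: n = N / (1 + N * e**2), rounded up to a whole respondent."""
    return math.ceil(population / (1 + population * precision ** 2))

# Illustrative values from the text: N = 840 primary school teachers,
# e = 0.05 (5% margin of error at a 95% confidence level).
print(yamane_sample_size(840, 0.05))
```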
The total sample of the survey was 242 teachers (28.8%) of the population of 840. From the respondents’ reports, the teachers’ sex, teaching experience, and educational status were aligned with the study’s variables. Accordingly, respondents were grouped into three categories, as presented in Table 2.
The selection of representatives for the in-depth interviews and informal conversations was also carried out as part of the sampling procedure. On this basis, using purposive sampling, fifteen (15) primary school teachers participated in in-depth interviews and five (5) in informal conversations. Thus, the role of assessment for learning in achieving students’ learning improvement goals was examined through an oral discussion approach.
3.2. Data collection instruments
Multiple instruments were deployed for data collection. To gather pertinent data for this study, primary school teachers completed a structured questionnaire and took part in in-depth interviews and informal conversations. The questionnaire and in-depth interview questions were adapted from the assessment for learning audit instruments that Lysaght and O’Leary (2013, 2017) developed for use at various levels of schools and environments. The instruments were translated into Amharic, the primary language of instruction in grades 1–8, and administered in that language. The following provides more specific information about the instruments.
3.2.1. Questionnaire
The questionnaire was aimed at exploring primary school teachers’ practices of assessment for learning to support student learning improvement. Participants were asked about the purposes and principles of assessment, differences between testing and assessment, the roles of teachers and students in the classroom, and assessment approaches. The questionnaire was composed of three sections. The first section comprised nine items covering the participants’ demographic information. The second section included four sub-dimensions rated on a five-point Likert agreement scale for evaluating assessment for learning practice in primary schools. The third section was an open-ended questionnaire consisting of four (4) items related to teachers’ assessment practices for student learning improvement. Before the actual data collection, a pilot study was conducted to enhance the reliability and validity of the data collection instruments. To check the effectiveness of the instruments translated from English to Amharic and to improve them, the pilot study was conducted with 32 primary school teachers enrolled in a summer program training. Two teachers were selected and invited to comment on the clarity and appropriateness of the questionnaire survey for the study.
As a result, information about the comprehensive questionnaire is presented in Table 3.
The internal and external validity of the questionnaire were checked with appropriate measures. The internal content validity of the assessment for learning practice questionnaire was checked using the Item Content Validity Index (I-CVI) for clarity, relevance, and appropriateness and was found to be 0.808, indicating that the instrument was valid in its content. Furthermore, the study’s external validity rests on the assumption that the sample of 242 teachers was representative of the 840 primary school teachers in the study area. However, because the study was demographically restricted to the Gamo zone, the sample was valid for the study area, but the results may not be generalizable beyond it. In addition, after running the pilot data, the internal consistency reliability of the questionnaire translated from English to Amharic showed a Cronbach’s alpha of 0.908 for the full scale. Internal consistency reliability was also computed for each assessment for learning practice subscale, with Cronbach’s alpha values of (1) learning intentions and success, 0.696; (2) classroom environment practice, 0.761; (3) feedback in assessment practice, 0.759; and (4) peer- and self-assessment practice, 0.906. Thus, the instrument can consistently measure what it is supposed to measure, comparable to the internal consistency reliability of 0.861 (Cronbach’s alpha) reported by Lysaght and O’Leary (2013, 2017) for the original English version. As a result, the Amharic version of the assessment for learning practice questionnaire achieves its objective.
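As an illustration of how the reported reliability and content validity indices can be computed, the sketch below shows a standard Cronbach’s alpha calculation and an item-level content validity index (I-CVI). The function names, the 4-point relevance convention for the I-CVI, and the toy data are assumptions for demonstration; the study’s actual pilot data are not reproduced here.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)        # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of respondents' total scores
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

def item_cvi(relevance_ratings: np.ndarray, cutoff: int = 3) -> np.ndarray:
    """I-CVI per item: proportion of expert raters scoring the item at or above
    `cutoff` on a 4-point relevance scale (a common convention, assumed here)."""
    ratings = np.asarray(relevance_ratings)
    return (ratings >= cutoff).mean(axis=0)

# Toy illustration only (3 respondents x 4 items).
pilot = np.array([[4, 3, 4, 5],
                  [3, 3, 2, 4],
                  [5, 4, 4, 5]])
print(round(cronbach_alpha(pilot), 3))
```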
3.2.2. In-depth interviews and conversations
Fifteen (15) teachers from the focal primary schools were interviewed regarding the sustainability of their assessment for learning practice and the challenges faced in primary schools. The aim was to examine assessment for learning practice in the primary school education system and to converge with the quantitative data results. The core motivational question was, “Are there any potential challenges during the assessment for learning practice?” If the answer was “Yes,” it was followed up with questions such as “What are these? Could you clarify them? Please tell me briefly,” and so on. The interviews were conducted by the researchers to triangulate the realities of teachers’ practice of assessment for learning. From this point of view, primary school teachers shared their experiences of the challenges faced when implementing assessment for learning.
Informal conversations were held with five (5) of the primary school teachers who participated in the in-depth interviews to acquire details about the obstacles that teachers face when practicing assessment for learning in everyday classroom settings, as well as to clarify the numerical data results. The informal discussion centered on the challenges that teachers face when implementing assessment for learning to improve student learning. According to Swain and King (2022), there are two types of informal conversation used in data collection: overheard or observed conversations and shared or participatory conversations. In this study, the participatory conversation type was applied to triangulate data through interactive dialogue between the researchers and the participants during the fieldwork.
After gathering qualitative data through in-depth interviews and informal conversations, the trustworthiness of the data was assessed using the quality criteria proposed by Korstjens and Moser (2018): credibility, dependability, transferability, and confirmability. On this basis, the researchers cross-checked the data results for transferability and evaluated the credibility of the study’s interpretation against the participants’ original data using member checking. Two participants in the qualitative study also used an audit trail to assess the data’s dependability, and they confirmed that the data from the study’s target participants supported the study’s conclusions, interpretation, and recommendations. The confirmability of the data results was checked by two experienced researchers.
3.3. Data analysis techniques
Data collection took place in the 2021/22 academic year. The data collection procedure had multiple stages. First, school teachers were briefed on the study’s relevance and context before being asked for their permission to participate. Following that, the questionnaire was administered with support so that participants could fill it out correctly, followed by the interviews and informal conversations. For analysis, the closed-ended survey responses were coded in SPSS version 25. To determine whether there were statistically significant mean differences between the expected and observed means of assessment for learning practice for learning improvement, a one-sample t-test was used. Before applying descriptive and inferential statistics, the collected data were checked against the basic assumptions of the statistical tests used. The distribution of the quantitative scores at the item and scale levels was normal, since the skewness and kurtosis values were between +1.5 and −1.5. In addition, the scores had no significant extreme outliers that could influence the mean scores in the data analysis. Moreover, Levene’s test of homogeneity of variance for the subscales of assessment for learning practice showed no significant differences (df (2, 240) = 0.043, p > 0.05). Furthermore, the normal probability plots (normal Q–Q plots) showed straight lines, indicating normal distributions for the assessment for learning practice variables. Thus, the descriptive and inferential statistics presented below were applied as instruments for data analysis.
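The study used SPSS 25; as an open alternative, the following Python sketch reproduces the kinds of assumption checks described above (skewness and kurtosis, Levene’s test, and a one-sample t-test). The file name, column names, and the comparison value for the t-test (the scale midpoint of 3.0) are assumptions for illustration only.

```python
import pandas as pd
from scipy import stats

# Assumed layout: one column of scale scores per AfL dimension (names are illustrative).
df = pd.read_csv("afl_practice_scores.csv")  # hypothetical export of the survey data
dims = ["learning_intentions", "classroom_environment", "feedback", "peer_self_assessment"]

for dim in dims:
    scores = df[dim].dropna()
    print(dim,
          "skew=%.2f" % stats.skew(scores),          # acceptable if within +/-1.5
          "kurtosis=%.2f" % stats.kurtosis(scores))  # acceptable if within +/-1.5

# Levene's test for homogeneity of variance across the dimension scores.
print(stats.levene(*(df[d].dropna() for d in dims)))

# One-sample t-test of a dimension mean against the scale midpoint
# (3.0 is assumed here as the "expected" mean referred to in the text).
print(stats.ttest_1samp(df["classroom_environment"].dropna(), popmean=3.0))
```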
Descriptive statistics were applied to analyze the quantitative data obtained through the questionnaire. Specifically, standard deviations and mean scores at the item level, together with item-aggregate mean values, were employed to address assessment for learning practice and its subscales. Because the mean values alone could not show whether there were statistically significant differences among the mean values of the dimensions, an analysis of variance (ANOVA) was employed. Tukey’s HSD test was then run to compare the mean scores, and a 5% (α = 0.05) significance level was used throughout the study. In addition, participants’ ratings of their assessment for learning practice were interpreted using the following common scale indicated by Magulod (2019): 4.20–5.00 (Very High/Strongly Agree); 3.40–4.19 (High/Agree); 2.60–3.39 (Moderate/Undecided); 1.80–2.59 (Low/Disagree); and 1.00–1.79 (Very Low/Strongly Disagree).
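The sketch below mirrors the analysis pipeline just described: descriptive statistics per dimension, a one-way ANOVA across the four dimensions, Tukey HSD post-hoc comparisons at α = 0.05, and Magulod’s (2019) interpretation bands. The data file and column names are hypothetical, and scipy/statsmodels stand in for SPSS.

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("afl_practice_scores.csv")  # hypothetical file, as above
dims = ["learning_intentions", "classroom_environment", "feedback", "peer_self_assessment"]

# Descriptive statistics per dimension.
print(df[dims].agg(["mean", "std"]).round(2))

# One-way ANOVA across the four dimensions.
print(stats.f_oneway(*(df[d].dropna() for d in dims)))

# Tukey HSD post-hoc comparisons (long format: one row per rating).
long = df[dims].melt(var_name="dimension", value_name="score").dropna()
print(pairwise_tukeyhsd(long["score"], long["dimension"], alpha=0.05))

def magulod_band(mean_score: float) -> str:
    """Interpretation bands reported by Magulod (2019)."""
    if mean_score >= 4.20: return "Very High / Strongly Agree"
    if mean_score >= 3.40: return "High / Agree"
    if mean_score >= 2.60: return "Moderate / Undecided"
    if mean_score >= 1.80: return "Low / Disagree"
    return "Very Low / Strongly Disagree"

print({d: magulod_band(df[d].mean()) for d in dims})
```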
Concerning the qualitative data analysis, the interview transcriptions were examined for themes that emerged regarding teachers’ assessment for learning practice. Themes related to the status of and changes in teachers’ assessment for learning practice were categorized and analyzed thematically through verbal descriptions, and themes related to the status of and changes in the interviewees’ assessment for learning practice dimensions were sorted and analyzed in the same way. The open-ended questionnaire, in-depth interview, and informal conversation data were coded inductively and thematically, and then described verbally to arrive at the ultimate research findings. These data were also recorded and transcribed for thematic analysis in order to triangulate the teachers’ practice of assessment for learning to improve students’ learning and the challenges they faced during their practice.
3.4. Ethical consideration
The manuscript is original, and the data reflect the actual perspectives of the teachers who took part. This manuscript has not been published in any form or language, in part or in its entirety, anywhere else. The outcomes are presented clearly and truthfully, without fabrication, falsification, or improper data manipulation. No information, text, or concepts of others are represented as though they were the authors’ original work, and others’ works are appropriately acknowledged. Before administering the survey questionnaires, official letters from Arba Minch University and the study district were obtained and used to contact survey respondents at the study sites. The data collection for the survey questionnaire was done in accordance with ethical considerations. The first author received ethics approval from the Arba Minch University Institutional Research Ethics Board as part of the preliminary data collection duties. In addition, prior informed consent was obtained from the study participants before collecting the data, with a clear explanation that the required data were only for research purposes and would be handled confidentially.
4. Data analysis results
The quantitative data collected through the questionnaire are presented and analyzed in the first subsection of the data analysis. Next, the data gathered through interviews are presented and thematically analyzed in the second subsection.
4.1. Primary school teachers’ assessment for learning practice
The questionnaire data were used to answer the first research question, concerning whether primary school teachers’ assessment for learning practice is beneficial to students’ learning improvement. Mean values and standard deviations were used to analyze and determine the participants’ assessment for learning practice. The analysis of primary school teachers’ assessment for learning practice for learning improvement, based on the questionnaire data, is provided in Table 4.
Table 4 shows the descriptive statistics of the primary school teachers’ assessment for learning practice for the four dimensions and the overall scale. The mean score for classroom environment practice (M = 3.49, SD = 0.53) is the highest, followed by the feedback in assessment practice dimension (M = 3.37, SD = 0.59) and, third, the learning intentions and success dimension (M = 3.31, SD = 0.50). In contrast, peer- and self-assessment practice (M = 3.29, SD = 0.75) has the lowest mean value. The overall scale mean value was 3.36, with a standard deviation of 0.48. However, the mean values alone cannot determine whether there are statistically significant differences between the mean values of the four dimensions. To that end, an ANOVA test was used to determine whether there were significant differences in teachers’ ratings of the four dimensions of assessment for learning practice, as shown in Table 5.
To determine whether there were statistically significant differences between the mean scores of the assessment for learning practice dimensions, a one-way between-groups ANOVA was performed. The dimension mean scores differed significantly (F (1, 241) = 7.252, p < 0.001), as shown in Table 5. The effect size (η²) was 0.079, which was considered small, indicating that about 7.9% of the variance in teachers’ ratings is explained by the dimension being rated. However, this result alone does not show which dimensions contributed significantly to the differences. Post-hoc comparisons of the dimensions using the Tukey HSD test were therefore computed to identify the dimensions that contributed significantly to the difference, as shown in Table 6.
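For reference, the reported eta-squared is conventionally obtained from the ANOVA sums of squares as sketched below; this restates the standard definition rather than recomputing any value from the study’s data.

```latex
% Standard definition of eta-squared from a one-way ANOVA table:
\eta^{2} \;=\; \frac{SS_{\text{between}}}{SS_{\text{total}}}
% An eta-squared of 0.079 therefore means that roughly 7.9\% of the variance
% in the ratings is associated with the dimension factor.
```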
The Tukey HSD comparison of mean scores showed statistically significant differences between learning intentions and success (M = 3.31) and classroom environment (M = 3.49), between learning intentions and success (M = 3.31) and feedback in assessment practice (M = 3.37), and between learning intentions and success (M = 3.31) and peer- and self-assessment practice (M = 3.29), at p < 0.05. Similarly, there was a statistically significant mean score difference between classroom environment (M = 3.49) and feedback in assessment practice (M = 3.37). In addition, there was a statistically significant difference between feedback in assessment practice (M = 3.37) and peer- and self-assessment practice (M = 3.29), at p < 0.01. Thus, the classroom environment dimension (M = 3.49) of assessment for learning practice had the highest rating, the feedback in assessment practice dimension (M = 3.37) was second, and the learning intentions and success (M = 3.31) and peer- and self-assessment practice (M = 3.29) dimensions were third and fourth, respectively. According to Magulod’s (2019) scale, the classroom environment dimension (M = 3.49) reflects high teacher confidence in assessment for learning practice, while the other three dimensions and the overall scale fall into the moderate category. Therefore, the results show that teachers had moderate confidence in their assessment for learning practice in those three dimensions.
4.2. Challenges in assessment for learning practice
Primary school teachers were found to be exposed to diverse obstacles in assessment for learning practice. The data from the open-ended questionnaire, interviews, and informal conversations show that there is controversy about teachers’ promoted practice of assessment for learning and its sustainability for students’ learning improvement. In addition, based on the analysis of data from these instruments, the challenges that participant primary school teachers reported facing during assessment for learning practice, and that hindered the sustainability of students’ learning improvement, are thematically categorized as follows (Table 7).
The practice of assessment for learning for students’ learning progress was beset by many challenges across the study areas. Assessment for learning stands for learning improvement (Kangaslampi et al., 2022), with the concept understood in accordance with the existing context of the study issues. This context demanded an examination of the difficulties encountered by primary school teachers in the areas of focus. Hence, the three instruments were used to gather information qualitatively, and the challenges primary school teachers face in assessment for learning practice were summarized and listed from the data gathered. In sum, it appears reasonable to conclude that primary school teachers faced many challenges during the practice of assessment for learning.
5. Discussion and conclusion
5.1. Discussion
The data collected from primary school teachers through questionnaires, interviews, and informal conversations confirmed the participants’ familiarity with assessment for learning in primary school classrooms. The data also revealed teachers’ difficulties with implementing assessment for learning. In terms of teachers’ assessment for learning practices, the data from the questionnaire appear to contradict the information gleaned from the informal conversations. Regarding the same assessment for learning practice in primary school classrooms, the quantitative and qualitative data addressing the first research question contradicted each other, demonstrating a disagreement between the quantitative and qualitative results.
The study’s first objective was to explore primary school teachers’ assessment for learning practice. The quantitative data show that primary school teachers are familiar with assessment for learning. On the contrary, the qualitative data showed that teachers practice assessment for learning to a limited extent, and the informal conversations likewise indicated that teachers practiced it little in their teaching. This result is consistent with previous research (Hailay, 2017; Gedamu and Shewangezaw, 2020; Hussain et al., 2021; Gebremariam and Gedamu, 2022; Yan and Carless, 2022), which found that teachers practiced assessment for learning to a limited extent in different contexts. Moreover, according to the qualitative data, the participating teachers were confused by, or disliked, using assessment for learning for learning improvement.
The second objective of this study was to investigate the challenges in assessment for learning practice. According to the qualitative portion of the data, the participating teachers were unfamiliar with practical assessment for learning skills in primary school classrooms due to a variety of challenges, including a lack of transparency and experience, as well as preference- and school-related issues. The verbal description results revealed that the participating teachers were less aware of assessment for learning practice. This finding supports previous research (Black and Wiliam, 1998; Pattalitan, 2016; Sultana, 2019; Boud and Dawson, 2021). The findings of the qualitative and quantitative data agree with and confirm previous work and observations on assessment for learning literacy, classroom practice, and assessment practice preferences by Black and Wiliam (2006), Lysaght et al. (2017), Sintayehu (2016), Sultana (2019), and Teshome (2016). The current study’s findings align with those of Gebremariam and Gedamu (2022) and Monteiro et al. (2021), who also revealed the extent of primary school teachers’ assessment for learning practices, although they did not define the concepts of assessment for and of learning separately. In addition, Hailay (2017) and Kangaslampi et al. (2022) discovered that both teachers and students preferred their old traditions, beliefs, and preferences, and that they collaborated to create a richer pedagogical context. The verbal description results for the second research question show that primary school teachers make less learning progress through assessment for learning practice due to a variety of challenges, including transparency, experience and training, school issues, and preference-related problems. This suggests that many teachers are unsure and lack experience in improving students’ learning progress through assessment for learning practice, as described in previous studies (Sintayehu, 2016; Carless and Boud, 2018; Sultana, 2019; Gedamu and Shewangezaw, 2020; Boud and Dawson, 2021; Mohamed et al., 2021; Kangaslampi et al., 2022).
Regarding assessment practice, teachers are apprehensive about using assessment in class for academic achievement. This supports the findings of Teshome (2016) and Sintayehu (2016), who found that many teachers lack the skill and knowledge for assessment for learning practice in classrooms, preferring instead to continue with the same traditional methods. Furthermore, Brown and Abeywickrama (2010), Maclellan (2017), Popham (2017), and Sultana (2019) concluded that teachers are inadequately trained in classroom assessment implementation, and according to Sultana (2019), teachers’ literacy in assessment for learning practice is limited. However, the new challenges identified in this study are a lack of transparency regarding assessment practice standards in the learning context, as well as school administrative issues (Schildkamp et al., 2020). These challenges have overlapping effects: when there is an instructional supervision problem with guidance documents and the fundamentals of using assessment for learning inside classrooms, there is a lack of transparency; and where transparency is lacking, the school culture may be poorly organized or positioned to use assessment for learning in primary schools.
5.2. Conclusion
This study attempted to investigate assessment for learning, specifically the assessment for learning practices used by primary school teachers, as well as the current challenges that affect the sustainability of students’ learning improvement. Primary school teachers face a variety of challenges in their efforts to improve learning, and assessment for learning in primary school classrooms is only partially practiced as mandated. Although the modularized curriculum is designed around individual learning, cooperative learning, and learner-centered practices, primary school teachers did not involve their students in assessment practices, beyond occasionally asking students to collaborate with their classmates. According to the qualitative findings of this study, primary school teachers were aware of assessment for learning in a literal sense, but its actual practice, which encourages students to engage in their own learning progress, is quite low in comparison with the expected curriculum goals, and its practice depends on varying circumstances. These findings show that primary school teachers’ assessment for learning practice is minimal. This, in turn, suggests that Ethiopia’s current assessment for learning practice for primary school teachers needs to be revised because it misses the primary goal of learning improvement, which is autonomous lifelong learning through self-reflection on practice.
Finally, the following recommendations are drawn from the findings:
Because primary school teachers are unfamiliar with assessment for learning, the Ministry of Education, colleges and universities, and education bureaus in the South Ethiopia region should organize ongoing professional development opportunities, such as on-the-job assessment training. College and university community outreach offices should diagnose such problems using this type of research and arrange teacher training on assessment theories and practices to improve student learning. Professional development in areas such as assessment would help teachers understand the dynamics and nature of assessment practices. Although the sampling technique used in the study area was valid, the study’s demographic restriction means the results may not be generalizable; therefore, future research could use a more extensive study area and a larger sample of participants to reduce the risk of reporting false-negative or inaccurate findings, resulting in more accurate and representative results. Finally, the impact of the respondents’ diverse backgrounds must be assessed for a representative comparison.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The research design and data collection process were approved by two reviewers assigned by the College of Social Science and Humanities Research Coordination office at Arba Minch University. All participants were informed that they could voluntarily participate in the study and that the results would be used for educational purposes.
Author contributions
HG and AG contributed to the article writing, data collecting, and data analysis.
Acknowledgments
The authors would like to thank Arba Minch University’s Research Directorate for funding this research project through the College of Social Science and Humanities research coordination office, with the project code: GOV/AMU/TH23/CSSH/ELL-AM/03/13 for data collection and statistical analysis payment.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Birenbaum, M., DeLuca, C., Earl, L., Heritage, M., Klenowski, V., Looney, A., et al. (2015). International trends in the implementation of assessment for learning: implications for policy and practice. Policy Fut. Educ. 13, 117–140. doi: 10.1177/1478210314566733
Black, P., and Wiliam, D. (2006). “Assessment for learning in the classroom” in Assessment and learning. ed. J. R. Gardner (Thousand Oaks, CA: Sage), 9–26.
Black, P., and Wiliam, D. (2009). Developing the theory of formative assessment. Educ. Assess. Eval. Account. 21, 5–31. doi: 10.1007/s11092-008-9068-5
Black, P., and Wiliam, D. (1998). Assessment and classroom learning. Assess. Educ. Princ. Policy Pract. 5, 7–74. doi: 10.1080/0969595980050102
Bordoh, A., Eshun, I., Quarshie, M., Bassaw, T. K., and Kwarteng, P. (2015). Social studies teachers’ Awareness Base in authentic assessment in selected senior high schools in the central region of Ghana. J. Soc. Sci. Humanit. 1, 249–257.
Boud, D., and Dawson, P. (2021). What feedback literate teachers do: an empirically-derived competency framework. Assess. Eval. High. Educ. 48, 158–171. doi: 10.1080/02602938.2021.1910928
Boud, D., Dawson, P., Bearman, M., Bennett, S., Joughin, G., and Molloy, E. (2018). Reframing assessment research: through a practice perspective. Stud. High. Educ. 43, 1107–1118. doi: 10.1080/03075079.2016.1202913
Brown, H. D., and Abeywickrama, P. (2010). Language assessment principles and classroom practices. (2nd ed.), New York: Pearson Education, Inc.
Brown, G. T. L., Gebril, A., and Michaelides, M. P. (2019). Teachers’ conceptions of assessment: a global phenomenon or a global localism. Front. Educ. 4:16. doi: 10.3389/feduc.2019.00016
Capan, M. M., Lettner, S., Bäwert, A., Puttinger, C., and Holzinger, A. (2020). Pursue today and assess tomorrow-how students’ subjective perceptions influence their preference for self- and peer assessments. BMC Med. Educ. 20:479. doi: 10.1186/s12909-020-02383-z
Carless, D., and Boud, D. (2018). The development of student feedback literacy: enabling uptake of feedback. Assess. Eval. High. Educ. 43, 1315–1325. doi: 10.1080/02602938.2018.1463354
Crisp, G. T. (2012). Integrative assessment: reframing assessment practice for current and future learning. Assess. Eval. High. Educ. 37, 33–43. doi: 10.1080/02602938.2010.494234
Desalegn, C. (2014). Practices of assessing graduate students’ learning outcomes in selected Ethiopian higher education institutions. J. Int. Coop. Educ. 16, 157–180. doi: 10.15027/36172
Earl, L. (2003). Assessment as learning: Using classroom assessment to maximize student learning. Thousand Oakes, Corwin Press.
Farrel, A. (2017). Assessment for life-long learning, academic practice, Dublin: University of Dublin.
Gebremariam, H. T., and Gedamu, A. D. (2022). Assessment for learning strategies: Amharic language teachers’ practice and challenges in Ethiopia. Int. J. Lang. Educ. 6, 128–140. doi: 10.26858/ijole.v6i2.20505
Gedamu, A. D., and Shewangezaw, G. L. (2020). Secondary school teachers’ and students’ perspectives on cooperative group work assessment challenges in Ethiopia. Afr. J. Teach. Educ. 9, 104–119. doi: 10.21083/ajote.v9i0.6083
Hailay, T. (2017). Investigating the practices of assessment methods in Amharic language writing skill context: the case of selected higher education in Ethiopia. Educ. Res. Rev. 12, 488–493. doi: 10.5897/ERR2017.3169
Hargreaves, E. (2005). Assessment for learning? Thinking outside the (Back) box. Camb. J. Educ. 35, 213–224. doi: 10.1080/03057640500146880
Hussain, S., Idris, M., and Akhtar, Z. (2021). Perceptions of teacher educators and prospective teachers on the assessment literacy and practices. Gomal Univ. J. Res. 37, 71–83. doi: 10.51380/gujr-37-01-07
Imran, S. (2012). Authentic assessment. Available at: http://ipankreview.wordpress.com/ (Accessed July 2015).
Kangaslampi, R., Asikainen, H., and Virtanen, V. (2022). Students’ perceptions of self-assessment and their approaches to learning in university mathematics. LUMAT 10, 1–22. doi: 10.31129/LUMAT.10.1.1604
Korstjens, I., and Moser, A. (2018). Series: practical guidance to qualitative research. Part 4: trustworthiness and publishing. Eur. J. Gen. Pract. 24, 120–124. doi: 10.1080/13814788.2017.1375092
Lysaght, Z., and O’Leary, M. (2013). An instrument to audit teachers’ use of assessment for learning. Irish Educ. Stud. 32, 217–232. doi: 10.1080/03323315.2013.784636
Lysaght, Z., and O’Leary, M. (2017). Scaling up, write small: using an assessment for learning audit instrument to stimulate site-based professional development, one school at a time. Assessment Educ. Princ. Policy Pract. 24, 271–289. doi: 10.1080/0969594X.2017.1296813
Lysaght, Z., O’Leary, M., and Ludlow, L. (2017). Measuring teachers’ assessment for learning (AfL) classroom practices in elementary schools. Int. J. Educ. Methodol. 3, 103–115. doi: 10.12973/ijem.3.2.103
Maclellan, E. (2017). How convincing is alternative assessment for use in higher education? Assess. Eval. High. Educ. 29, 311–321. doi: 10.1080/0260293042000188267
Magulod, G. C. (2019). Learning styles, study habits, and academic performance of Filipino university students in applied science courses: implications for instruction. J. Technol. Sci. Educ. 9, 184–198. doi: 10.3926/jotse.504
Mohamed, M., Abd Aziz, M. S., and Ismail, K. (2021). “Assessment for learning” practices amongst the primary school English language teachers: a mixed methods approach. JSSH 29, 1875–1900. doi: 10.47836/pjssh.29.3.21
Molloy, E., Boud, D., and Henderson, M. (2020). Developing a learning-centered framework for feedback literacy. Assess. Eval. High. Educ. 45, 527–540. doi: 10.1080/02602938.2019.1667955
Monteiro, V., Mata, L., and Santos, N. N. (2021). Assessment conceptions and practices: perspectives of primary school teachers and students. Front. Educ. 6:631185. doi: 10.3389/feduc.2021.631185
Mussawy, S. A. J., Rossman, G., and Haqiqat, S. A. Q. (2021). Students’ and teachers’ perceptions and experiences of classroom assessment: a case study of a public university in Afghanistan. High. Learn. Res. Commun. 11, 22–39. doi: 10.18870/hlrc.v11i2.1244
Nieminen, J. H., Bearman, M., and Tai, J. (2023). How is theory used in assessment and feedback research? A critical review. Assess. Eval. High. Educ. 48, 77–94. doi: 10.1080/02602938.2022.2047154
Pattalitan, A. P. Jr. (2016). The implications of learning theories to assessment and instructional scaffolding techniques. Am. J. Educ. Res. 4, 695–700. doi: 10.12691/education-4-9-9
Popham, W. J. (2017). Classroom assessment: what teachers need to know. (2nd ed.) Boston, MA: Pearson Education, Inc.
Rose, S., Spinks, N., and Canhoto, A. I. (2015). “Formula for determining sample size” in Management research: Applying the principles (Abingdon, UK: Routledge, Taylor and Francis)
Schildkamp, K., van der Kleij, F. M., Heitink, M. C., Kippers, W. B., and Veldkamp, B. P. (2020). Formative assessment: a systematic review of critical teacher prerequisites for classroom practice. Int. J. Educ. Res. 103:101602. doi: 10.1016/j.ijer.2020.101602
Serin, G. (2015). Alternative assessment practices of a classroom teacher: alignment with reform-based science curriculum. Eurasia J. Math. Sci. Technol. Educ. 11, 277–297. doi: 10.12973/eurasia.2015.1330a
Sintayehu, B. (2016). The practice of continuous assessment in primary schools: the case of Chagni, Ethiopia. J. Educ. Pract. 7, 24–30.
Subheesh, N. P., and Sethy, S. S. (2020). Learning through assessment and feedback practices: a critical review of engineering education settings. EURASIA 16:em1829. doi: 10.29333/ejmste/114157
Sultana, N. (2019). Language assessment literacy: an uncharted area for the English language teachers in Bangladesh. Lang. Test. 9, 1–14. doi: 10.1186/s40468-019-0077-8
Swain, J., and King, B. (2022). Using informal conversations in qualitative research. Int. J. Qual. Methods 21. doi: 10.1177/16094069221085056
Teshome, N. (2016). The practice of student assessment: the case of College of Natural Science, Addis Ababa university, Ethiopia. J. Educ. Pract. 6, 141–150.
Winarso, W. (2018). Authentic assessment for academic performance: study on the attitudes, skills, and knowledge of grade 8 mathematics students. Malikussaleh J. Math. Learn. 1, 1–8. doi: 10.29103/mjml.v1i1.579
Yan, Z., and Carless, D. (2022). Self-assessment is about more than self: the enabling role of feedback literacy. Assess. Eval. High. Educ. 47, 1116–1128. doi: 10.1080/02602938.2021.2001431
Keywords: assessment for learning framework, assessment for learning practice, learning improvement, primary school teachers, students’ learning progress
Citation: Gebremariam HT and Gedamu AD (2023) Primary school teachers’ assessment for learning practice for students’ learning improvement. Front. Educ. 8:1145195. doi: 10.3389/feduc.2023.1145195
Edited by:
Stephen Woodcock, University of Technology Sydney, Australia
Reviewed by:
Saiful Akmal, Universitas Islam Negeri Ar-Raniry, Indonesia
Fred Kofi Boateng, University of Ghana, Ghana
Copyright © 2023 Gebremariam and Gedamu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Hailay Tesfay Gebremariam, hailay33@gmail.com