- Department of Educational Sciences Specializing in Primary and Secondary School and Special Needs Education, Faculty of Education, Kristianstad University, Kristianstad, Sweden
Assessments have been shown to influence students’ learning and motivation. To avoid negative consequences, different strategies have been proposed, such as making a more distinct separation between assessments for summative and formative purposes. In this way, situations are created that are exclusively formative, where students may focus on learning without worrying about test scores or grades. This study has investigated perceptions of such a context, where grading is kept apart from assessment for formative purposes. Semi-structured interviews were performed with 19 participants at five so-called “adult education colleges.” At these colleges, students’ “grades” are determined as a joint decision by all the teachers together at the end of the academic year, and no grades are communicated to the participants beforehand. Data from the interviews were analyzed with qualitative thematic analysis, identifying four themes relating to participants’ perceptions of assessment. Findings suggest that participants perceive a lack of feedback on overall progress, which limits their possibilities to regulate their learning. Findings also suggest that the participants do not always know when, or on what grounds, they are being summatively assessed, leading to less productive study strategies. In several respects, the consequences of this particular assessment context thereby seem similar to those reported from the ordinary Swedish school system, even though the latter is greatly influenced by numerous summative assessment events.
Introduction
Assessments can have a strong influence on student learning and motivation, in ways that are both positive and negative. For example, “formative assessment” has gained significant attention due to its claimed support for improving student learning (e.g., Black and Wiliam, 1998; Carless, 2016), while assessments for summative purposes (or “summative assessments”), such as grading or final exams, have gained most attention in relation to potential negative effects (e.g., Harlen and Deakin Crick, 2003; Koenka et al., 2019). Despite these differences in attention to positive and negative effects, in reality there are (of course) no guarantees of positive effects when introducing formative assessment practices (e.g., Dunn and Mulvenon, 2009; Bennett, 2011; Levinsson et al., 2013), and the motivation provided by summative assessments is likely to have positive effects for at least some students.
Even if summative assessments may have positive consequences for some students, there are undoubtedly negative effects as well. Different strategies have therefore been proposed to avoid these negative consequences, such as abandoning the use of grades or sharing grading criteria with students. There have also been suggestions to make a more distinct separation between assessments for summative and formative purposes (see e.g., Harlen and James, 1997), for instance by creating situations or periods that are exclusively formative, so that students may focus on learning without worrying about whether any mistakes or requests for help may negatively influence their grades.
This study aims to investigate the perceptions of assessment in a context where such a distinction is made between assessments for summative and formative purposes, by keeping grading apart from assessments for formative purposes. Participants1 at so-called “adult education colleges,” where no grades are communicated to the participants during the academic year, have therefore been interviewed about their perceptions of assessment.
Background
Consequences of Summative Assessments
The fact that summative assessments may affect student learning and motivation is well documented (e.g., Harlen and Deakin Crick, 2003; Koenka et al., 2019). It has also been shown that the nature and severity of these consequences may differ, depending on, for instance, the previous achievement level of the students (Koenka et al., 2019). As an example, in a review of research on grading, Anderson (2018) turns attention to an early, quasi-longitudinal study by Kifer from 1975, involving students (n = 214) at four grade levels (2, 4, 6, and 8). At each level, students who had been in the top or bottom 20% of their class each year, respectively, were assigned to different experimental groups and administered an academic self-concept (ASC) scale. For the second-grade students, the ASC scores of the two groups did not differ significantly. However, by the eighth grade, the differences between the two groups were both substantial and statistically significant. Furthermore, while the mean ASC scores of students in the top 20% did not change much from grade to grade, for students in the bottom 20% there was a steady decline (Kifer, 1975).
Similar findings have been reported more recently by Klapp (2015), who used data from more than 8,500 students to compare grades from secondary-school students who were previously graded in primary school, with grades from students who were not. Results showed that low-achieving students who were previously graded received lower subsequent grades, as compared to “ungraded” students. At first, there was a weak positive effect of grading for high-ability students, but this effect grew weaker over time and was later negligible. As proposed by Klapp (2017), the greater negative impact on the low-achieving students can be explained by the loss of resources, such as academic self-confidence, for these students, which the students need to “keep, gain, and develop a sense of self-worth and positive self-confidence in order to believe that they can manage, learn, and achieve in school” (p. 371).
From studies like these, it appears that a fundamental problem with grades is that they constantly remind low-achieving students of their incapacity to meet the expectations placed upon them. As a result, their academic self-confidence tends to deteriorate over time, and at some point the grades may also begin to attach to the students themselves, so that they view themselves as “D” or “F” students (Anderson, 2018).
It has been suggested that this effect of grades may to some extent be remedied by sharing explicit criteria with the students. For example, systematic reviews of research on rubrics suggest that the transparency provided by the use of such instruments may, among other things, support student self-regulation (Jonsson and Svingby, 2007; Panadero and Jonsson, 2013). This means that students can use the rubrics to plan, monitor, and evaluate their task performance, thereby helping them not only to improve their performance, but also to see that the assessment of their work is based on (more or less) predictable standards, not on chance, the teacher’s discretion, or personal attributes (e.g., Panadero et al., 2016). Ideally, the students should not identify themselves with the grades they receive, or see them as fixed, but perceive them as possible to improve.
There are some potential problems with this assertion, however. A recent meta-analysis on the effects of self-assessment interventions on self-regulation strategies showed that students using rubrics (or similar instruments) reported lower self-efficacy after the intervention than participants not using them (Panadero et al., 2017). According to the authors, this could be an effect of low-achieving students becoming aware of the complexity of high-quality performance and therefore reporting lower self-efficacy. Another problem is that studies on the use of rubrics typically involve “assessment criteria,” not “grading criteria.” While the term “criterion-referenced assessment” refers to judging or estimating the quality of student performance on individual tasks according to criteria, “grading” means making a decision about students’ overall attainment based on data accumulated throughout a semester, course, or other period of time. Grading criteria are therefore likely to be more overarching and abstract than assessment criteria, which is likely to make them more difficult for students to understand and use.
Besides contributing to the decrease in academic self-confidence among low-achieving students, another problem with grades is that the presence of a grade may divert students’ attention away from more detailed and formative feedback (e.g., Butler, 1988; Koenka et al., 2019). As noted by Lipnevich and Smith (2009), although documented in a number of studies, the explanations for the negative effects of grades may differ, involving for instance different aspects of motivation. In their own study, where students’ perceptions of different feedback conditions were explored, low grades elicited negative affect and had adverse consequences for students’ sense of self-efficacy. High grades, on the other hand, were perceived to decrease the motivation to improve.
The negative effects of grades among low-achieving students have led some to suggest that the use of grades should be abandoned altogether (e.g., Kohn, 2011), although this is probably farfetched and may also have other negative consequences. Others have therefore argued for making a clear distinction between assessments for summative and formative purposes, while at the same time trying to find productive ways for these different assessment purposes to coexist. Harlen and James (1997), for instance, suggest making use of the same “evidence” for both purposes. In this way, data on student performance, collected as part of teaching for formative purposes, could later be reviewed for summative purposes, but in relation to criteria that are common for all students. The use of portfolios has been argued to be especially well suited for such assessment practices, since work collected in the portfolio can be used to provide feedback to the students in relation to ongoing work, as well as being used later in assessing overall attainment (e.g., Harlen, 2005; Lauvås and Jönsson, 2019). Assessments for summative and formative purposes are thereby kept apart, providing a space where students may focus on learning without worrying about assessment for summative purposes.
Assessment for purely formative purposes, without the presence of grades, is sometimes called “formative-only” (Gibbs, 2010). The use of such “formative-only” assessment situations (or more extended “formative-only” periods) has recently been advocated as a means to help students focus more clearly on learning, by relieving them from the pressure of wondering whether their efforts will be graded or not (e.g., Lauvås and Jönsson, 2019). The latter is a growing concern (at least in Sweden, where this study is situated), since studies where students claim to feel constantly monitored by their teachers are accumulating. This means that there is very limited room for learning, or for making mistakes, without students worrying about how it might affect their grades. As expressed by the students in a study by Hirsh (2020), the grades are based on “everything the teachers see” (p. 98), resulting in students refraining from asking for help, for fear of negative consequences.
Even if “formative-only” situations could be deemed desirable according to the abovementioned research, it may still be difficult to find such situations in ordinary schools, as there is often a drive toward providing students with grades on their assignments (e.g., Löfgren et al., 2021). This drive is often reinforced by the students themselves, particularly if they do not receive any proper feedback as a substitute when grades are removed (Smith and Gorard, 2005). However, in the Nordic countries there is an alternative school form, called “independent adult education colleges” (or “Folkhögskola” in Swedish), which exists in parallel to the main school system. In this school form, summative assessments are based on a holistic and collective judgment of the students’ study ability, rather than on performance in individual subjects. Students’ grades are therefore decided at the end of the academic year, as a joint decision by all teachers, which means that individual assignments cannot be graded according to the grading criteria. Independent adult education colleges thereby offer a naturally occurring context, where “formative-only” assessment situations should be the norm, and where the negative influence of assessments for summative purposes could be expected to be less pronounced.
Summative Assessment in the Swedish School System
In 2011, new curricula for compulsory and upper secondary school were introduced in Sweden as part of a major reform package for schools. In addition to new curricula, the reforms included an extended grading scale, an expanded national testing program, and the provision of formal grades from an earlier age. Furthermore, the new grading scale was accompanied by extensive descriptions of performance standards (so called “knowledge requirements”) in all subjects. As an example, the requirements for attaining grade E (i.e., the lowest passing grade) in Biology for year 9 in compulsory school consist of no less than 324 words. In addition, there is a specification of “central content” in the same subject, consisting of a similar number of words.
The reforms have greatly increased the focus on assessments for summative purposes in the Swedish school system. For example, the Swedish National Agency for Education (2015) performed a longitudinal study, in which teachers at nine compulsory schools were studied through observations, interviews, and questionnaires. The study shows that the increased weight attached to grading criteria has limited teachers’ professional freedom when planning and teaching, as compared to the time before the reform package. Very similar findings are presented by Wahlström and Sundberg (2015), who report on findings from a questionnaire answered by 1,887 teachers. There are also studies suggesting that the introduction of grades from an earlier age is associated with increased school-related stress and reduced academic self-esteem among students, leading to an increase in psychosomatic symptoms and decreased life satisfaction (Högberg et al., 2021).
Interview studies with students in various stages of the school system (e.g., Sivenbring, 2016; Pérez Prieto and Löfgren, 2017; Vogt, 2017; Hirsh, 2020; Löfgren et al., 2021; Nygren, 2021) also paint a quite coherent picture of assessments being constantly present in the minds of Swedish students, both within and outside the classroom. For example, although the guidelines from the Swedish National Agency for Education (2018) emphasize an integrative and holistic approach to grading, the digital educational platforms used by most schools present portions of the grading criteria as separate items in a matrix format. Since the educational platforms can be accessed by the students and their legal guardians through smartphones and other devices, it has become common practice to communicate assessment and progress information via these matrices. The students may therefore keep track of their progress on a day-to-day basis (Löfgren et al., 2021). As a consequence, students invest a lot of effort in deciphering the grading criteria, so that they may understand what is expected of them. Students also claim to feel constantly monitored by their teachers, which, as mentioned above, leaves little room for asking questions or for making mistakes without worrying about how it might affect the grades.
To handle the panoptical situation described above, where they are constantly assessed, students try to improve or safeguard their grades by modifying their behavior and influencing their social relationships with the teachers. Although it might seem more productive for students to allocate all resources to academic learning, the students do not seem to think that the expectations are clear enough (and/or do not trust their teachers enough) to rely exclusively on academic achievements, and consequently invest in social strategies as an important part of their study strategies. For example, students in a study by Löfgren and Löfgren (2016), exploring students’ experiences of being graded in year 6 (age 12–13), describe the importance of being attentive to what the teacher wants in order to receive higher grades, including being well-behaved and doing what you are told. Similarly, students in year 9 (age 15–16) claim that their grades do not (at least not exclusively) reflect the quality of their performance, but also individual teachers’ interpretations of the “knowledge requirements.” Furthermore, according to the students, the grades are used as rewards for effort and desirable behavior (Vogt, 2017). Sivenbring (2016), who also interviewed students in year 9 about assessment and grading, writes that:
The strategic work to receive higher grades appears as investments the students do in order to make a good impression. Explicit resistance to assessment is not present in their narratives. The fact that assessment and grading is imperative means that resistance is always counter-productive. Through assessment the students become dependent upon their teachers (p. 223, translated from Swedish).
Taken together, the increased focus on summative assessments in the Swedish school system has several important consequences for students’ learning and wellbeing, which may be especially pronounced for low-performing students. As indicated by research, some of the most prevalent consequences are lowered achievement among low-ability students, who are constantly reminded of their failures and whose academic self-concept decreases, as well as increased school-related stress and the promotion of less productive study strategies.
Materials and Methods
Context: Assessment in Nordic Independent Adult Education Colleges
The first independent adult education colleges in Sweden were established in the second half of the nineteenth century, inspired by Danish predecessors. The main aim of this school form was to provide a kind of higher education for those who otherwise lacked access to formal higher-education institutions, such as the universities – hence the name “Folkhögskola,” which translates literally to “people’s” (or “folk”) university. The term “independent” comes from the fact that these colleges are not part of the national school system. Instead, they are mostly non-profit organizations.
Today, there are more than 150 independent adult education colleges in Sweden, which provide a number of different courses depending on the specific focus of the individual college, such as music, creative writing, art, or handicraft. However, they also provide “general courses” that are equivalents of courses in upper-secondary school, such as mathematics, English as a foreign language, and science subjects. These general courses provide the participants with the necessary qualifications for applying to higher education. However, the grading system in the independent adult education colleges differs radically from that in upper-secondary school and other forms of adult education, and some of these differences are relevant for this study.
First, in the adult education colleges there is a four-level grading scale, where the participants receive a holistic “grade” based on their participation in all the courses they have taken. This “grade” is determined as a joint decision by all the teachers together at the end of the academic year, which means that individual assignments cannot be graded.
Second, the criteria for this holistic “grade” do not only include subject knowledge and skills, but also “Capabilities for analysis, processing, and overview,” “Ambition, perseverance, and capability to organize studies,” and “Social skills.”
Third, although there are explicit grading criteria, the grades are also norm-referenced to enable national comparability. The four-level grading scale is therefore converted to numbers (1–4), where the average for each individual college during an academic year must be in the range of 2.7 ± 5 percent (Swedish National Council of Adult Education, 2017). Adjusting the grades according to this range is also done at the end of the academic year, when all preliminary “grades” have been assigned, which means that no numbers can be communicated to the participants beforehand.
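As a worked illustration of this constraint (a sketch that assumes the ± 5 percent refers to the target value 2.7 itself, and that denotes the yearly college mean of the converted grades by $\bar{g}$, a symbol introduced here only for convenience), the mean would have to satisfy

$$0.95 \times 2.7 = 2.565 \;\leq\; \bar{g} \;\leq\; 2.835 = 1.05 \times 2.7,$$

so a college whose preliminary “grades” averaged, say, 3.0 would need to lower some of them before the final numbers are reported.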
Since individual assignments cannot be graded, and no information about the overall “grade” can be communicated to the participants until the end of the academic year, grading is kept apart from assessments for formative purposes at these colleges. It could therefore be assumed that the negative consequences of assessments for summative purposes would be less pronounced, and that participants at adult education colleges would perceive the assessment regime differently from how students perceive assessment in the ordinary Swedish school system. The purpose of this study is to investigate the perceptions of this particular assessment context, by collecting interview data among participants at adult education colleges.
Sample and Interviews
This is an interview study with semi-structured interviews and qualitative thematic analysis. The sample consists of 19 participants enrolled at five different adult education colleges (13 females and six males), within the “general courses” (i.e., equivalents of courses in upper-secondary school). The participants were recruited by contacting college principals, who forwarded the invitation to their participants. Participants volunteering to take part in the study (i.e., a convenience sample) were then contacted by email. All colleges contacted were located in the same geographical region, and also cooperate in matters such as teacher professional development, which was thought to increase the likelihood of having similar assessment practices, making the interviews comparable across colleges.
All respondents were informed about the purpose of the study and that their participation was voluntary and anonymous. Anonymity was secured by not collecting any personal information about the participants. Furthermore, only recorded sound was used during the analysis (i.e., not video). Consent was given orally and documented on digital video.
All of the interviews were semi-structured and followed a common interview protocol with nine main questions, such as “What do you think are the largest differences in terms of assessment between adult education colleges and upper-secondary school?” and “The criteria for ‘grades’ in adult education colleges are broader than the ‘knowledge requirements’ in upper-secondary school, by including study ability and social skills. What are your thoughts on this?” In a few cases, the interviews were performed with 2–3 participants simultaneously.
The interviews were carried out entirely online and were audio recorded. On average, the interviews lasted approximately 22 min (5 h and 33 min in total). In line with current ethical guidelines, no data apart from what was needed to serve the purpose of the study has been collected. As a consequence, no personal information about the participants has been collected, only their perceptions of the assessment context.
Analysis
The interviews were analyzed with conventional thematic analysis, which is a method for identifying, analyzing, and interpreting patterns of meaning (or “themes”) within qualitative data (Clarke and Braun, 2017). The analysis was mainly inductive in nature and followed the procedure outlined by Braun and Clarke (2006), which means that the following steps were taken: The first step was to listen to the audio data and create time logs in spreadsheets, so that the different parts of the interviews could be searched and organized. Second, interesting features of the data, where respondents described how they perceived individual assessment situations or the assessment context at large, were marked across the data set. For example, the following statements were coded as relating to the perceived lack of information about overall progression: “In upper-secondary school you can follow your progression [while in adult education colleges] the final grade may come as a shock” (respondent #1); “You do not know at which level you perform. We have no such discussions with the teachers. With grades [in school], you can see what you need to do” (respondent #3). Third, respondents’ statements were searched for themes common to most of the interviews, and/or strongly emphasized as significant by several respondents. As an example, one of these themes, which includes the abovementioned statements, focuses on the perception of being “blindfolded” by the lack of information about overall progress, making it difficult for the participants to regulate their learning. All data relevant to the potential themes was then collated and checked in relation to the entire dataset. Fourth, descriptions of the themes were made, and compelling extract examples were selected. Fifth, a final analysis of the selected extracts, relating back to the research questions and literature, was made. The descriptions of the themes, as well as the extracts, were then translated into English by the researcher.
Preliminary findings were shared with the participants of the study, so that they could comment on the interpretations of the data. However, only one participant chose to provide such input.
Results
According to most participants, adult education colleges are generally a better place for learning than upper-secondary school, which is the school form that the majority of participants most recently attended. While upper-secondary school is described in terms of a factory, where you move from test to test along a metaphorical assembly line, retaining nothing in long-term memory, the adult education colleges are mostly described in terms of personal development and deep processing. As an example, participant #13 mentions reflection assignments, group discussions, different perspectives, and less result-oriented teaching. She claims to have learnt more in one semester than during all her previous years in school. Participant #14 characterizes adult education colleges as providing an individually tailored education in a “forgiving environment.”
Despite the more favorable conditions for learning at the adult education colleges, as perceived by the participants, they are not unequivocally positive about the assessment practices. On the contrary, most participants feel quite ambivalent about how they are assessed. This ambivalence concerns four overarching themes, which are described below, along with selected extracts from the interviews.
Lacking Support for the Regulation of Learning
The most prevalent and most strongly emphasized theme in the interviews concerns the fact that individual assignments are not graded. The participants therefore perceive that they do not receive sufficient information about their progress. Furthermore, according to most of the participants, the staff at these colleges have decided not to share any information about overall progression beforehand:
The teachers are not allowed to reveal the grade in advance. (#14)
The teachers are clear about not telling or giving any predictions for the grade. (#16)
On the one hand, several participants express their disapproval of the extreme focus on test results and grades in Swedish upper-secondary school, as well as the instrumentalist approaches to learning that follow from this, and they claim to accept that some degree of uncertainty is needed in order to develop and grow. On the other hand, however, they seem to think that the adult education colleges may have gone too far in the opposite direction. As a consequence of receiving neither grades on individual assignments nor any other information about their overall progression, the participants have difficulty keeping track of their progress and regulating their learning. One of the participants (#1) compares the situation to tightrope walking, saying that although she is able to maintain her balance, it feels like she is blindfolded and does not know where she is heading. She also describes the response from a teacher, when asking for feedback on her progress:
I can’t say anything about your progress, because then it would be me telling you what to do./…/Then it wouldn’t be you making progress. (#1)
Besides limiting their ability to regulate their learning, the lack of information about their progress has several other consequences for the participants, most commonly expressed as constant stress and confusion. For example, one participant (#4) describes a steady state of anxiety due to the vague nature of the grading system, as she does not know what is expected of her. Again, the participants seem to understand and acknowledge the idea behind the grading system, but object to how it is enacted in practice:
It’s supposed to be inspiring, but instead becomes a stress factor, since, on the one hand, performance plays a big role, but on the other hand, it doesn’t have to. (#5)
I want feedback. It doesn’t have to be a number, but information about if there is something you need to do differently. (#6)
Vague Assessment of Fuzzy Criteria
Another strongly emphasized theme in the interviews concerns the inclusion of criteria for the assessment of “Ambition, perseverance, and capability to organize studies,” and “Social skills.” Similar to the previous theme, there seems to be an acceptance among the participants of the idea of including a broader set of criteria and not only focusing on achievement, but disapproval of how it is enacted in practice.
One of the principal problems with these “fuzzy criteria,” as described by the participants, is that they do not know what is actually assessed. Some participants think that it is their personality that is being evaluated, which leads to a sense of resignation and loss of control:
I cannot become a completely new person. (#15)
The grade is not in our hands. We only do what we are told to. (#9)
I shouldn’t be assessed on my personality. I should be assessed on what I do. (#4)
Other participants talk about vague assessments contributing to a “culture of silence,” where they are afraid of speaking their mind, since this could be considered displaying inadequate social skills. Rather, the participants focus on improving their “social performance:”
When you don’t understand the grading criteria, you have to invest in being nice. (#6)
Fawning as an educational strategy: I’ll attend the Christmas festivities in order to get a better grade. (#5)
Implicitly Conveying a Normative Notion of the “Good Student”
In relation to the inclusion of criteria for “Ambition, perseverance, and capability to organize studies” and “Social skills,” several participants also raise the question of normativity: What do the teachers envision when evaluating them as being social, ambitious, or capable of organizing their studies? Most participants perceive these criteria not only as fuzzy (“It can mean a lot of different things,” #17), but as in need of being considered in relation to, and adjusted to, each individual. If not, there is a risk of assessments becoming normative, even though the participants have very different prerequisites. As an example, one of the participants reflects on the assessment of planning skills:
Some teachers look at whether you submit your assignments in advance or not. But this depends on what is currently going on in your life. It does not mean I’m a bad planner./…/You may have an idea about what a good student is, but not everyone needs to plan their studies in the same way. (#1)
Most participants have similar thoughts on the assessment of social skills. In particular, they have either been explicitly told, or have sensed, that this criterion is especially important in relation to group work. A clear manifestation of having good social skills is when you help fellow participants during group work. Again, however, the Platonic ideal of group work and “learning together” clashes with the real world of the participants. For some participants, who describe themselves as autistic or introverted, the idea of being assessed on social skills is problematic because they think that they may be disadvantaged in group situations. A number of other participants describe what they perceive as an “unhealthy pressure” (#18) to help others, even in situations when you really need to focus on yourself:
You are forced to take responsibility for others during group work. (#19)
Unacknowledged Competition
A final theme, very strongly emphasized in some of the interviews, is that the participants compete for the highest “grades.” As described above, although there are explicit grading criteria, the “grades” are also norm-referenced to enable national comparability. The overall mean for each individual college during an academic year must therefore be within a specified range. This effectively limits the number of high “grades” that can be awarded and introduces not only a competition for the highest “grades,” but also a paradoxical situation where the participants are asked to support their competitors. This is recognized in the interviews:
It is considered very important that we are friends and help each other, but at the same time we may reduce our own chances of getting high grades. (#5)
According to some participants, this paradoxical situation leads to behaviors where the participants help each other in front of the teachers, while creating tensions below the surface:
It leads to a behavior where you help others in front of the teacher/…/At the same time, help is withheld via other channels, which the teacher does not see. It’s completely bizarre. (#10)
Discussion
The purpose of this study was to investigate how participants at adult education colleges perceive assessments in a context where grading is kept apart from assessments for formative purposes. Data on participants’ perceptions were collected through interviews and analyzed with conventional thematic analysis. The findings from the analysis suggest that the participants are generally very pleased with the learning environment at the colleges, which they contrast with the “assembly line experience” from upper-secondary school. While the latter focuses primarily on test results and grades, and thereby encourages superficial learning, adult education colleges to a greater extent foster deep and long-term learning, according to the participants.
However, their perception of the assessment situation differs from their generally positive perception of the learning environment. Although they recognize the benevolence of the ideal form, where the assessment is thought to support learning and personal development, they object to how it is enacted in practice.
One of the main themes in the data is that the participants of this study think that the lack of feedback on overall progress counteracts their possibilities to regulate their learning. Their situation is therefore quite different from the general situation in the Swedish school system, where students are virtually flooded with assessment feedback (although often in an aggregated, summative form). This means that, on the one hand, low-ability participants at adult education colleges are not constantly reminded of their failures, which could potentially save their academic self-confidence from deteriorating (e.g., Klapp, 2017; Anderson, 2018). On the other hand, the participants perceive that they have no point of reference for adjusting their academic self-confidence or self-efficacy. Although the participants do not seem to think that their performance decreases over time, as low-ability students’ achievement may do in school as a response to grading (Klapp, 2015), they are not able to take advantage of any progress during the academic year by, for instance, improving their self-efficacy or re-allocating their efforts to areas in need of improvement. Similarly, if the participants are not progressing as expected, they may not realize this before it is too late.
Other major themes in the data are the use of “fuzzy criteria” and implicit standards. While explicit and shared criteria could, in theory, bring a sense of agency to the students, by clarifying expectations and supporting strategies for self-regulated learning (e.g., Panadero et al., 2016), the participants think that the assessment of constructs such as ambition, perseverance, and social skills has several disadvantages, such as promoting stress and a “culture of silence,” since they do not know what is actually assessed (e.g., personality, performance, or conformity to norms). The uncertain nature of these constructs also risks establishing a norm about who a good student is, and how a good student should act, that does not take individual differences into account and puts “unhealthy pressure” on the participants to conform to such a norm. The participants give examples of situations where they act as they think is expected of them, even if they feel disadvantaged by this behavior themselves, for instance by helping others when they would in fact need time for their own learning. The fact that the “grades” are norm-referenced also means that the participants are asked to support their competitors, leading to behaviors where the participants help each other in front of the teachers.
This situation has several parallels to the narratives provided by school students in interview studies about assessment and grading (e.g., Löfgren and Löfgren, 2016; Sivenbring, 2016; Vogt, 2017). Although the grading system is different in adult education colleges, some of the effects on student behavior are apparently the same. In studies investigating students’ perceptions of assessment in Swedish schools, students claim to adjust themselves and their behavior to the different teachers and to the norms that apply in different classrooms. Since the grading criteria are difficult to interpret, and the grading process is both subjective and opaque, the students feel that they cannot rely on academic achievement alone. Instead, they need to “play safe,” by also employing social strategies and making a good impression in front of the teachers. The situation described by the participants in the colleges is in several respects no different, suggesting that the different grading systems have similar consequences for learners.
In the Swedish school system, students seem overwhelmed with formal assessment situations (such as tests), which generate formal feedback, often in the shape of grades or in a matrix format. Although these formal assessments are stressful, high-performing students are able to cope with them using productive self-regulation strategies (Löfgren et al., 2021). In adult education colleges, these formal assessment situations, and this kind of aggregated feedback, seem to be less prevalent. However, school students also testify to the presence of informal assessment situations, where students feel monitored and think that everything they say or do might affect their grades (e.g., Vogt, 2017; Hirsh, 2020). It is primarily in response to this latter situation that students feel they have to display a flawless surface, since they do not know when or on what grounds they are being assessed. This is very similar to the assessment situation experienced by participants at adult education colleges. Since adult education colleges have reduced the number of formal assessment events, it is not always clear to the participants how the informal assessments contribute to their “grade.” The final summative assessment is therefore constantly present in the minds of the participants, contributing to stress and promoting less productive study strategies.
Conclusion
Findings from this study suggest that assessments at adult education colleges are mainly informal, in contrast to the heavy reliance on formal assessment situations and provision of aggregated feedback prevalent in the other parts of the Swedish school system. This informal assessment practice has several advantages emphasized by the participants, such as the possibility to include tasks aiming for deep-processing and long-term learning, as well as providing space for individual choice. In short, most participants perceive adult education colleges as excellent institutions for learning and personal development.
However, the informal assessment practice also has some notable disadvantages. Although the participants do receive formative feedback, helping them to improve their performance on individual tasks, they perceive that there is a lack of feedback on overall progress, strongly limiting their possibilities to regulate their learning.
Another drawback is the lack of transparency in assessments, where participants at adult education colleges do not know when, or on what grounds, they are being summatively assessed. The use of “fuzzy criteria,” implicit standards, and (unacknowledged) conflicting interests in peer collaboration also contribute to stress among the participants. As a consequence, they feel compelled to employ social strategies to improve their chances of receiving high grades, by consciously adjusting their behavior and nurturing their social relationships with the teachers, in a way that appears quite similar to how students in the Swedish school system describe their situation. For participants at adult education colleges, however, the situation may be even more pronounced, owing to the lack of feedback on overall progress.
Pedagogical Implications
That assessment can have a strong influence on student learning is nothing new (e.g., Säljö, 1975; Struyven et al., 2005), and as long as summative assessments are high stakes to students, students are likely to adjust their study strategies to increase their chances of improving the outcome. The important distinction to be made is whether these adjustments are productive in relation to what is to be learnt, or whether they divert students’ attention away from productive learning. Some of the cases discussed here typically involve the latter, since students feel compelled to make use of social strategies in order to improve their chances of being awarded good grades. This raises the question of how to minimize the use of such strategies in favor of more productive study strategies. The participants themselves suggest two important features that could assist in improving the situation: progress feedback and transparency in assessment.
First, an obvious solution to the problem of not receiving progress feedback could be to provide the participants with feedback on their progress within each specific subject or course according to the grading criteria, without transforming this feedback into a number along a grading scale. Under such circumstances, the ideas of basing the grade on a joint decision by all the teachers at the end of the academic year, and of having a holistic grade for all subjects/courses, are not violated. At the same time, the participants receive feedback on their progress, and they may also develop a deeper understanding of the criteria.
Second, although the participants themselves identified the use of criteria not relating to achievement (such as social skills and ambition) as a major problem, research on assessment within the Swedish school system suggests that the underlying problem may be the obscurity of how the multitude of informal assessment situations contribute to the final “grade.” Since the participants do not know when or how they are summatively assessed, they feel monitored and in need of showcasing what they think is desirable behavior. A potential solution to this problem could therefore be to increase the transparency of the assessment of these criteria, by discussing and possibly exemplifying how the criteria may be interpreted and assessed. This should preferably not be exaggerated by providing detailed rubrics, which would limit the freedom and personal choice of the participants, who are already critical toward the use of such instruments in the Swedish school system. Still, there is an obvious need to understand these criteria and how they are assessed, so that the participants can place their trust in the teachers and use their time more productively.
Limitations, Contribution, and Future Research
This study has several important limitations, which should be kept in mind when interpreting the findings. Most importantly, the study includes a limited sample of informants who volunteered to participate, where the choice to participate may have been influenced by negative experiences of assessment. The experiences and perceptions of these participants may therefore not be representative for a larger sample of participants at adult education colleges.
The sample is also from a limited geographical region, which was thought to increase the likelihood of the colleges represented having similar assessment practices, making the interviews comparable across colleges. Whether the findings are generalizable to other colleges, or other geographical regions, is not known.
The main contribution of this study is the investigation of perceptions of assessments in a context where grading is kept apart from assessments for formative purposes, identifying problems which may have a negative influence on participants’ learning and motivation. Interestingly, participants’ perceptions of assessment in this study have apparent similarities to students’ perceptions of assessment in the Swedish school system, despite the many differences in assessment contexts. This raises questions about how persistent the effects of summative assessments and grading are. For example, how are students’ perceptions affected by differences in the relative emphasis given to assessments for formative and summative purposes? In the case described here, although there was a strong emphasis on formative assessments and learning, while grading was kept at a distance, the grades were still present in the minds of the participants during the whole academic year. Does this mean that students’ perceptions of assessment are likely to be similar in most contexts, as long as there are summative elements present at some point in time? And, if this is the case, do summative assessments need to be removed completely in order for students to focus more exclusively on learning, or can they be balanced with formative assessments in some way? These questions are of great importance to teachers in most educational contexts, as well as to future research, if we want to avoid the negative consequences of assessments and optimize the teaching for student learning and motivation. The use of thoughtful interventions to address these questions, and investigate alternative solutions, would therefore be a welcome contribution for research and pedagogical practice alike.
Data Availability Statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics Statement
Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. Consent was given orally and documented on digital video.
Author Contributions
AJ: confirms sole responsibility for study conception and design, data collection, analysis and interpretation of results, and manuscript preparation.
Conflict of Interest
The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher’s Note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Footnotes
- ^ At these colleges, the students are generally referred to as “participants.”
References
Anderson, L. W. (2018). A critique of grading: policies, practices, and technical matters. Educ. Policy Anal. Arch. 26:49. doi: 10.14507/epaa.26.3814
Bennett, R. E. (2011). Formative assessment: a critical review. Assess. Educ. Principles Policy Pract. 18, 5–25. doi: 10.1080/0969594X.2010.513678
Black, P., and Wiliam, D. (1998). Assessment and classroom learning. Assess. Educ. Principles Policy Pract. 5, 7–74. doi: 10.1080/0969595980050102
Braun, V., and Clarke, V. (2006). Using thematic analysis in psychology. Qual. Res. Psychol. 3, 77–101. doi: 10.1191/1478088706qp063oa
Butler, R. (1988). Enhancing and undermining intrinsic motivation, the effects of task involving and ego-involving evaluation on interest and performance. Br. J. Educ. Psychol. 58, 1–14. doi: 10.1111/j.2044-8279.1988.tb00874.x
Carless, D. (2016). “Scaling up assessment for learning: Progress and prospects,” in Scaling up Assessment for Learning in Higher Education, eds D. Carless, S. M. Bridges, C. K. Y. Chan, and R. Glofcheski (Germany: Springer), 3–17. doi: 10.1007/978-981-10-3045-11
Clarke, V., and Braun, V. (2017). Thematic Analysis. J. Positive Psychol. 12, 297–298. doi: 10.1080/17439760.2016.1262613
Dunn, K. E., and Mulvenon, S. W. (2009). A critical review of research on formative assessment: the limited scientific evidence of the impact of formative assessment in education. Pract. Assess. Res. Eval. 14, 1–11. doi: 10.7275/jg4h-rb87
Gibbs, G. (2010). Using Assessment to Support Student Learning. Leeds: Leeds Metropolitan University. https://eprints.leedsbeckett.ac.uk/id/eprint/2835/ (accessed December 10, 2021).
Harlen, W. (2005). Teachers’ summative practices and assessment for learning tensions and synergies. Curriculum J. 16, 207–223. doi: 10.1080/09585170500136093
Harlen, W., and Deakin Crick, R. (2003). Testing and motivation for learning. Assess. Educ. Principles Policy Pract. 10, 169–207. doi: 10.1080/0969594032000121270
Harlen, W., and James, M. (1997). Assessment and Learning: differences and relationships between formative and summative assessment. Assess. Educ. Principles Policy Pract. 4, 365–379. doi: 10.1080/0969594970040304
Hirsh, Å (2020). When assessment is a constant companion: students’ experiences of instruction in an era of intensified assessment focus. Nord. J. Stud. Educ. Policy 6, 89–102. doi: 10.1080/20020317.2020.1756192
Högberg, B., Lindgren, J., Johansson, K., Strandh, M., and Petersen, S. (2021). Consequences of school grading systems on adolescent health: evidence from a Swedish school reform. J. Educ. Policy 36, 84–106. doi: 10.1080/02680939.2019.1686540
Jonsson, A., and Svingby, G. (2007). The use of scoring rubrics: reliability, validity, and educational consequences. Educ. Res. Rev. 2, 130–144. doi: 10.1016/j.edurev.2007.05.002
Kifer, E. (1975). Relationships between academic achievement and personality characteristics: a quasi-longitudinal design. Am. Educ. Res. J. 12, 191–210. doi: 10.3102/00028312012002191
Klapp, A. (2015). Does grading affect educational attainment? A longitudinal study. Assess. Educ. Principles Policy Pract. 22, 302–323. doi: 10.1080/0969594X.2014.988121
Klapp, A. (2017). Does academic and social self-concept and motivation explain the effect of grading on students’ achievement? Eur. J. Psychol. Educ. 33, 355–376. doi: 10.1007/s10212-017-0331-3
Koenka, A. C., Linnenbrink-Garcia, L., Moshontz, H., Atkinson, K. M., Sanchez, C. E., and Cooper, H. (2019). A meta-analysis on the impact of grades and comments on academic motivation and achievement: a case for written feedback. Educ. Psychol. 41, 922–947. doi: 10.1080/01443410.2019.1659939
Lauvås, P., and Jönsson, A. (2019). Ren Formativ Bedömning [Purely Formative Assessment]. Sweden: Studentlitteratur.
Levinsson, M., Hallström, H., and Claesson, S. (2013). Problems in developing formative assessment: a physics teacher’s lived experiences of putting the ideas into practice. Assess. Matters 6, 116–142. doi: 10.18296/am.0108
Lipnevich, A. A., and Smith, J. K. (2009). “I really need feedback to learn:” students’ perspectives on the effectiveness of the differential feedback messages. Educ. Assess. Eval. Account. 21, 347–367. doi: 10.1007/s11092-009-9082-2
Löfgren, H., Alm, F., Jönsson, A., Hultén, M., Markström, A.-M., and Lundahl, C. (2021). Betyg i Årskurs 4: En studie om Bedömningspraktikerna på Skolor Som Deltagit i Försöksverksamhet Med Tidigare Betyg. [Grades in Year 4: A Study of the Assessment Practices in Schools that Have Participated in the Trial with Earlier Grades]. Fleminggatan: The Swedish National Agency for Education.
Löfgren, R., and Löfgren, H. (2016). att få Sina Första Betyg: En Rapport om Elevers Berättelser om Sina Erfarenheter av att få Betyg i Årskurs 6. [Getting Your First Grades: A Report on Students’ Narratives About their Experiences of Getting Grades in Year 6]. Fleminggatan: The Swedish National Agency for Education.
Nygren, G. (2021). Jag vill ha Bra Betyg. En Etnologisk Studie om Höga Skolresultat och Högstadieelevers Praktiker [I Want Good Grades. An Ethnological Study of Strong School Results and Practices of Pupils in Lower Secondary Education]. [Ph.D thesis]. Sweden: Uppsala University.
Panadero, E., and Jonsson, A. (2013). The use of scoring rubrics for formative assessment purposes revisited: a review. Educ. Res. Rev. 9, 129–144. doi: 10.1016/j.edurev.2013.01.002
Panadero, E., Jonsson, A., and Botella, J. (2017). Effects of self-assessment on self-regulated learning and self-efficacy: four meta-analyses. Educ. Res. Rev. 22, 74–98. doi: 10.1016/j.edurev.2017.08.004
Panadero, E., Jonsson, A., and Strijbos, J.-W. (2016). “Scaffolding self-regulated learning through self-assessment and peer assessment: Guidelines for classroom implementation,” in Assessment for Learning: Meeting the Challenge of Implementation, eds D. Laveault and L. Allal (Germany: Springer), 311–326. doi: 10.1007/978-3-319-39211-0_18
Pérez Prieto, H., and Löfgren, H. (2017). Att Ständigt bli Bedömd: Elevers Berättelser om Betyg och Nationella Prov [To be Constantly Assessed: Students’ Narratives About Grades and National Tests]. Sweden: Studentlitteratur.
Säljö, R. (1975). Qualitative Differences in Learning as a Function of the Learner’s Conception of the Task. [Ph.D thesis]. Sweden: University of Gothenburg.
Sivenbring, J. (2016). I Den Betraktades Ögon: Ungdomar om Bedömning i Skolan. [In the Eyes of the Beholder: Young People About Assessment in School]. [Ph.D thesis]. Sweden: University of Gothenburg.
Smith, E., and Gorard, S. (2005). “They don’t give us our marks”: the role of formative feedback in student progress. Assess. Educ. Principles Policy Pract. 12, 21–38. doi: 10.1080/0969594042000333896
Struyven, K., Dochy, F., and Janssens, S. (2005). Students’ perceptions about new modes of assessment in higher education: a review. Assess. Eval. Higher Educ. 30, 325–341. doi: 10.1080/02602930500099102
Swedish National Agency for Education (2015). Skolreformer i Praktiken: Hur Reformerna Landade i Grundskolans Vardag 2011-2014. [School Reforms in Practice: How the Reforms Landed in Everyday School Life in 2011-2014]. Fleminggatan: The Swedish National Agency for Education.
Swedish National Agency for Education (2018). Skolverkets Allmänna råd med Kommentarer. Betyg Och Betygsättning. [Common Advice of the Swedish National Agency for Education With Comments. Grades and Grading]. Fleminggatan: The Swedish National Agency for Education.
Swedish National Council of Adult Education (2017). Anvisningar för Folkhögskolans Studieomdöme. [Instructions for Assessment in Independent Adult Education Colleges]. Sweden: The Swedish National Council of Adult Education.
Vogt, B. (2017). Just Assessment in School: a Context-Sensitive Comparative Study of Pupils’ Conceptions in Sweden and Germany. [Ph.D thesis]. Sweden: Linnaeus University.
Keywords: adult education, grading, summative assessment, perceptions of assessment, self-regulated learning
Citation: Jönsson A (2022) Perceptions of Assessment: An Interview Study of Participants’ Perceptions of Being Assessed in Swedish Adult Education Colleges. Front. Educ. 7:836334. doi: 10.3389/feduc.2022.836334
Received: 15 December 2021; Accepted: 17 March 2022;
Published: 25 April 2022.
Edited by:
Robbert Smit, St. Gallen University of Teacher Education, Switzerland
Reviewed by:
Ricky Lam, Hong Kong Baptist University, Hong Kong SAR, China
Sylvi Vigmo, University of Gothenburg, Sweden
Copyright © 2022 Jönsson. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Anders Jönsson, anders.jonsson@hkr.se