- 1Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, Netherlands
- 2School of Education, HAN University of Applied Sciences, Nijmegen, Netherlands
- 3Faculty of Educational Sciences, Open University, Heerlen, Netherlands
Most courses in higher education finish with one or more assessments, which commonly all have to be passed. In these courses, student learning is commonly measured using conventional classroom tests; test preparation is therefore a common task for students. Compared with their prior education, students in higher education face a more complex curriculum and have to perform their studies with less guidance and limited resources. Effective and efficient test preparation is therefore important. A strategy to help students study effectively in the context of test preparation is to make appropriate control decisions, for instance to cease test preparation on specific content and (re)study other subjects that need attention. These control decisions are an important psychological aspect of the test preparation study process. We conducted a qualitative study on how students made control decisions during a test preparation period for a knowledge test in Educational Sciences. The study was conducted with students of a teacher training program at a University of Applied Sciences in the Netherlands. Results show that different progressions of learning judgments and the self-efficacy of students led to two different types of saturation. This in turn led to students making either no, inaccurate, or accurate control decisions. This article discusses the impact and practical implications of these insights.
Introduction
Higher Education (HE) is designated with the task to develop expertise in students (Ericsson et al., 1993). Most courses in HE are organized in educational modules which finish with one or more tests that, commonly, all have to be passed (Van der Vleuten et al., 2018). Testing is defined by Markus and Borsboom (2013, p. 2) as “any technique that involves systematically observing and scoring elicited responses of a person or object under some level of standardization.” Testing therefore involves a data collection mechanism that samples from a module and from which summative inferences can be legitimately drawn about the quality of performance in that module (Brown, 2019). Examples of these tests include practical assessments, reflection reports, or knowledge tests. The system in which these tests are used to draw conclusive inferences is referred to as a summative assessment system. In many courses, students’ quality of performance is measured by a high number of summative assessments aimed at measuring the cognitive component of education (Van der Vleuten and Schuwirth, 2005; Broekkamp and Van Hout-Wolters, 2007; Schuwirth and van der Vleuten, 2011; Jessop and Tomas, 2017; Baartman et al., 2022). In addition, these measurements are often expressed in grades (Kitsantas et al., 2008; Young and Fry, 2012; Dent and Koenka, 2016; Ohtani and Hisasaka, 2018). In the context of this summative assessment system, success of learning is commonly defined as ‘passing the test’, and test preparation is therefore a common task for HE students (Broekkamp and Van Hout-Wolters, 2007; Jessop et al., 2014).
Compared to their prior education, HE students are faced with a more complex curriculum and perform their study actions more individually, with less guidance from teachers (Bruinsma, 2004; Cazan, 2012). In addition, resources like time and effort available to students are limited. Moreover, educational modules and their related assessments are commonly programmed in parallel with other modules, which means that students have to prepare for multiple assessments simultaneously. As a result, test preparation not only has to be effective, in our case to pass the test, but also efficient, so students can distribute limited resources sensibly (Efklides, 2014; Ben-Eliyahu and Bernacki, 2015; Kelly-Laubscher and Luckett, 2016). An important strategy to enable distribution of limited resources is to make the decision to cease test preparation on specific content and to (re)study other content that needs attention (Bangert-Drowns et al., 1991; Garrett et al., 2007).
Sadler states there are three indispensable conditions for learning for a test: (a) understanding the standard of performance, (b) information about the performance gap, and (c) strategies to remedy that gap (Sadler, 1989, p. 121). This implies that to effectively and efficiently pass a test, (a) students must have a notion of what their goal is (the standard of performance), (b) students must make accurate monitoring judgments of their own learning (to establish the performance gap), and (c) students must be able to control their test preparations (apply appropriate strategies to remedy that gap; Efklides, 2014). When students are able to control their test preparations, they can convert those judgments into learning strategies that will pay off in the context of test preparation (Metcalfe, 2009). Accurate monitoring judgments can lead to accurate control decisions.
If control is to be effective, then monitoring should inform what needs to be done, be appropriate for the context, and be accurate (Efklides, 2014). However, accuracy is impossible to foresee without taking into account the actual test outcome (Roebers, 2002; Dholakia, 2017). Monitoring accuracy can be defined as comparing judgments with actual performance (Roebers, 2002; Engelen et al., 2018). However, when students have to make control decisions during test preparation, this kind of accuracy is of no avail to them. For learning to be effective, students have to make high quality decisions before knowing the final test outcome (Sadler, 1989; Egan, 2015). To ascertain whether a decision is of high quality, students’ focus should not be on the test outcome, but on the effective control decision-making process to cease learning so time and effort can be devoted to other subjects (Dholakia, 2017).
Efklides (2014) states that if control is to be effective, monitoring judgments should accurately represent student learning, and control should inform the student when test preparation is sufficient so that efforts can be focussed elsewhere. This implies that students must make a conscious high quality control decision and cease studying specific content. Monitoring learning and control are important parts of metacognition, the knowledge of one’s own learning (Nelson and Narens, 1990). Although metacognition has been defined in various ways by researchers through the years, these definitions are always relatively close to its original meaning (Efklides and Vauras, 1999). Two key components are derived from these definitions: (1) awareness and knowledge of students’ own learning and (2) the control that students wield over their own learning. Awareness is difficult to define objectively because it is a subjective experience (Merikle, 1984). Moreover, monitoring and control can actually operate without much awareness (Reder and Schunn, 1996). However, being consciously aware is also an important factor, for it provides the input for metacognitive processes like control decisions (Efklides, 2011). We therefore defined awareness in line with Henley (1984) as students being conscious of their learning strategies and the control decisions they make. The control that students wield over their own learning is defined as “the knowledge and control children have over their own thinking and learning activities” (Cross and Paris, 1988, p. 131). Classrooms are full of students with varying levels of consciousness about how they learn (Young and Fry, 2012). This does not mean that students are either conscious learners or not all the time, although it is common in the literature to approach awareness as something that is or is not present (Schraw and Dennison, 1994; Reder and Schunn, 1996; Hughes, 2017). In a specific context, however, students can either explain why they chose to rehearse, or they simply rehearsed without knowing why, perhaps just because that is what they know. Being aware of selecting a specific learning strategy for a specific learning goal is therefore something that a student either is or is not (Henley, 1984; Merikle, 1984). In a specific context, awareness can be dichotomous.
It is well established that awareness and metacognition play an important role in HE learning, as they affect learning strategies like monitoring and control decisions (Butler and Cartier, 2004; Lai, 2011; De Bruin et al., 2016). Moreover, awareness is a key factor in metacognitively skilled students (Nelson and Narens, 1990; Schraw and Dennison, 1994; Efklides, 2011). Monitoring can be defined as “attending to and being aware of comprehension and task performance,” while control (regulation) can be defined as “identification and selection of appropriate strategies and allocation of resources” (Lai, 2011, p. 7). Monitoring functions as the students’ source for their judgment of learning (Nelson and Narens, 1990; Efklides, 2014), but the actual selection and application of learning strategies is regarded as a separate process. Among these control strategies, the conscious decision to cease studying specific content and (re)study other content that needs attention is a control skill that makes test preparation more efficient. To make such a decision consciously in a summative assessment setting, students not only have to be aware of their learning and be able to make a judgment on specific content, but also be able to decide whether this judgment is sufficient to pass the test.
However, research findings indicate that many HE students lack effective learning strategies and therefore do not study effectively (Heikkilä et al., 2012; Meusen-Beekman et al., 2015; Virtanen et al., 2015; De Bruin et al., 2017; van de Pol et al., 2019). Moreover, Efklides (2014) states that being aware of their learning does not imply that students are able to actually take control and cease studying to focus their efforts elsewhere. Control decisions are often influenced by considerations other than monitoring judgments, for instance by motivational, affective, cognitive or volitional factors (Efklides, 2011). Motivational factors may include goal orientation and self-efficacy. Students’ goal orientations are known to strongly influence their study behaviour (Van der Linden et al., 2021). For example, they influence effort and can be divided into mastery and performance orientations (Pintrich, 2000). Self-efficacy is a subjective judgment of a student’s level of competence in executing certain behaviors, like control decisions (Bandura, 1997; Zimmerman, 2008). Although these other considerations and resource strategies like effort bear some relation to taking control decisions, evidence suggests that the relations between monitoring and control are not as close as could be expected (Kyndt et al., 2011; Efklides, 2012). Therefore, even if students are aware of and can accurately monitor their learning, this awareness does not automatically translate into high quality control decisions that benefit the learning process in test preparation.
It remains unclear whether there are differences between unaware and aware students in relation to control decisions like ceasing test preparations, and whether aware students are better at making this decision. Most studies about monitoring learning focus on methods for assessing the impact of instructional practices rather than on if, how, and when students make control decisions (Garrett et al., 2007; Dinsmore and Parkinson, 2013; Van Loon, 2014). Van Loon therefore advises researchers to ‘not only investigate effects of instructions on monitoring and restudy selections, but also to investigate how monitoring and regulation are related to achievement’ (Van Loon, 2014, p. 168). This is in line with Foster et al., who stated ‘research that attempts to better understand the bases of students’ exam predictions may ultimately inform how to improve overall student achievement’ (Foster et al., 2016, p. 14).
The main goal of this research is to explore if, how, and when students make control decisions to cease studying specific content when preparing for a specific summative test. Since it is known that students who are aware of the way they learn generally make higher quality monitoring judgments, differences between aware and unaware students can be used to better understand if, how, and when students come to execute control decisions and decide to cease studying for a summative test. The research question therefore is: How do aware and unaware students make control decisions to cease studying for a test?
Materials and methods
Qualitative data were used to establish students’ awareness status and to explore how students come to control decisions when studying for a test. We studied students’ learning perceptions and how they prepared for, monitored, and controlled their test preparations for an achievement test. The main data source consisted of qualitative self-report measures in the form of interviews (Creswell, 2014). The advantage of this constructivist grounded theory approach is that participants and researchers both add value to the interpretation of the data (Charmaz, 2006; Boeije, 2014). In addition, focussing on the student learning process toward a summative assessment enhanced ecological validity (Lai, 2011). The constructivist part of this approach implies that relevant literature on metacognition, monitoring, assessment, feedback, and self-directed learning influenced the development of the research questions, the interview guideline, and the data analyses. Despite the known limitations of retrospective reports on metacognition, interviews are the most fitting means of collecting data given the exploratory nature of our research question (Akturk and Sahin, 2011). Interviews enable an in-depth investigation of students’ retrospective judgments, which correlate with actual performance accuracy (Chua et al., 2009).
Because the literature states that students differ in their goal approach and in their metacognitive control through awareness, we chose to juxtapose aware students and unaware students.
Because awareness could only be established during the interviews, a questionnaire was used in an attempt to establish a preliminary awareness status before the interviews. Our goal was to alternate interviews between different student types so differences could emerge from the start. We also collected the grades scored on the pertaining achievement tests. Our attempt to establish awareness status beforehand proved futile. Awareness of learning was therefore only indicated in the interviews when, for instance, students showed forethought in selecting certain self-regulated learning (SRL) strategies, which was an indication of being conscious of their learning. A distinction can be derived qualitatively by analyzing the deliberateness of the students’ SRL actions. Our estimates are captured in the column Awareness of learning in Table 1 for each student and in Table 2 for groups. The data from the questionnaire were mainly used to emphasize certain topics within the semi-structured interview guideline.
Design
Individual semi-structured interviews were chosen as the main data source. This allowed us to address topics of interest related to motivation and learning strategies, and it allowed the students to speak freely. A semi-structured interview guideline was constructed based on the literature.
To ensure students would differ in their learning, monitoring and control decisions, both aware and unaware students were interviewed. Before being interviewed, students answered questions from the Motivated Strategies for Learning Questionnaire (MSLQ; Pintrich et al., 1991). The MSLQ was chosen because of its proven ability to offer insights into students’ motivation and learning strategies (Pintrich et al., 1991; Duncan and McKeachie, 2005; Lee et al., 2010; Tock et al., 2017).
Qualitative interviewing is a self-report technique, in contrast to the observation of actual monitoring behaviour (e.g., thinking-aloud protocols) (Veenman, 2011; Panadero et al., 2016). Hence, this method presents disadvantages that we compensated for by implementing specific actions. First, to reduce memory distortion, we interviewed students during the semester, a few weeks before the exam period. Moreover, this provided insights into the ongoing (meta)cognitive activities. Second, we used a semi-structured interview based on the four areas of regulation in the second phase of self-regulated learning (monitoring) from Pintrich (2000), which helps identify individual variations, combining the students’ perspectives with the topics we considered relevant. Third, we did not specify and address the monitoring and control activities, so participants could use their own words. Further, we asked participants detailed questions, partly based on the outcome of the MSLQ, in order to gain insights into how they arrived at control decisions. This resulted in extensive and highly detailed transcripts.
Subjects
We used a purposive sampling strategy for the interviews. Participants were recruited from teacher-training programs at a large University of Applied Sciences in the Netherlands. These four-year bachelor programs consist of two semesters per year, and each semester comprises two periods. Modules last either one period or a full semester. The context of the interviews was the test preparation period for a norm-referenced knowledge test (Bond, 1995) in Educational Sciences, which is the same for all programs.
Second- and third-year students were sent an email introducing the study and inviting their participation. Second- and third-year students were approached because they had proven their ability to study for tests. First-year students were excluded because they are not familiar enough with the assessment system and, in our estimation, could therefore not provide the needed insight. Fourth-year students do not take conventional achievement tests in their final program year, and therefore were not invited.
Eighteen students (eight male and ten female) from seven teacher educational programs (e.g., Biology, History, German) participated in individual semi-structured interviews with a researcher, lasting 42–68 min. The average student age was 21.7 years (SD = 1.94). Fifteen students had completed secondary education, while three entered HE with a background in vocational education. Due to scheduling issues, one interview was conducted with two students from the same program.
Interviews
The interviews were aimed at gaining insight into learning for a test, into whether the students showed a conscious selection of learning strategies (awareness of learning), and into whether and how students came to a conscious control decision. The outcome of the MSLQ was used to explore certain SRL topics in more depth during the interview, for instance the degree of SRL or the help from peers. Open-ended questions were used to examine participants’ perspectives on SRL strategy use, including control decisions. Students were asked to describe learning experiences during test preparation, including how, when, and where learning had occurred, how they monitored and controlled their study process, if and how peers were involved, which learning strategy was used, what goals were pursued, and whether or not these goals were achieved.
For the first four interviews, questions were posed by two interviewers: a trained interviewer and the first author. After the first two interviews, the procedure was evaluated. The evaluation indicated that the semi-structured interview guideline was adequate, so the guideline was set. For the remaining interviews, open-ended questions were posed by one interviewer (the first author). None of the interviewers were involved in a teaching role for the participating students at the time of the interviews.
Procedure
The study was approved by the Ethical Research Committee of the first author’s second affiliated university (HAN University of Applied Sciences), approval number ECO 283.06/21. After students replied to the invitation email, we obtained their consent to check their transcript and administered the MSLQ as a questionnaire. Before interviews were planned, two researchers discussed the analysis of the MSLQ and the transcript. Interviews were recorded and transcribed verbatim, and any identifying data were then removed from the original interview transcripts.
Data analysis
The summative assessment system at this university entails that all courses end in an exam, which students pass if they achieve a grade higher than 5.5 on a scale from 1 to 10. The knowledge tests for Educational Sciences consisted of 40 multiple choice items. Grades for all Educational Sciences knowledge tests the students had taken were retrieved from their transcripts, and means and standard deviations were computed using SPSS version 22. The students’ motivation and learning strategies were measured using the MSLQ. The scales of the MSLQ were calculated according to the method prescribed by Pintrich et al. (1991) using SPSS version 22: each scale was constructed by taking the mean of the items that make up that scale. Students rated themselves on a seven-point Likert scale from “not at all true of me” to “very true of me.”
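The scale scoring described above is straightforward to reproduce. The following is a minimal illustrative sketch, not the authors’ SPSS syntax: the column names, item groupings, and values are hypothetical and serve only to show how MSLQ scale scores (item means) and grade descriptives could be computed.

```python
# Illustrative sketch (not the authors' SPSS analysis): MSLQ scale scores as
# item means plus grade descriptives. All column names and values are hypothetical.
import pandas as pd

# Each row is a student; mslq_* columns hold 7-point Likert ratings (1-7),
# grade_* columns hold Educational Sciences knowledge-test grades (1-10 scale).
data = pd.DataFrame({
    "student": ["s01", "s02", "s03"],
    "mslq_selfeff_1": [5, 3, 6],
    "mslq_selfeff_2": [6, 4, 7],
    "mslq_selfeff_3": [5, 2, 6],
    "grade_test_1": [6.5, 5.0, 8.0],
    "grade_test_2": [7.0, 5.5, 7.5],
})

# MSLQ scale score = mean of the items belonging to that scale (Pintrich et al., 1991)
self_efficacy_items = ["mslq_selfeff_1", "mslq_selfeff_2", "mslq_selfeff_3"]
data["mslq_self_efficacy"] = data[self_efficacy_items].mean(axis=1)

# Descriptive statistics (mean, SD) of the knowledge-test grades
grade_cols = ["grade_test_1", "grade_test_2"]
print(data[grade_cols].agg(["mean", "std"]))
print(data[["student", "mslq_self_efficacy"]])
```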
The interview data were analyzed using template analysis (Brooks et al., 2014; Creswell, 2014) in Atlas-Ti 8. The initial analysis started with two transcripts: these were read in detail, and emerging themes were identified. The remaining transcripts were then analyzed. The analysis also involved an inductive component, in which the four themes were identified. The template analysis focused on differences and similarities between aware and unaware students with regard to the four themes: goal orientation, monitoring learning, learning strategies, and action (see Table 3 for the coding template). Using this coding scheme, we were able to thematically organize and classify the data. Awareness status was derived from the qualitative data by analyzing whether students made conscious and confident choices about learning strategies or control decisions. The analysis can be seen as a deductive approach, using the themes as a template.
The authors’ interpretation in the analysis served the goal of reaching an in-depth understanding of how students came to make control decisions. Template analysis is systematic, but always subjective. Our design is a constructivist grounded theory approach, which does not require a consistent estimate of the same phenomenon (LeCompte and Goetz, 1982; Cheung and Tai, 2021). We acknowledge that the data in this study are co-constructed through interactions with the participants, as are the interpretations and meaning we gave to these data (Watling and Lingard, 2012). We used a constant comparison method to establish reliability (Charmaz, 2006). The first two authors discussed the emerging codes from the first analysis with the third and fourth authors. Codes were merged into groups, which led to themes in iteration with all authors. The first author (JBL) then recoded all interviews according to the latest themes. This provided an overview per theme, which was used as a basis for the description of the results.
Results
During the interviews and the analyses, students showed whether or not they made conscious and confident choices. From these choices it became apparent which students were aware and which were unaware. Five of the eighteen students showed that they were aware by making conscious and confident choices. All other students were not aware and did not make conscious and confident choices. Instead, most of these students “chose” learning strategies based on familiarity.
We analyzed the quantitative data by running a point-biserial correlation in SPSS version 22 to determine the relationship between awareness status and the outcomes of the MSLQ and the grades. There were no significant correlations between awareness status and the grades or the MSLQ.
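For readers who want to reproduce this type of analysis outside SPSS, the sketch below shows a point-biserial correlation between a dichotomous awareness status and a continuous outcome. It is purely illustrative: the awareness coding and grade values are fabricated for demonstration and are not the study data.

```python
# Illustrative sketch (the original analysis was run in SPSS version 22):
# point-biserial correlation between dichotomous awareness status and grades.
# The values below are made up for demonstration only.
from scipy import stats

# 1 = aware, 0 = unaware (one entry per student; 5 of 18 aware, as in the study design)
awareness = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0]
# e.g., mean knowledge-test grade per student (1-10 scale), hypothetical values
grades = [7.1, 6.0, 5.8, 6.9, 6.4, 7.5, 5.9, 6.2, 7.0, 6.1,
          5.7, 6.3, 6.8, 6.0, 6.5, 5.9, 6.2, 6.4]

r, p = stats.pointbiserialr(awareness, grades)
print(f"r_pb = {r:.2f}, p = {p:.3f}")  # a p-value > .05 would correspond to the non-significant result reported
```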
Students come to making control decisions in a four-step iteration: (1) goal orientation, (2) monitoring learning, (3) learning strategies, and (4) action. From the qualitative data, three main aspects that influence a high quality control decision were identified: progression of learning judgments, students’ self-efficacy, and saturation. The student characteristics are presented in Table 1.
Progression of learning judgments
As students learned during test preparation, they experienced a variety of cues: how difficult something appears, recognition, and comments from peers. All these cues provided information about the progression of learning. Both aware and unaware students used these various cues first to make judgments about their progression of learning for a specific subject being studied at a particular moment. We distinguish between those students who are aware and those who are unaware in the way that these cues are collected.
Unaware
Unaware students do not consciously collect cues to inform their learning progress. However, this does not mean that they do not encounter cues. Consider Anna, who attends a voluntary interrogation session spontaneously initiated by students who happen to meet up before class. There she becomes aware of the subjects that she knows or those she does not (yet) know:
Yes, I do become aware of things that I do or do not know. Because if other students mention things, and you’re like, oh, I’ve no idea what this is about, you know that, ok, I really have to pay attention to this. But if, for example, you have a subject, or if they ask a question, and you can explain it, or name it, or give the meaning, or whatever, then you know for yourself, okay, I’m in the right place here (Anna, unaware).
To be able to make a judgment, these kinds of cues are sufficient at that specific moment for students to decide whether or not they have mastered a certain topic.
Aware
Aware students differ from unaware students in that they consciously seek out moments where they can confirm the progression of learning of what they have studied. Where Anna came across the interrogation session because she was a bit early for class, an aware student like Neline consciously planned a session with one of her peers. She chose not just any peer, but one suited to the task at hand, namely confirmation of a certain topic:
Then my classmate, who I have a high opinion of, explained something I already knew. So for me, that was the confirmation that I know it (Neline, aware).
A valued peer explains something, but the student is aware that she already knows the answer. This tells her something about the state of knowledge on that particular topic, a judgment of the progression of learning.
Both aware and unaware students become aware of their progression of learning regarding test preparations for a specific topic, but aware students make sure that this awareness is not dependent on coincidental cue encounters. Note that this progression affirmation is limited to specific topics and does not extend to the whole test.
The self-efficacy of students and the value attribution of cues
From the analysis we derived that cues are important carriers of information for students to make a judgment about their learning progression. However, even when students do perceive cues, they do not naturally make a high quality control decision. Results show that some students do not attribute any value to these cues when confirming their test preparation, and therefore do not make a control decision and continue studying, thus using time that could be spent otherwise. It appears that the value awarded to a cue influences the decision to continue or cease studying, and not the cue itself. Whether students attribute a confirmational value to a perceived cue is mostly dependent on their self-efficacy. We substantiate these claims in the following sections.
Unaware
When unaware students perceive cues, they often attribute no value related to test preparation progress. They do not always attribute a confirmational value to a cue that could confirm learning. For example, despite Moniek’s low self-efficacy, she gets high grades but shows signs of limited metacognitive ability. She tries to catch up with a peer who, in her eyes, does a good job at summarizing using a card system that she can also use:
To gain confirmation, I always try to catch up with that girl with the cards [a student who makes and uses summary cards to perform self-assessment] before the exam. I never know if she’ll already be there because I don’t talk to her very much … so I start studying myself, but then she comes, because she’s always early. (Moniek, unaware).
Although Moniek seeks confirmation from her peer, she does not attribute any confirmational value to her peer’s cues with regard to the decision to cease studying. Moniek indicates that she continues to study, even after a confirmational cue. Other students, like for example Anna, show similar behaviour:
So I study until the test starts, yes, really until the test (Anna, unaware).
Moniek illustrates this continuous studying by stating that it only becomes clear to her after the test that her judgment had in fact been accurate:
For me, whether I studied well is only clear in the end, when I get my grade. Then I can see it (Moniek, unaware).
This example shows that despite confirmational cue-seeking behaviour, these cues are not awarded the value of ‘confirmation of mastering’ and therefore do not lead to a high quality control decision, quite apart from the inaccurate monitoring process of unaware students. Moniek studies right up to the start of the exam, despite seeking confirmation about her study progression from her peer. It is notable that this confirmational value is often missing in students with low self-efficacy, as illustrated by Moniek:
I never expected to be here (at university), coming from lower vocational education (Moniek, unaware).
Results show that self-efficacy can have a substantial impact on the value attribution of cues, and hence also on the effectiveness and efficiency of the test preparation process.
Aware
Aware students deliberately seek out and perceive cues that can confirm study progression, but not all aware students make the appropriate control decision to cease studying. Some students do not attribute any confirmational value to these cues and do not make a control decision, and therefore they continue their study. Emma is classified as an aware student because she shows clear signs of having a deliberate learning strategy and because she is able to make high quality judgments:
Let’s see … while learning I know that I know something; what I often do is, for example, for Subject 1, then I take those learning goals and then I read that learning goal and then I just say out loud to myself: Okay, it’s about here, here, here, here and here, and then I take the summary again: Oh yes, I’ve named that, I’ve named that, oh, not that yet. Then I say that out loud again, or even twice and that’s basically how I finish them. It’s actually a kind of rehearsal (Emma, aware).
Although she can clearly make a high quality judgment, she does not attribute a confirmational value to the cues that could inform her to make a control decision to cease studying.
I have to say that it’s quite difficult to know whether or not I’ve studied enough for the test. I always keep on studying and studying until the test begins. Even in the final hour, we sit down with a few of my classmates – yes, study mates – we sit together and we go over it again, you know. I’m often studying until the last second, minute. Whether that’s healthy is another thing but actually yes, until the last moment. I can always go over it again, doubt myself (Emma, aware).
Even during and after the test she has doubts about the upcoming results, despite the confirmation of studying specific contents:
No, indeed, while learning, then I know very easily, oh yes, but that and that is it. That’s a signal for me, hey apparently I know that too. But I don’t have that during an exam. Then I’m just filling everything in and then I think at the end: I was able to fill in everything. But does that feel like a relief? No, not that. I’m also always someone, when I’ve finished the exam, who reads it 2 or 3 times before I hand it in (Emma, aware).
This doubt has an enormous impact on her ability to predict the outcome of the test, and many other tests, despite regularly achieving high grades.
The grades often surprise me. When I’ve passed the test, I often doubt whether I passed it or not. And when I’ve had a test, I’ll compare with others: what did you fill in and such. And then you often start to doubt, is he right or is she right or am I right? And that makes me think a lot: Yes, now I’m no longer sure. Then I often think I didn’t pass, while I usually do, but then I’m a bit surprised on the one hand. Then I think: Oh, then I really had more correct than I thought (Emma, aware).
Although Emma evidently is very capable of making high quality judgments, her self-efficacy has an impact on the confirmational value that she attributes to the cues she encounters during test preparation. In contrast, aware students with high self-efficacy do award a confirmational value to cues in relation to test preparation followed by control decisions.
By awarding a confirmational value to cues and the subsequent control decision, Iris regulates the amount of time and effort she spends on test preparation:
It was as if we were looking for a way to study for just a pass. At least that’s how I saw it: not too little effort and not too much. And I thought: that’s exactly what I do. That’s my strength. Don’t put too much effort into studying and still pass; that I can do well (Iris, aware).
Attributing the right value to cues appears to be a separate metacognitive skill, dependent on the students’ self-efficacy. Although Iris and Emma are both classified as aware, they vary in the value they attribute to cues, therefore also in confirming whether their test preparations are sufficient to pass a test. They consequently make different control decisions. This influence of self-efficacy is seen in both aware and unaware students.
Saturation through the accumulation of judgments
Due to multiple simultaneous assessments at the end of an educational module, efficient test preparation entails that students are not only able to make control decisions about individual topics, but also about the whole upcoming test. During test preparation, students compare their learning progression with their mental image of what will be asked in the test. This image is formed during the course and comprises different sources: learning goals, things pointed out by teachers in classes, peers, literature, etc. There are however clear differences between aware and unaware students in the details and the quality of these mental images.
Based on the accumulation of progression of learning judgments, the value attributed to the cues encountered during test preparation, and their mental image, students estimate whether their test preparation has been sufficient to pass the test. If the judgments and the value attribution of cues are aligned, saturation can occur and students are able to draw the conclusion that their test preparation will suffice to pass the test. These students can then make a control decision to cease test preparation and devote their time to other tests. We will refer to this kind of saturation as content saturation. If a student is unable to make this estimate, studying either continues without contributing to moving beyond the pass grade, or the control decision to cease is made before it is certain that the necessary learning level has been reached. This ceasing of test preparation without content saturation was also found among students. It is also a form of saturation in the sense that further investments in test preparation are no longer seen as viable, but without the certainty that the test will be passed, often remarked upon as being ‘fed up with learning’. We refer to this type of saturation as effort saturation. Content saturation is based on the confirmation of learning from cues and the value attribution of these cues, which feeds the gradually growing cognizance that learning will be sufficient to pass the test, thereby guiding the learning process. In contrast, due to students’ lack of learning confirmations, effort saturation is based on the notion that effort is limited and that unabated investments are futile.
Unaware
Unaware students form low quality and rough mental images of what will be in the test:
I think that if I can reproduce something, recognise, or guess something, then I can pass, sort of (Sander, unaware).
The cue of recognition when studying leads Sander to believe that he can ‘sort of pass’ the test. Because of these poor images, in addition to low quality progression of learning judgments, this can only translate into low quality control decisions. This poor decision making is reflected in the large number of tests in his transcript which he needed to resit to achieve a pass grade (22 first passes and 15 resits).
However, we also encountered a moment in time when test preparation ceased without coming to a judgment of learning progression for the whole test. This was when students experienced effort saturation in learning. Instead of ‘enough learned for the test’, these students described a situation where they were ‘fed up with learning’. Although both will stop the learning process, sometimes only for a short time, the reason for this control decision is different from a high quality judgment:
I think mostly, when I … if I’ve been studying for a long time, at one point I tend to think: I’m done with it. Then I think: Okay, even though I may not know it all well enough, or I don’t know everything, I have the feeling I have to stop right now, because I can’t get anything more into my head. And then I stop studying (Anna, unaware).
Apparently, for some students there is no such thing as a moment when goals are met and they cease test preparation. Some students carry on as long as they are able, sometimes until right before the exam. But for the majority of unaware students, the control decision is eventually made based on the feeling of being ‘fed up with learning’, i.e., effort saturation.
Aware
In contrast, aware students acquire cues and confirmation of learning in a different manner. Inquisitive students actively seek these cues (e.g., when they actively seek the company of specific peers), while others monitor by observing and deducing information from their surroundings. Cues, and with them judgments, can also arise in individual study sessions. Although Anton does not consider every single judgment, he has a mental image of what needs to be known. Cues are not only responsible for the subjective judgment of learning progression, but they can also be used to make control decisions. Anton is aware that he could say something about a specific subject when going over the different topics, but he is also aware that his learning will suffice for the test. The accumulation and content saturation of cues form the basis for a control decision:
If I can see for myself globally what the subject is about, then I know that I’ve learned enough. So more of: I know what material I have to learn. And I don’t have an overview of that or anything like that, but for myself that I really think: if I go over everything, read it through once, then I do think: I can say something about everything I have to understand. Then I know that my learning will suffice. Stopping is a button you have to flip. I have no problem with that at all (Anton, aware).
High quality control decisions were almost always inferred from the accumulation of progression of learning judgments and appropriate value attributions. Content saturation, the point where additional studying does not contribute to what is regarded as “enough,” not only informs the student, but also forms the basis of the control decision to cease test preparation. The difference between aware and unaware students is mainly the quality of the judgments and the mental image, more specifically, the difference between “then I can pass, sort of” versus “then I know that my learning will suffice,” which is in line with what both groups show in their respective transcripts, for example in the number of resits. In aware students, a distinction can be made in attributing the appropriate value to cues, which is an important factor in making the control decision. Either way, for most students it is true that, given the limits on time and effort, studying is not as effective and efficient as it should or could be.
Discussion
In 1974, Miller and Parlett established that passing the test is important to students and that they adapt their learning whenever possible to achieve that goal with as little effort as possible. In this we see a major influence of the summative assessment system on student learning. This is still true today, even for innovative modes of assessment (Segers et al., 2001), although our results are also in line with research stating that only a minority of students are sufficiently aware to be able to make this learning effective. Miller and Parlett (1974) referred to aware students as being cue-conscious and to unaware students as cue-deaf. To accomplish this goal, students make judgments about learning during test preparation by comparing their current state of learning with their mental image of what is going to be in the test. Nelson and Narens (1990, 1994) also pointed out that the meta-level contains a representation of a mental image. We found, in line with Pieschl (2008), this mental image to be dynamic and to become more specific over time. This dynamic representation makes identifying learning gaps extremely dependent on perceptions about both what is going to be in the test (the mental image) and the quality of the monitoring process, both of which are of higher quality in aware students. Of course, this is nothing new, but our findings support and build on this knowledge.
The students in our sample used various cues to decide whether they had learned enough to cease test preparation. This use of cues as potential information vehicles echoes the widespread support found for it in the metacognitive literature (Metcalfe et al., 1993; Koriat, 1997; Mitchum, 2007; Zohar and Barzilai, 2013; Efklides, 2014; Foster et al., 2016; De Bruin et al., 2017). Koriat’s cue-utilization approach (1997) has the advantage of specifying the informational basis of progression of learning judgments (Mitchum, 2007). However, it does not provide insights into the role of cues in making control decisions. Our study further supports the usability of the cue-utilization approach by providing the insight that the self-efficacy of students can have an enormous influence on making effective control decisions. Students with developed metacognitive skills, which we regarded as aware students, were cognisant that some cues contained important information about their learning progression. They perceived that those cues held an important value. If cues were consistent during learning, for instance during class, while evaluating with peers, or during rehearsal, this contributed to confident and high quality judgments of learning progression. In contrast, students with low self-efficacy do not attribute the proper value, thereby sometimes underestimating their learning status, and will continue learning unabated, but without contributing to moving beyond the pass grade. This influence of self-efficacy is seen in both aware and unaware students. Aware students seek cues, but do not always attribute a confirmational value to these cues, hence some continue learning. Unaware students also encounter cues, albeit coincidental in nature, and some efficacious students will attribute confirmational values to these cues. However, this confirmation will not necessarily align with reality. When students do stop their learning, it is often not because of a high quality monitoring judgment, but because they are fed up. These different reasons for ceasing are in line with Efklides (2011), who states that control decisions are influenced by various considerations. Our results show that with unaware students this is mainly through motivation and affect.
Our findings are in line with those of Foster et al. (2016), who suggest that the accuracy of most students’ predictions about their exam performance in the classroom did not improve over time. In addition, Van Overschelde (2008) found that, partly due to limited resources, not all cues are perceived as equal at a meta-level. This performance inaccuracy can be explained by our findings: even when students are able to make high quality judgments, some do not perceive these as an accurate basis for making appropriate control decisions because of misaligned value attributions due to a lack of self-efficacy.
Research suggests that students with developed metacognitive skills are not necessarily those who attain high grades (Ablard and Lipschultz, 1998; Van der Linden et al., 2021). Study success can also be achieved by a student using metacognitive skills to achieve just a pass. Interventions to increase metacognitive skills have very limited effect on making test preparation more efficient for these students, because they already know if, when, and how to make the appropriate control decisions. In this study, these students seemed not to be interested in further developing their metacognitive ability, precisely because of their tenacity to simply succeed. We also encountered students who would benefit from metacognitive training, but who discounted the need for it because they receive sufficiently high grades. However, metacognitive judgments are often in conflict with objective measures of learning, like tests (Ghonsooly et al., 2014; Foster et al., 2016; Fritzsche et al., 2018; Van der Linden et al., 2021). Students with high self-efficacy seem to dare to attribute a confirmative value to the same cue that students with low self-efficacy dare not to, regardless of the grade.
Our work suggests that students’ self-efficacy strongly influences the effectiveness of test preparation. Hashempour et al. (2015) refer to this under- or overestimation as metacognitive miscalibration. Bangert-Drowns et al. (1991) and Fritzsche et al. (2018) showed that the learners’ initial state, with self-efficacy as one characterisation, is a factor in monitoring accuracy and therefore also in making high quality control decisions. This is also in line with Roebers, who states that a high quality decision is made when students are conscious of and have confidence in their decisions (Roebers, 2002). For unaware students, metacognitive awareness is therefore a cognitive skill that needs to be developed (Ambrose et al., 2010). This also depends on the students’ self-efficacy, which can impact performance and is typically classified as a factor liable for individual differences (Hashempour et al., 2015; Wolters et al., 2017). On a positive note, metacognitive awareness is something that can be taught (Siero and van Oudenhoven, 1995; Meusen-Beekman et al., 2015; Leenknecht and Prins, 2018). De Bruin et al. (2016) found, in line with the better achievement findings of Butler (1998), that training students to apply a monitoring and regulation strategy positively influenced monitoring accuracy and test achievement, and gave them a better notion of their judgments. One could argue that it is the function of education not only to develop SRL strategies, but also to instil sufficient self-efficacy in students. Previous research has shown that the development of self-efficacy is also possible (Sewell and St George, 2000). Unfortunately, although the development of SRL and self-efficacy is possible, a high number of students in HE lack these important skills. This implies that there is still much to do within classrooms.
The use of the MSLQ in this study originated from the wish to alternate aware and unaware students so we could juxtapose them from the start. However, doing so turned out to be futile. The MSLQ does not measure awareness; it measures a number of factors related to self-regulated learning. It became apparent that SRL is, obviously, not synonymous with awareness. Juxtaposing aware and unaware students beforehand was therefore not possible as intended. We consequently abandoned this aim and only established the awareness status afterwards, qualitatively.
Collecting data through individual interviews allowed us to gain insights into the monitoring and control process. We are aware that this individual focus can have limitations because external influences may not have been captured, especially since students mentioned peers and teachers as an important source of cues. Nevertheless, a broad spectrum of different approaches to studying and cue use was encountered. Furthermore, the cognitive skills involved in monitoring and control were studied in the specific context of knowledge test preparation and in the domain of teacher education. Further research is needed to investigate whether these results can be generalized to other types of assessments and whether they are also applicable to other domains. Metacognitive skills could be more present in future teachers, as they are expected to teach such skills to their pupils, although there was no indication that this was the case. It would also be interesting to investigate the topic of this paper within the context of a specific test, since it would then be possible to address the actual accuracy of the monitoring process.
Conclusion
We conclude that aware and unaware students differ when it comes to making high quality control decisions. As expected, the difference mainly unfolds in the quality of the progression of learning judgments and the accuracy of the mental image of the upcoming test. More specific was the difference in attitudes between “then I can pass, sort of” versus “then I know that my learning will suffice.” Notably, our results show that students who attribute a confirmational value to the right cues are able to assess whether they have reached a content saturation point, so that they can cease test preparation and devote time and effort to other goals. This attribution appears to be correlated with students’ self-efficacy, and therefore self-efficacy may play an important role in efficient test preparation. We therefore conclude that control decisions are influenced by a range of considerations, but especially by motivation and affect, in particular self-efficacy. Our results indicate that attributing the correct value to cues is a separate metacognitive skill, very much dependent on self-efficacy.
Data availability statement
The datasets presented in this article are not readily available because permission to make the raw data available was not obtained. Requests to access the datasets should be directed to JL, j.vanderlinden@maastrichtuniversity.nl.
Ethics statement
The studies involving human participants were reviewed and approved by the Ethical Research Committee of the HAN University of Applied Sciences, approval number ECO 283.06/21. The patients/participants provided their written informed consent to participate in this study.
Author contributions
JL, TS-M, LN, and CV contributed to conception and design of the study. JL and TS-M contributed to qualitative analysis and analyzed and coded qualitative data, which was discussed in iteration with the whole team until results became clear. JL, TS-M, and CV wrote sections of the manuscript. All authors contributed to the article and approved the submitted version.
Funding
Financial support from HAN University of Applied Sciences made this research possible.
Acknowledgments
We would like to thank the students who participated in this study. Additional thanks go to Marja van der Linden for her meticulous text revisions and to Annebeth de Jong for her help in transcribing and coding the interviews. We also gratefully acknowledge the financial support from HAN University of Applied Sciences, which made this research possible.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Ablard, K. E., and Lipschultz, R. E. (1998). Self-regulated learning in high-achieving students. J. Educ. Psychol. 90, 94–101. doi: 10.1037/0022-0663.90.1.94
Akturk, A. O., and Sahin, I. (2011). Literature review on metacognition and its measurement. Procedia Soc. Behav. Sci. 15, 3731–3736. doi: 10.1016/j.sbspro.2011.04.364
Ambrose, S. A., Bridges, M. W., DiPietro, M., Lovett, M. C., and Norman, M. K. (2010). How Learning Works (1st). Hoboken, NJ: Jossey-Bass.
Baartman, L., Baukema, H., and Prins, F. (2022). Exploring students' feedback seeking behavior in the context of programmatic assessment. Assess. Eval. High. Educ., 1–15. doi: 10.1080/02602938.2022.2100875
Bandura, A. (1997). Self-Efficacy: The Exercise of Control. New York: WH Freeman/Times Books/Henry Holt & Co.
Bangert-Drowns, R. L., Kulik, J. A., Kulik, C.-L. C., and Morgan, M. (1991). The instructional effect of feedback in test-like events. Rev. Educ. Res. 61, 213–238. doi: 10.3102/00346543061002213
Ben-Eliyahu, A., and Bernacki, M. L. (2015). Addressing complexities in self-regulated learning: a focus on contextual factors, contingencies, and dynamic relations. Metacogn. Learn. 10, 1–13. doi: 10.1007/s11409-015-9134-6
Bond, L. A. (1995). Norm-Referenced Testing and Criterion-Referenced Testing: The Differences in Purpose, Content, and Interpretation of Results. Oak Brook, IL: North Central Regional Educational Lab.
Broekkamp, H., and Van Hout-Wolters, B. H. A. M. (2007). Students' adaptation of study strategies when preparing for classroom tests. Educ. Psychol. Rev. 19, 401–428. doi: 10.1007/s10648-006-9025-0
Brooks, J., McCluskey, S., Turley, E., and King, N. (2014). The utility of template analysis in qualitative psychology research. Qual. Res. Psychol. 12, 202–222. doi: 10.1080/14780887.2014.955224
Brown, G. T. L. (2019). Is assessment for learning really assessment? [perspective]. Front. Educ. 4, 1–7. doi: 10.3389/feduc.2019.00064
Bruinsma, M. (2004). Motivation, cognitive processing and achievement in higher education. Learn. Instr. 14, 549–568. doi: 10.1016/j.learninstruc.2004.09.001
Butler, D. L. (1998). The strategic content learning approach to promoting self-regulated learning: a report of three studies. J. Educ. Psychol. 90, 682–697. doi: 10.1037//0022-0663.90.4.682
Butler, D. L., and Cartier, S. C. (2004). Promoting effective task interpretation as an important work habit: a key to successful teaching and learning. Teach. Coll. Rec. 106, 1729–1758. doi: 10.1111/j.1467-9620.2004.00403.x
Cazan, A.-M. (2012). Self regulated learning strategies - predictors of academic adjustment. Procedia Soc. Behav. Sci. 33, 104–108. doi: 10.1016/j.sbspro.2012.01.092
Cheung, K. K. C., and Tai, K. W. H. (2021). The use of intercoder reliability in qualitative interview data analysis in science education. Res. Sci. Technol. Educ., 1–21. doi: 10.1080/02635143.2021.1993179
Chua, E. F., Schacter, D. L., and Sperling, R. A. (2009). Neural correlates of metamemory: a comparison of feeling-of-knowing and retrospective confidence judgments. J. Cogn. Neurosci. 21, 1751–1765. doi: 10.1162/jocn.2009.21123
Creswell, J. W. (2014). Educational Research: Planning, Conducting, and Evaluating Quantitative and Qualitative Research (4th). London: Pearson.
Cross, D. R., and Paris, S. G. (1988). Developmental and instructional analyses of children's metacognition and reading comprehension. J. Educ. Psychol. 80, 131–142. doi: 10.1037/0022-0663.80.2.131
De Bruin, A. B. H., Dunlosky, J., and Cavalcanti, R. B. (2017). Monitoring and regulation of learning in medical education: the need for predictive cues. Med. Educ. 51, 575–584. doi: 10.1111/medu.13267
De Bruin, A. B. H., Kok, E. M., Lobbestael, J., and De Grip, A. (2016). The impact of an online tool for monitoring and regulating learning at university: overconfidence, learning strategy, and personality. Metacogn. Learn. 12, 21–43. doi: 10.1007/s11409-016-9159-5
Dent, A. L., and Koenka, A. C. (2016). The relation between self-regulated learning and academic achievement across childhood and adolescence: a meta-analysis. Educ. Psychol. Rev. 28, 425–474. doi: 10.1007/s10648-015-9320-8
Dinsmore, D. L., and Parkinson, M. M. (2013). What are confidence judgments made of? Students' explanations for their confidence ratings and what that means for calibration. Learn. Instr. 24, 4–14. doi: 10.1016/j.learninstruc.2012.06.001
Duncan, T. G., and McKeachie, W. J. (2005). The making of the motivated strategies for learning questionnaire. Educ. Psychol. 40, 117–128. doi: 10.1207/s15326985ep4002_6
Efklides, A. (2011). Interactions of metacognition with motivation and affect in self-regulated learning: the MASRL model. Educ. Psychol. 46, 6–25. doi: 10.1080/00461520.2011.538645
Efklides, A. (2012). Commentary: how readily can findings from basic cognitive psychology research be applied in the classroom? Learn. Instr. 22, 290–295. doi: 10.1016/j.learninstruc.2012.01.001
Efklides, A. (2014). How does metacognition contribute to the regulation of learning? An integrative approach. Psychol. Top. 23, 1–30.
Efklides, A., and Vauras, M. (1999). Introduction. Eur. J. Psychol. Educ. 14, 455–459. doi: 10.1007/BF03172972
Egan, B. (2015). What Is a “Good” Decision? How Is Quality Judged? Expert Reference Series of White Papers. Global Knowledge Training LLC. Available at: https://d12vzecr6ihe4p.cloudfront.net/media/966040/wp-what-is-a-good-decision-how-is-quality-judged.pdf
Engelen, J. A. A., Camp, G., van de Pol, J., and de Bruin, A. B. H. (2018). Teachers’ monitoring of students’ text comprehension: can students’ keywords and summaries improve teachers’ judgment accuracy? Metacogn. Learn. 13, 287–307. doi: 10.1007/s11409-018-9187-4
Ericsson, K. A., Krampe, R. T., and Tesch-Römer, C. (1993). The role of deliberate practice in the acquisition of expert performance. Psychol. Rev. 100, 363–406. doi: 10.1037/0033-295X.100.3.363
Foster, N. L., Was, C. A., Dunlosky, J., and Isaacson, R. M. (2016). Even after thirteen class exams, students are still overconfident: the role of memory for past exam performance in student predictions. Metacogn. Learn. 12, 1–19. doi: 10.1007/s11409-016-9158-6
Fritzsche, E., Händel, M., and Kröner, S. (2018). What do second-order judgments tell us about low-performing students’ metacognitive awareness? Metacogn. Learn. 13, 159–177. doi: 10.1007/s11409-018-9182-9
Garrett, J., Alman, M., Gardner, S., and Born, C. (2007). Assessing students' metacognitive skills. Am. J. Pharm. Educ. 71:14. doi: 10.5688/aj710114
Ghonsooly, B., Khajavy, G. H., and Mahjoobi, F. M. (2014). Self-efficacy and metacognition as predictors of Iranian teacher trainees’ academic performance: a path analysis approach. Procedia Soc. Behav. Sci. 98, 590–598. doi: 10.1016/j.sbspro.2014.03.455
Hashempour, M., Ghonsooly, B., and Ghanizadeh, A. (2015). A study of translation students' self-regulation and metacognitive awareness in association with their gender and educational level. Int. J. Comp. Lit. Translat. Stud. 3, 60–69. doi: 10.7575/aiac.ijclts.v.3n.3p.60
Heikkilä, A., Lonka, K., Nieminen, J., and Niemivirta, M. (2012). Relations between teacher students’ approaches to learning, cognitive and attributional strategies, well-being, and study success. High. Educ. 64, 455–471. doi: 10.1007/s10734-012-9504-9
Henley, S. H. A. (1984). Unconscious perception re-revisited: a comment on Merikle’s (1982) paper. Bull. Psychon. Soc. 22, 121–124. doi: 10.3758/BF03333780
Hughes, A. (2017). Educational complexity and professional development: teachers’ need for metacognitive awareness. J. Technol. Educ. 29, 25–44. doi: 10.21061/jte.v29i1.a.2
Jessop, T., El Hakim, Y., and Gibbs, G. (2014). The whole is greater than the sum of its parts: a large-scale study of students’ learning in response to different programme assessment patterns. Assess. Eval. High. Educ. 39, 73–88. doi: 10.1080/02602938.2013.792108
Jessop, T., and Tomas, C. (2017). The implications of programme assessment patterns for student learning. Assess. Eval. High. Educ. 42, 990–999. doi: 10.1080/02602938.2016.1217501
Kelly-Laubscher, R. F., and Luckett, K. (2016). Differences in curriculum structure between high school and university biology: the implications for epistemological access. J. Biol. Educ. 50, 425–441. doi: 10.1080/00219266.2016.1138991
Kitsantas, A., Winsler, A., and Huie, F. (2008). Self-regulation and ability predictors of academic success during college: a predictive validity study. J. Adv. Acad. 20, 42–68. doi: 10.4219/jaa-2008-867
Koriat, A. (1997). Monitoring one's own knowledge during study: a cue-utilization approach to judgments of learning. J. Exp. Psychol. Gen. 126, 349–370. doi: 10.1037//0096-3445.126.4.349
Kyndt, E., Dochy, F., Struyven, K., and Cascallar, E. (2011). The perception of workload and task complexity and its influence on students' approaches to learning: a study in higher education. Eur. J. Psychol. Educ. 26, 393–415. doi: 10.1007/s10212-010-0053-2
Lai, E. R. (2011). Metacognition: A Literature Review. Pearson Research Report, 24, 1–40. Available at: http://images.pearsonassessments.com/images/tmrs/Metacognition_Literature_Review_Final.pdf
LeCompte, M. D., and Goetz, J. P. (1982). Problems of reliability and validity in ethnographic research. Rev. Educ. Res. 52, 31–60. doi: 10.3102/00346543052001031
Lee, J. C.-K., Zhang, Z., and Yin, H. (2010). Using multidimensional Rasch analysis to validate the Chinese version of the motivated strategies for learning questionnaire (MSLQ-CV). Eur. J. Psychol. Educ. 25, 141–155. doi: 10.1007/s10212-009-0009-6
Leenknecht, M. J. M., and Prins, F. J. (2018). Formative peer assessment in primary school: the effects of involving pupils in setting assessment criteria on their appraisal and feedback style. Eur. J. Psychol. Educ. 33, 101–116. doi: 10.1007/s10212-017-0340-2
Markus, K. A., and Borsboom, D. (2013). Frontiers of Test Validity Theory: Measurement, Causation, and Meaning. London: Routledge/Taylor & Francis Group.
Merikle, P. M. (1984). Toward a definition of awareness. Bull. Psychon. Soc. 22, 449–450. doi: 10.3758/BF03333874
Metcalfe, J. (2009). Metacognitive judgments and control of study. Curr. Dir. Psychol. Sci. 18, 159–163. doi: 10.1111/j.1467-8721.2009.01628.x
Metcalfe, J., Schwartz, B. L., and Joaquim, S. G. (1993). The cue-familiarity heuristic in metacognition. J. Exp. Psychol. Learn. Mem. Cogn. 19, 851–861. doi: 10.1037/0278-7393.19.4.851
Meusen-Beekman, K. D., Joosten-ten Brinke, D., and Boshuizen, H. P. A. (2015). Developing young adolescents' self-regulation by means of formative assessment: a theoretical perspective. Cogent Educ. 2, 1–16. doi: 10.1080/2331186X.2015.1071233
Miller, C. M., and Parlett, M. R. (1974). Up to the Mark (Vol. 21). London: Society for Research into Higher Education.
Mitchum, A. L. (2007). A cue-utilization approach to cognitive monitoring and performance: the effect of strategy differences on monitoring accuracy. Dissertation. Tallahassee, FL: Florida State University.
Nelson, T. O., and Narens, L. (1990). Metamemory: a theoretical framework and new findings. Psychol. Learn. Motiv. 26, 125–173. doi: 10.1016/S0079-7421(08)60053-5
Nelson, T. O., and Narens, L. (1994). “Why investigate metacognition?” in Metacognition: Knowing about knowing. eds. J. Metcalfe and A. P. Shimamura (Cambridge, MA: The MIT Press), 1–25.
Ohtani, K., and Hisasaka, T. (2018). Beyond intelligence: a meta-analytic review of the relationship among metacognition, intelligence, and academic performance. Metacogn. Learn. 13, 179–212. doi: 10.1007/s11409-018-9183-8
Panadero, E., Klug, J., and Järvelä, S. (2016). Third wave of measurement in the self-regulated learning field: when measurement and intervention come hand in hand. Scand. J. Educ. Res. 60, 723–735. doi: 10.1080/00313831.2015.1066436
Pieschl, S. (2008). Metacognitive calibration—an extended conceptualization and potential applications. Metacogn. Learn. 4, 3–31. doi: 10.1007/s11409-008-9030-4
Pintrich, P. R. (2000). “The role of goal orientation in self-regulated learning” in Handbook of Self-Regulation. eds. M. Boekaerts, P. R. Pintrich, and M. Zeidner (San Diego, CA: Academic Press), 451–502.
Pintrich, P. R., Smith, D. A. F., Garcia, T., and McKeachie, W. J. (1991). A Manual for the Use of the Motivated Strategies for Learning Questionnaire (MSLQ). Ann Arbor, MI: University of Michigan.
Reder, L. M., and Schunn, C. D. (1996). “Metacognition does not imply awareness: strategy choice is governed by implicit learning and memory” in Implicit Memory and Metacognition. ed. L. M. Reder (New York, NY: Psychology Press), 45–78.
Roebers, C. M. (2002). Confidence judgments in children's and adults' event recall and suggestibility. Dev. Psychol. 38, 1052–1067. doi: 10.1037//0012-1649.38.6.1052
Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instr. Sci. 18, 119–144. doi: 10.1007/BF00117714
Schraw, G., and Dennison, R. S. (1994). Assessing metacognitive awareness. Contemp. Educ. Psychol. 19, 460–475. doi: 10.1006/ceps.1994.1033
Schuwirth, L., and van der Vleuten, C. (2011). Programmatic assessment: from assessment of learning to assessment for learning. Med. Teach. 33, 478–485. doi: 10.3109/0142159X.2011.565828
Segers, M., Dierick, S., and Dochy, F. (2001). Quality standards for new modes of assessment. An exploratory study of the consequential validity of the OverAll Test. Eur. J. Psychol. Educ. 16, 569–588. doi: 10.1007/BF03173198
Sewell, A., and St George, S. (2000). Developing efficacy beliefs in the classroom. J. Educ. Enq. 1, 58–71.
Siero, F., and van Oudenhoven, J. P. (1995). The effects of contingent feedback on perceived control and performance. Eur. J. Psychol. Educ. 10, 13–24. doi: 10.1007/BF03172792
Tock, J., and Moxley, J. (2017). A comprehensive reanalysis of the metacognitive self-regulation scale from the MSLQ. Metacogn. Learn. 12, 79–111. doi: 10.1007/s11409-016-9161-y
van de Pol, J., De Bruin, A. B. H., van Loon, M. H., and van Gog, T. (2019). Students’ and teachers’ monitoring and regulation of students’ text comprehension: effects of comprehension cue availability. Contemp. Educ. Psychol. 56, 236–249. doi: 10.1016/j.cedpsych.2019.02.001
Van der Linden, J., Van Schilt-Mol, T., Nieuwenhuis, L., and Van der Vleuten, C. P. M. (2021). Learning for a summative assessment: the relationship between students’ academic achievement and self-regulated learning. Open J. Soc. Sci. 9, 351–367. doi: 10.4236/jss.2021.910025
Van der Vleuten, C. P. M., Lindemann, I., and Schmidt, L. (2018). Programmatic assessment: the process, rationale and evidence for modern evaluation approaches in medical education. Med. J. Aust. 209, 386–388. doi: 10.5694/mja17.00926
Van der Vleuten, C. P. M., and Schuwirth, L. W. T. (2005). Assessing professional competence: from methods to programmes. Med. Educ. 39, 309–317. doi: 10.1111/j.1365-2929.2005.02094.x
Van Loon, M. (2014). Fostering monitoring and regulation of learning. Dissertation. Maastricht: Maastricht University.
Van Overschelde, J. P. (2008). “Metacognition: knowing about knowing,” in Handbook of Metamemory and Memory. eds. J. Dunlosky and R. A. Bjork (New York, NY: Psychology Press), 47–71.
Veenman, M. V. J. (2011). “Learning to self-monitor and self-regulate,” in Handbook of Research on Learning and Instruction. eds. R. E. Mayer and P. A. Alexander (London: Routledge), 197–218.
Virtanen, P., Nevgi, A., and Niemi, H. (2015). Self-regulation in higher education: students’ motivational, regulational and learning strategies, and their relationships to study success. Stud. Learn. Soc. 3, 20–34. doi: 10.2478/sls-2013-0004
Watling, C. J., and Lingard, L. (2012). Grounded theory in medical education research: AMEE guide no. 70. Med. Teach. 34, 850–861. doi: 10.3109/0142159X.2012.704439
Wolters, C. A., Won, S., and Hussain, M. (2017). Examining the relations of time management and procrastination within a model of self-regulated learning. Metacogn. Learn. 12, 381–399. doi: 10.1007/s11409-017-9174-1
Young, A., and Fry, J. (2012). Metacognitive awareness and academic achievement in college students. J. Scholarsh. Teach. Learn. 8, 1–10.
Zimmerman, B. J. (2008). Investigating self-regulation and motivation: historical background, methodological developments, and future prospects. Am. Educ. Res. J. 45, 166–183. doi: 10.3102/0002831207312909
Keywords: cues, monitoring, control of learning, assessment, metacognition
Citation: van der Linden JB, van Schilt-Mol TMML, Nieuwenhuis AFM and van der Vleuten CPM (2023) Perceived control decisions in preparation for a summative achievement test in higher education. Front. Educ. 7:1043238. doi: 10.3389/feduc.2022.1043238
Edited by: Robbert Smit, St. Gallen University of Teacher Education, Switzerland
Reviewed by: Peter Verkoeijen, Erasmus University Rotterdam, Netherlands; Madeleine Rohlin, Malmö University, Sweden
Copyright © 2023 van der Linden, van Schilt-Mol, Nieuwenhuis and van der Vleuten. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Jeroen van der Linden, J.vanderLinden@maastrichtuniversity.nl
†ORCID: Jeroen van der Linden https://orcid.org/0000-0001-5041-8972