The present field study compared open-book and closed-book testing in two parallel introductory university courses in cognitive psychology. The critical manipulation concerned seven lessons. In each of these lessons, all students received two to three questions about the content of the respective lesson. Half the participants (open-book group) were allowed to use their notes and the course materials, which had been distributed at the beginning of each class; the other half (closed-book group) were not allowed to use these materials. A surprise test conducted in the eighth week showed better results for the closed-book group. Six weeks later, the final module exam took place. A number of questions in this exam concerned the material taught during the critical seven lessons. On these questions, too, the closed-book group outperformed the open-book group. We discuss these results with respect to two possible explanations: retrieval practice and motivational differences.
In order to ensure long-term retention of information, students must move from relying on surface-level approaches that are seemingly effective in the short term to "building in" so-called "desirable difficulties," with the aim of achieving understanding and long-term retention of the subject matter. But how can students achieve this level of self-regulation when learning? Traditionally, research on learning strategy use relies on self-report questionnaires. As this method is accompanied by several drawbacks, we chose a qualitative, in-depth approach to inquire about students' strategies and to investigate how students successfully self-regulate their learning. In order to paint a picture of effective learning strategy use, focus groups were organized in which previously identified, effectively self-regulating students (N = 26) were asked to explain how they approach their learning. Using a constructivist grounded theory methodology, a model was constructed describing how effective strategy users manage their learning. In this model, students are driven by a personal learning goal, adopting a predominantly qualitative or quantitative approach to learning. While learning, students are continually engaged in active processing and self-monitoring. This process is guided by a constant balancing act: adhering to established study habits while maintaining a sufficient degree of flexibility to adapt to changes in the learning environment, assessment demands, and time limitations. Indeed, students reported using several strategies, some of which are traditionally regarded as "ineffective" (e.g., highlighting, rereading). However, they used them in a way that fit their learning situation. Implications are discussed for the incorporation of desirable difficulties in higher education.
Review of learned material is crucial for the learning process. One approach that promises to increase the effectiveness of reviewing during learning is to answer questions about the learning content rather than restudying the material (the testing effect). This effect is well established in laboratory experiments. However, existing research in educational contexts has often combined testing with additional didactic measures, which hampers the interpretation of testing effects. We aimed to examine the testing effect in its pure form by implementing a minimal-intervention design in a university lecture (N = 92). The last 10 min of each lecture session were used for reviewing the lecture content by answering short-answer questions, answering multiple-choice questions, or reading summarizing statements about core lecture content. Three unannounced criterial tests measured the retention of learning content at different times (1, 12, and 23 weeks after the last lecture). A positive testing effect emerged for short-answer questions that targeted information participants could retrieve from memory. This effect was independent of the time of test. The results indicated no testing effect for multiple-choice testing. These results suggest that short-answer testing, but not multiple-choice testing, may benefit learning in higher education contexts.
According to the concept of desirable difficulties, introducing difficulties during learning may sacrifice short-term performance in order to benefit long-term retention. We describe three types of desirable difficulty effects: testing, generation, and varied conditions of practice. The empirical literature indicates that desirable difficulty effects are not always obtained, and we suggest that cognitive load theory may explain many of these contradictory results. Many failures to obtain desirable difficulty effects may occur under conditions where working memory is already stressed by high-element-interactivity information. Under such conditions, the introduction of additional difficulties may be undesirable rather than desirable. Empirical evidence from diverse experiments is used to support this hypothesis.