
ORIGINAL RESEARCH article
Front. Educ., 24 February 2025
Sec. Higher Education
Volume 10 - 2025 | https://doi.org/10.3389/feduc.2025.1419680
Introduction: Instructional quality refers to classroom management, personal learning support and cognitive activation.
Methods: This study was conducted in nurses’ vocational education and training (VET). We asked nursing students (N = 496) to estimate the instructional quality of their classes (n = 52). Furthermore, we asked teachers (N = 52) to describe the learning objectives they aimed to achieve. We used online and paper-and-pencil questionnaires with validated scales to measure classroom management, personal learning support, cognitive activation, and connectivity. To analyze our data, we used descriptive statistics and ANOVA with Bonferroni and Tukey’s post hoc tests. We also applied structuring qualitative content analysis to the teachers’ descriptions of learning goals.
Results: Our results indicate that nursing students rated the instructional quality of their classes as quite high. When teachers aimed at higher-level learning objectives, students’ ratings of classroom management and personal learning support differed significantly.
Discussion: In VET it is important to coordinate the content, the way of instruction, and the way learners are supported in the learning process to achieve high instructional quality.
Instruction can be described as “the process of educating, starting from the stage of preparation to examination of the learning experience from students as the latter part of the process” (Triyono, 2011, p. 130). It involves not only presenting information to learners; the selection of materials, teachers’ flexibility and adaptability, and the use of resources (e.g., technology) are also important criteria that can determine meaningful learning for students (Jeet and Pant, 2023). Therefore, it can be assumed that “classroom learning is an active, cumulative, and social process” (Kunter and Voss, 2013, p. 97).
Studies have shown that “what teachers do in their classroom has a great impact on their students’ lives” (Blikstad-Balas et al., 2021, p. 9). The most prominent meta-analysis by Hattie (2009) indicates that teachers can make a difference in students’ achievements at school. The important factors that can influence students’ learning are included in the construct of instructional quality (Senden et al., 2022). Instructional quality can be understood as a multidimensional construct that combines the three variables of “cognitive activation, clarity of instruction, and supportive climate as regarded essential features” (Blömeke et al., 2016, p. 26).
Instructional quality has been extensively studied, with a primary focus on determining whether it is related to learners’ achievements (e.g., Blömeke et al., 2016; Kunter and Voss, 2013). Previous studies have also focused on the differentiation of instructional quality and its relation to different student groups (e.g., age or socioeconomic status) and different domains (Senden et al., 2022). Another research approach in studies on instructional quality focuses on the question of how instructional quality can be measured (e.g., Praetorius et al., 2012; Senden et al., 2022). In addition, the advantages and disadvantages of observations, self-estimation, and learners’ estimation have been discussed.
While certain subject areas (such as mathematics) have been extensively studied, less is known about instructional quality in vocational education and training (VET) (Warwas and Helm, 2018). VET aims to provide learners with theoretical knowledge, skills, and practical experiences that allow them to accomplish work tasks in a specific field. However, it also poses challenges for teachers and their professional development, because rapidly changing work environments demand highly competent school leavers, while new graduates often lack practice readiness (Masso et al., 2022). Therefore, instructional quality in VET relates to the competence of learners to accomplish job-specific tasks. Moreover, teachers in VET have to connect the theoretical knowledge necessary for a specific field with the practical experience that learners gain in their work, with the goal of preparing students for future challenges (Anselmann, 2023). Given the importance of VET in equipping students with practical skills, it is crucial to explore and understand the factors that influence instructional quality within this context. The aim of this study is to obtain insights into instructional quality in VET. The research questions are as follows:
1. How do nursing students estimate the quality of instruction in their VET?
2. Does their estimation differ depending on the duration of their VET, the learning objectives of the lessons, and their teachers’ assessment of the lessons?
Instructional quality attracts research interest for various reasons. Waxman and Padrón (2004) described how research on instructional quality can provide insights into instructional practices and into differences in instructional quality for different groups of students. These results can be used to improve teachers’ instructional competencies, for instance, through improved teacher education programs.
Research on instructional quality has focused on determining the general aspects through which instructional quality can be defined independent of the context, content, and domain in which the instruction is given (Helmke and Helmke, 2017). It can be defined as a process of identifying normative criteria that allow someone to evaluate the instruction process itself (Helmke, 2022). It can also be defined by its output. This definition follows the educational effectiveness paradigm, which focuses on the effectiveness of the instruction. According to this definition, instruction is effective when learners show the desired development (Praetorius et al., 2020).
In our study, instructional quality refers “to all teacher-related characteristics that produce favorable educational outcomes” (Kunter and Voss, 2013) and can be understood as the “degree to which instruction is effective, efficient, and engaging” (Mu et al., 2022). Instructional quality develops as a “social practice that is co-constructed by students and teachers around content” (Christ et al., 2022, p.1). Results from various studies have indicated that teaching quality is positively related to students’ learning (Praetorius et al., 2018).
Teaching effectiveness is a prominent research topic. It can be understood in terms of the “process-mediation-product-paradigm” (Schlesinger and Jentsch, 2016, p. 30), which describes the relationship between the learning opportunities a teacher provides to the learners and the learners’ use of these opportunities, which can lead to a learning product. How these learning opportunities are provided is described by instructional quality. The idea is that when learning opportunities are provided in the best manner possible, the learners are more likely to use them for their learning (Schlesinger and Jentsch, 2016).
Scheerens and Bosker (1997) showed three dimensions of instruction that can be understood as instructional quality. Based on this, Klieme (2006) built a model that summarizes these domain-independent dimensions of instructional quality. The three basic dimensions are (1) classroom management, (2) personal learning support, and (3) cognitive activation.
Classroom management focuses on the way learners can learn in a supportive environment. To build this environment, teachers should have the ability “to deliver well-structured and organized instruction as well as the ability to demonstrate effective student behavior management” (Burić and Kim, 2020, p. 5). This includes the way teachers effectively deal with interruptions caused by learners (Burić and Kim, 2020). “Effective classroom management is characterized by a structured and well-organized lesson with clear rules and routines” (Schlesinger and Jentsch, 2016, p. 30).
Personal learning support is based on the key aspects of the self-determination theory proposed by Deci and Ryan (1985). It refers to teachers’ “ability to demonstrate features of the teacher-student relationship” (Burić and Kim, 2020, p. 5) and involves giving learners “individual support provided by differentiation, the creation of a supportive learning climate with a good relationship between students and teacher” (Schlesinger and Jentsch, 2016, p. 31). This dimension includes using a positive error climate and providing constructive feedback (Burić and Kim, 2020). According to the literature, this dimension “enhances students’ well-being and learning motivation” (Burić and Kim, 2020, p. 5; Praetorius et al., 2018).
The third dimension of instructional quality, cognitive activation, fosters a “high level of students’ thinking” (Schlesinger and Jentsch, 2016, p. 31). It can be described as teachers’ “ability to engage students in higher order thinking skill and challenging tasks, foster in-depth understanding of the content, and stimulate explorations of concepts, ideas, and prior knowledge” (Burić and Kim, 2020, p. 6). This includes providing learners the opportunity to co-construct knowledge (Schlesinger and Jentsch, 2016).
Research on instructional quality has revealed different problems in defining instructional quality. First, there has been considerable discussion on how instructional quality can be measured. Schlesinger and Jentsch (2016) found that there “is little or no consistency in the conceptualization and nomination of subject-specific aspects.” (p. 29). For this measurement, observational instruments are generally used, which leads to methodological issues, such as the choice of lessons that are observed (Schlesinger and Jentsch, 2016). Another challenge is the inclusion of different perspectives. Learners’ perspectives on instruction and their estimation of instruction quality are important sources to measure aspects that foster learning. Learners can be seen as experts in their individual learning styles and in estimating which instruction helps them learn most effectively. Furthermore, learners can compare the different instructions they experience. Instructions can be estimated by many learners, which can lead to the generation of a valid database (Göllner et al., 2016).
The goal of this study is to determine how nursing students estimate the quality of instruction experienced in their VET.
Nursing students’ VET in Germany is a combination of formal learning, with theoretical training in vocational colleges and a practical part in different nursing organizations (Beil-Hildebrand and Smith, 2022). In the theoretical part, they acquire theoretical knowledge about nursing, medicine, and anatomy. All students work as nurses from the beginning of their VET, which gives them the opportunity to gain experience in nursing. An important aspect of instruction in VET is to achieve connectivity. Connectivity can be achieved using integrative learning processes that facilitate the transfer of theoretical knowledge to practice (Anselmann, 2023). By focusing on connectivity in instruction, the theory–practice gap can be reduced. Giving nursing students the opportunity to combine their practical experience with their theoretical knowledge can therefore be a further criterion for instructional quality in VET.
Therefore, in this study, we operationalized the dimensions of instructional quality as follows. To measure classroom management, we focused on teachers’ leadership and their ability to give clear instructions. To measure personal learning support, we included variables of motivation, error climate, student orientation, fit, and the use of different teaching methods. We focused on cognitive activation by including variables of activation, consolidation, and competency orientation. Figure 1 shows the operationalization of the dimensions of instructional quality in our study.
Students’ estimation is an important criterion in measuring instructional quality (Scherer et al., 2016). Bellens et al. (2015) stated that instructional quality depends not only on teacher characteristics but also on students’ characteristics. This means that instructional quality can be estimated with “meaningful interindividual differences” by students (Talić et al., 2022, p. 101950). Therefore, it must be taken into account that instructional quality can show “lesson-to-lesson variation” as well as “within-student” variation (Talić et al., 2022, p. 1). Lüdtke et al. (2009) explained that it is important “to describe individual differences in these perceptions” (Scherer et al., 2016, p. 2). Knowledge of the student characteristics that can make a difference in their estimation can help teachers design learning environments that fit their students and, through this, foster effectiveness (Talić et al., 2022). Because this study was conducted in the field of VET, we assumed that nursing students’ level of experience and knowledge, which can be measured by the year of their VET, can make a difference in their estimation.
Brophy and Good (1986) described instruction as the “orchestration of different behaviors adapted to various contexts.” A more recent description emphasizes the dynamic character of classroom learning: it can be assumed that “the classroom is a highly interactional and situational system” (Talić et al., 2022). Therefore, we also assumed that whether a lesson takes place as planned or not can influence students’ estimation of its quality. Furthermore, the learning objectives, as a characteristic of the lesson’s content, could make a difference.
Bloom’s (1956) taxonomy of learning objectives is one of the most prominent hierarchies that can help teachers define learning objectives, which describe “the skills and abilities that they desire their learners to master and demonstrate” (Adams, 2015, p. 152). Bloom’s taxonomy describes different dimensions of learning with different levels of cognitive skills required. The learning dimensions in which higher levels of cognitive skills are required “lead to deeper learning and transfer of knowledge and skills to a greater variety of tasks and contexts” (Adams, 2015, p. 152).
The taxonomy comprises six levels, beginning with knowledge and followed by comprehension, application, analysis, and synthesis, ending with evaluation. In the basic dimension of knowledge, learners can recall facts, while in the dimension of comprehension, learners reach the “lowest level of understanding” (Razzouk and Razzouk, 2008, p. 49). Application can be described as “the use of abstractions in particular and concrete situations” (Razzouk and Razzouk, 2008, p. 49). Analysis focuses on determining the relations between facts and parts of knowledge. Synthesis and evaluation are the highest levels of the taxonomy. In the dimension of synthesis, learners can bring “together all the elements and parts of the case study material to form a new whole” (Razzouk and Razzouk, 2008, p. 50). In the dimension of evaluation, learners can make “judgments about the value of the information in the case, or processes and methods” (Razzouk and Razzouk, 2008, p. 50). Research has shown that learners who attend courses designed according to Bloom’s taxonomy differ in their critical thinking skills from those who attend other courses (e.g., Gokhale, 1995). It can therefore be assumed that courses with different learning objectives can be estimated in different ways by learners.
The goal of this study is to determine how nursing students estimate the quality of instruction in their VET. Therefore, in this study, both perspectives on classroom learning were integrated. We asked the students to estimate the quality of instruction and the teachers to indicate what learning goals they were focusing on.
We conducted a panel study using an online questionnaire to collect data from two vocational colleges, involving 10 different cohorts. We contacted several vocational colleges to participate in our study; the selected colleges were comparable in terms of class sizes and teaching staff. Nursing students in Germany typically complete a 3-year VET program. During their studies, they undergo both theoretical training in subjects such as wound care, hygiene, and dementia, and practical training in various care settings, including acute care, geriatric care, and care services. Throughout their VET, nursing students work in these care organizations, where they are supervised by trained mentors.
In our study, 496 nursing students (N = 496) evaluated the instruction they experienced in 52 different classes. The participants were distributed as follows: 19.2% were in the first year of their VET, 48.0% were in the second year, and 32.9% were in the third year. On average, each class was evaluated by M = 9.5 students (SD = 4.8).
In addition to the student evaluation, the 52 teachers (N = 52) who taught these classes also provided ratings. They were asked to indicate whether the class proceeded as planned, as well as to describe the topic and the primary learning goals of the class. After each class, the nursing students received a link to an online questionnaire, while the teachers completed a paper-and-pencil version of a questionnaire. Each class was assigned a unique code, which allowed us to match the students’ and the teachers’ assessments for the same session.
Participation in this study was voluntary for both students and teachers. At the beginning of the questionnaires, the participants were informed that the data collection was anonymous and that neither the researcher nor the teachers would be able to trace individual responses. Ethical approval for this study was obtained from the ethics committee of the University of Education of Schwäbisch Gmünd.
The online questionnaire for the nursing students contained validated scales that measured the characteristics of instructional quality. Since many criteria regarding the instructional quality had to be assessed by the students, we ensured that the questionnaire was not too time-consuming to complete. This was particularly important to allow students to fill out the questionnaire immediately after each lesson without losing too much time.
To measure classroom management, we used scales from Steinert et al. (2003) on classroom management and clarity of instruction. An example item for the scale on classroom management is “There were a lot of disruptions during the class.” An example item for the scale measuring clarity of instruction is “Our teacher proceeds in a logical order in the lesson.” To measure personal learning support, we used a scale on motivation (Rakoczy et al., 2005) (example item: “Our teacher often makes the lessons exciting.”). Furthermore, we used a scale from Bürgermeister et al. (2011) that measures the error climate in classrooms, with items such as “In class I had the feeling that my teacher thought that making mistakes wasn’t a bad thing.”
We used a scale that measures student orientation and fit (Ditton and Merz, 2013) with items such as “Our teacher sets more difficult tasks for the better students.” To measure cognitive activation, we used a scale from Steinert et al. (2003) that measures consolidation with items such as “When we practice, we often apply what we have learned to other things.” Furthermore, we measured activation with a scale developed by Klieme et al. (2001) with items such as “I think my teacher asks questions in class that I have to think about.” We also used a scale measuring competence orientation (Steffens et al., 2008) with items such as “Before the teacher starts the lesson, he/she makes the goals clear to us.” Connectivity was measured with a scale from Anselmann (2023) with items such as “In class I can make a connection between the theory covered in class and my practical experiences.” As a control variable, we measured satisfaction with the instruction with a scale developed by von Saldern and Littig (1996) (Item example: “I usually find the lessons interesting”).
The questionnaire for the teachers included one dichotomous item and an open-ended question. First, we asked the teachers whether they thought that the class was held as planned (yes/no). Next, we asked the teachers, in an open-ended question, to describe the most important learning goals they wanted to achieve with their nursing students.
To analyze the data collected from the nursing students via the online questionnaire, we used descriptive statistics. We computed Cronbach’s alpha and estimated the means and standard deviations. Furthermore, we used correlation analysis and ANOVA. We used the Bonferroni post hoc test to estimate pairwise differences (Johnson and Christensen, 2008), and Tukey’s post hoc test to analyze differences between more than two groups.
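To make these quantitative steps concrete, the following sketch shows how such an analysis could be run in Python. It is an illustrative reconstruction, not the authors’ code: the file name, column names, and item names are hypothetical, and it only covers scale reliability, descriptives, a one-way ANOVA across VET years, and the Bonferroni and Tukey post hoc comparisons described above.

```python
# Illustrative re-analysis sketch (not the authors' code): Cronbach's alpha,
# a one-way ANOVA across VET years, and Bonferroni/Tukey post hoc tests.
# File and column names ("student_ratings.csv", "vet_year", "cognitive_activation",
# item columns) are hypothetical placeholders.
from itertools import combinations

import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd


def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for the item columns of one scale."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)


df = pd.read_csv("student_ratings.csv")  # one row per student rating

# Scale reliability and descriptives (as reported in Table 2 of the article).
activation_items = df[["activ_1", "activ_2", "activ_3"]]  # hypothetical item names
print("alpha =", round(cronbach_alpha(activation_items), 2))
print(df["cognitive_activation"].agg(["mean", "std"]))

# One-way ANOVA: does the rating differ by year of VET (1, 2, 3)?
groups = [g["cognitive_activation"].values for _, g in df.groupby("vet_year")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.3f}, p = {p_value:.3f}")

# Bonferroni post hoc: pairwise t-tests, p-values multiplied by the number of pairs.
pairs = list(combinations(sorted(df["vet_year"].unique()), 2))
for a, b in pairs:
    t, p = stats.ttest_ind(df.loc[df.vet_year == a, "cognitive_activation"],
                           df.loc[df.vet_year == b, "cognitive_activation"])
    print(f"year {a} vs {b}: p_bonferroni = {min(p * len(pairs), 1.0):.3f}")

# Tukey's HSD for all pairwise comparisons in one step.
print(pairwise_tukeyhsd(df["cognitive_activation"], df["vet_year"]))
```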
To analyze the data collected from the teachers with the paper-and-pencil questionnaire, we used structuring qualitative content analysis (Mayring, 2014). Structuring content analysis represents a systematic deductive approach. In this study, a category-assignment method was used: each level of Bloom’s learning taxonomy (Bloom, 1956) was defined and described. Specific terms, such as “list” and “facts,” were clearly identified as belonging to a specific category level. This allowed the participants’ responses to be clearly assigned to the corresponding levels.
Each level of Bloom’s learning taxonomy received a numeric value (starting with 1 for knowledge as the lowest level of cognitive skills and ending with 6 for evaluation as the highest level of cognitive skills). By using these numeric values, we could integrate the answers into the statistical analysis. Table 1 shows the category system.
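A minimal sketch of this deductive coding step is shown below, assuming short keyword lists as indicators for each taxonomy level. The indicator terms and the function name are hypothetical examples; in the study, the assignment was carried out manually by the researchers following Mayring’s procedure.

```python
# Illustrative sketch of the deductive category assignment (not the authors' coding tool):
# map teachers' free-text learning goals to Bloom's six levels via indicator terms,
# then use the numeric value (1 = knowledge ... 6 = evaluation) in the statistical analysis.
BLOOM_LEVELS = {
    1: ("knowledge", ["list", "name", "recall", "facts"]),
    2: ("comprehension", ["explain", "describe", "summarize"]),
    3: ("application", ["apply", "use", "demonstrate"]),
    4: ("analysis", ["analyze", "compare", "differentiate"]),
    5: ("synthesis", ["develop", "design", "plan"]),
    6: ("evaluation", ["evaluate", "judge", "justify"]),
}


def code_learning_goal(text: str) -> int:
    """Return the highest Bloom level whose indicator terms appear in the goal description."""
    text = text.lower()
    matched = [level for level, (_, terms) in BLOOM_LEVELS.items()
               if any(term in text for term in terms)]
    return max(matched) if matched else 1  # default to the lowest level if nothing matches


# Example: a goal phrased around listing facts is coded as level 1 (knowledge).
print(code_learning_goal("Students can list the facts about wound care"))  # -> 1
```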
The results of the descriptive statistics indicated acceptable Cronbach’s alpha values for all scales, ranging from α = 0.86 to 0.90. Table 2 shows Cronbach’s alpha, means, and standard deviations for all scales. Personal learning support (M = 3.52; SD = 0.64) was rated the lowest, while cognitive activation (M = 3.98; SD = 0.68) was rated the highest.
We used a qualitative content analysis to obtain insights into the teachers’ data. In the questionnaire, the teachers described the main learning goal they planned to achieve in the class. Their answers were collected and assigned to six different taxonomy levels. Most learning goals (57.7%) could be assigned to the lowest taxonomy level. No class was planned to achieve the highest learning goal of evaluation. Table 3 shows quotes and their categorization.
We also asked the teachers to indicate if they thought that the classes were held as planned. The results indicated that 43 classes (82.7%) were held as planned, whereas nine classes (17.3%) were not.
In the last step, we analyzed whether learners in different years of their VET estimated the classes differently. The results of the ANOVA showed no significant differences for the dimensions of classroom management (F = 2.070; p = 0.127), personal learning support (F = 2.072; p = 0.127), and connectivity (F = 1.170; p = 0.311). However, there was a significant difference in the dimension of cognitive activation (F = 7.112; p < 0.001). We used the Bonferroni post hoc test to determine which years’ learners differed in their estimation. The results indicated that learners in the first year (M = 3.60; SD = 0.82) and those in their third year (M = 3.99; SD = 0.67) differed significantly: learners in the third year estimated cognitive activation to be higher than learners in their first year.
Furthermore, we determined whether the classes that were held in the way the teacher planned were estimated higher on the different dimensions of instructional quality. Our results indicated that classes were estimated to be different on the dimensions of classroom management (F = 2.608; p = 0.024), personal learning support (F = 3.465; p = 0.004), and connectivity (F = 2.591; p = 0.025). There were no significant results for cognitive activation (F = 2.157; p = 0.058). When classes were held as planned, classroom management was estimated with M = 3.99 (SD = 0.68), personal learning support with M = 3.53 (SD = 0.56), and connectivity with M = 3.76 (SD = 0.90). When classes were not held as planned, classroom management was estimated to be lower with M = 3.94 (SD = 0.70), personal learning support with M = 3.49 (SD = 0.56), and connectivity with M = 3.62 (SD = 0.81).
We also determined whether the learners estimated classes differently depending on the level of the learning goals the classes aimed to achieve (Table 4). Our results showed no significant differences regarding cognitive activation (F = 2.157; p = 0.06) and connectivity (F = 2.591; p = 0.25). Significant differences were found for classroom management (F = 2.608; p = 0.02) and personal learning support (F = 3.465; p = 0.004). Tukey’s post hoc test showed that, for classroom management, classes with learning goals on the second level (M = 4.23; SD = 0.58) and the fourth level (M = 3.69; SD = 0.54) differed significantly. For personal learning support, classes with learning goals on level 1 (M = 3.51; SD = 0.66) differed significantly from those with learning goals on level 2 (M = 3.81; SD = 0.51).
The results of our study show that nursing students rate the instructional quality of their classes as quite high. In particular, classroom management and cognitive activation are rated quite high. Nursing students who are further along in their VET estimate cognitive activation to be higher than those at an earlier stage.
In this study, we also included the perspectives of teachers. Many studies on instructional quality assume that classes are held as planned. In practice, however, teachers have to adapt flexibly to frequently changing circumstances, which includes situations in which classes cannot be held as planned. In such cases, the nursing students estimated classroom management, personal learning support, and connectivity to be lower than for classes that were held as planned. We were also interested in determining whether the content of the learning objectives influenced nursing students’ estimation of instructional quality. Our results showed that, compared to classes that focused on higher-level learning objectives, classes with lower-level learning objectives were estimated higher on classroom management and lower on personal learning support. These results suggest that when teachers aim to achieve lower-level learning objectives, they are more involved in classroom management than in personal learning support.
This study provides insights into various determinants that could influence learners’ estimation of instructional quality. Our results are consistent with current research on instructional quality at other educational levels, such as studies focusing on secondary education, as well as in other disciplines such as mathematics. Wisniewski et al. (2020) showed that students’ perception of instructional quality is positively related to teachers’ estimation. In addition, when the teachers thought that their classes were not held as planned, students rated them lower in terms of instructional quality. Furthermore, König et al. (2021) showed that classroom management is an important indicator for the estimation of instructional quality. Other studies, such as Seidel and Shavelson (2007), showed that students’ perception of instructional quality is more accurate than their teachers’ estimation. It can be assumed that students’ estimation of instructional quality is important and informative for their teachers (Wisniewski et al., 2020). Although nursing VET is a highly specialized field with a unique structure in Germany, our findings are still relevant for other areas of vocational education and training. International research on nursing education has shown that nursing educators “must find innovative teaching strategies to effectively prepare new graduates for entering the workforce” (Robinson and Dearmon, 2013, p. 203). Instructional quality, particularly the connectivity between theory and practice, is a key goal in VET that all educators aim to achieve. Integrative learning processes support learners in establishing this connection by combining theoretical, practical, self-regulative, and socio-cultural knowledge (Anselmann, 2023). To create learning environments that meet this requirement, teachers need appropriate didactic models and structured planning. This is also reflected in the results of this study, where learners rated connectivity higher when the teaching aligned with the teacher’s planning.
Results showed that nursing students in later years of their VET estimate cognitive activation higher. These results could be explained by theories of expertise such as Dreyfus and Dreyfus’s (1986) model of skill acquisition. When nursing students start their VET, they start as novices. With “more experience and with deliberate practice with graduated challenges and feedback to correct errors” (Basu, 2020, p. 1), the novice can recognize rules and standards in his or her field. This development of mastery can also be seen as an increase in cognitive development. It can be assumed that students who have reached a higher level of mastery are more likely to be challenged by higher-order learning objectives such as synthesis or evaluation (Basu, 2020).
The limitations of our study concern the teacher assessment of instructional quality. We asked the teachers to indicate only two characteristics of their instruction (whether it proceeded as planned and the learning objectives), which is far less detailed than what the students indicated. The potential bias inherent in self-reported data should be considered as a limitation, especially since teachers evaluated their own teaching performance. Another limitation is that only individual lessons were included in the study, which may not fully capture broader teaching practices.
Therefore, future research should incorporate longitudinal studies to assess instructional quality over an extended period, providing insights into the long-term impact on students’ perceptions of their vocational education and training. Additionally, comparative studies across different VET fields would be valuable to identify both commonalities and differences in teaching practices and students’ assessments, offering a more comprehensive understanding of instructional quality in VET.
The implications of our results concern research on instructional quality in VET. While there is considerable research on instructional quality in other subject areas, VET seems to have been overlooked. Especially in this area, where theoretical knowledge is immediately applied in a real work context, it is necessary to gain more insights into how instructional quality is related, for instance, to later work results. Our results indicate that students’ estimation of instructional quality depends on the learning objective. This leads to the assumption that in VET, it is necessary to adapt instruction to its content. Especially regarding teacher support, it is necessary to coordinate the content, the instruction, and the way students are supported, depending on the objectives desired in the instruction. Further implications concern teacher education. Instructional quality is significantly influenced by how teachers behave and plan their lessons. Therefore, it should be an integral part of their training to recognize and reflect on which methods and strategies help them achieve this. Furthermore, teachers are also involved in curriculum design, and through appropriate structures in VET, they can ensure better connectivity of theory and practice.
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
The studies involving humans were approved by the Ethics Committee University of Education Schwäbisch Gmünd. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.
VA: Conceptualization, Formal analysis, Investigation, Methodology, Writing – original draft, Writing – review and editing. JR: Writing – review and editing. LT: Writing – review and editing.
The authors declare that no financial support was received for the research, authorship, and/or publication of this article.
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Adams, N. E. (2015). Bloom’s taxonomy of cognitive learning objectives. J. Med. Library Assoc. 103:152. doi: 10.3163/1536-5050.103.3.010
Anselmann, V. (2023). Connectivity between education and work in nursing education: Validation of an instrument. Teach. Learn. Nurs. 18, 299–303. doi: 10.1016/j.teln.2022.06.008
Basu, A. (2020). How to be an expert in practically anything using heuristics, Bloom’s taxonomy, Dreyfus model, and building rubrics for mastery: Case of epidemiology and mountain bike riding. Qeios doi: 10.32388/BTH202
Beil-Hildebrand, M. B., and Smith, H. B. (2022). Comparative analysis of advanced practice nursing: Contextual and historical influences in North American and German-Speaking European Countries. Policy Politics Nurs. Pract. 23, 162–174. doi: 10.1177/15271544221105032
Bellens, K., Van Damme, J., Van Den Noortgate, W., Went, H., and Nilsen, T. (2019). Instructional quality: Catalyst or pitfall in educational systems’ aim for high achievement and equity? An answer based on multilevel SEM analyses of TIMSS 2015 data in Flanders (Belgium), Germany, and Norway. Large Scale Assess Educ. 7:1. doi: 10.1186/s40536-019-0069-2
Blikstad-Balas, M., Tengberg, M., and Klette, K. (2021). “Why – and how – should we measure instructional quality?,” in Ways of Analyzing Teaching Quality: Potentials and Pitfalls, eds M. Blikstad Balas, K. Klette, and M. Tengberg (Oslo: Scandinavian University Press), 9–20. doi: 10.18261/9788215045054-2021-00
Blömeke, S., Olsen, R. V., and Suhl, U. (2016). Relation of student achievement to the quality of their teachers and instructional quality. Teach. Q. Instruct. Q. Stud. Outcomes 2, 21–50.
Bloom, B. S. (1956). Taxonomy of Educational Objectives Handbook 1: Cognitive Domain. Philadelphia: David McKay Company, Inc.
Brophy, J., and Good, T. L. (1986). “Teacher behavior and student achievement,” in Handbook of Research on Teaching, 3rd Edn, ed. M. C. Wittrock (New York: Macmillan), 328–375.
Bürgermeister, A., Kampa, M., Rakoczy, K., Harks, B., Besser, M., Klieme, E., et al. (2011). Dokumentation der Befragungsinstrumente des Laborexperimentes im Projekt “Conditions and Consequences of Classroom Assessment (Co2CA). [Documentation of the Survey Instruments of the Laboratory Experiment in the Project ‘Conditions and Consequences of Classroom Assessment’ (Co2CA)]. Frankfurt am Main: DIPF.
Burić, I., and Kim, L. E. (2020). Teacher self-efficacy, instructional quality, and student motivational beliefs: An analysis using multilevel structural equation modeling. Learn. Instruct. 66:101302. doi: 10.1016/j.learninstruc.2019.101302
Christ, A. A., Capon-Sieber, V., Grob, U., and Praetorius, A. K. (2022). Learning processes and their mediating role between teaching quality and student achievement: A systematic review. Stud. Educ. Eval. 75:101209. doi: 10.1016/j.stueduc.2022.101209
Deci, E. L., and Ryan, R. M. (1985). The general causality orientations scale: Self-determination in personality. J. Res. Pers. 19, 109–134. doi: 10.1016/0092-6566(85)90023-6
Ditton, H., and Merz, D. (2013). QuaSSU – QualitätsSicherung in Schule und Unterricht – Erhebungszeitpunkt 1 (Skalenkollektion) [QuaSSU – Quality Assurance in School and Teaching – Survey Time Point 1 (Scale Collection)]. Version 1.0. Frankfurt am Main: DIPF | Leibniz Institute for Research and Information in Education.
Gokhale, A. A. (1995). Collaborative learning enhances critical thinking. J. Technol. Educ. 7, 22–30. doi: 10.1007/978-1-4419-1428-6_910
Göllner, R., Wagner, W., Klieme, E., Lüdtke, O., Nagengast, B., and Trautwein, U. (2016). “Erfassung der Unterrichtsqualität mithilfe von Schülerurteilen: Chancen, Grenzen und Forschungsperspektiven [Capturing teaching quality with students’ ratings: Chances and perspectives],” in Forschungsvorhaben in Ankopplung an Large-Scale-Assessments (S. 63–82), ed. Bundesministerium für Bildung und Forschung (Berlin: Bundesministerium für Bildung und Forschung). doi: 10.25656/01:12674
Hattie, J. (2009). The black box of tertiary assessment: An impending revolution. Tertiary Assess. High. Educ. Stud. Outcomes Policy Pract. Res. 259:275.
Helmke, A. (2022). Unterrichtsqualität und Professionalisierung: Diagnostik von Lehr-Lern-Prozessen und evidenzbasierte Unterrichtsentwicklung [Instruction quality and professionalization: Diagnostics of teaching-learning processes and evidence-based instructional development]. New York: KALLMEYER.
Helmke, A., and Helmke, T. (2017). “Unterrichtsdiagnostik als Ausgangspunkt für Unterrichtsentwicklung [Instructional diagnostics as a starting point for instructional development],” in Potenzialentwicklung. Begabungsförderung. Bildung der Vielfalt: Beiträge aus der Begabungsforschung (S. 69–84), eds C. Fischer, C. Fischer-Ontrup, F. Käpnick, F. J. Mönks, N. Neuber, and C. Solzbacher (Münster: Waxmann Verlag GmbH). doi: 10.25656/01:12674
Jeet, G., and Pant, S. (2023). Creating joyful experiences for enhancing meaningful learning and integrating 21st century skills. Int. J. Current Sci. Res. Rev. 6, 900–903. doi: 10.47191/ijcsrr/V6-i2-05
Johnson, B., and Christensen, L. B. (2008). Educational Research: Quantitative, Qualitative, and Mixed Approaches. Vereinigtes Königreich: SAGE Publications.
Klieme, E. (2006). Empirische Unterrichtsforschung: Aktuelle Entwicklungen und Befunde [Empirical research on teaching: Current developments and findings]. Zeitschrift für Pädagogik 52, 765–790.
Klieme, E., Schümer, G., and Knoll, S. (2001). “Mathematikunterricht in der Sekundarstufe I: Aufgabenkultur und Unterrichtsgestaltung [Teaching in Mathematics in secondary level 1: Task culture and instructional design],” in TIMSS - Impulse für Schule und Unterricht (S. 123–144), ed. Bundesministerium für Bildung und Forschung (München: Medienhaus Biering).
König, J., Blömeke, S., Jentsch, A., Schlesinger, L., Nehls, C. F., Musekamp, F., et al. (2021). The links between pedagogical competence, instructional quality, and mathematics achievement in the lower secondary classroom. Educ. Stud. Math. 107, 189–212. doi: 10.1007/s10649-020-10021-0
Kunter, M., and Voss, T. (2013). The model of instructional quality in COACTIV: A multicriteria analysis. J. Educ. Psychol. 105:805.
Lüdtke, O., Robitzsch, A., Trautwein, U., and Kunter, M. (2009). Assessing the impact of learning environments: How to use student ratings of classroom or school characteristics in multilevel modeling. Contemp. Educ. Psychol. 34, 77–88. doi: 10.1016/j.cedpsych.2008.12.001
Masso, M., Sim, J., Halcomb, E., and Thompson, C. (2022). Practice readiness of new graduate nurses and factors influencing practice readiness: A scoping review of reviews. Int. J. Nurs. Stud. 129:104208. doi: 10.1016/j.ijnurstu.2022.104208
Mayring, P. (2014). Qualitative Inhaltsanalyse: Grundlagen und Techniken [Qualitative content analysis: Theoretical foundation, basic procedures and software solution] (12. Aufl.). Weinheim: Beltz Verlag.
Mu, J., Bayrak, A., and Ufer, S. (2022). Conceptualizing and measuring instructional quality in mathematics education: A systematic literature review. Front. Educ. 7:994739. doi: 10.3389/feduc.2022.994739
Praetorius, A. K., Herrmann, C., Gerlach, E., Zülsdorf-Kersting, M., Heinitz, B., and Nehring, A. (2020). Unterrichtsqualität in den Fachdidaktiken im deutschsprachigen Raum–zwischen Generik und Fachspezifik [Instructional quality in subject-specific didactics in german-speaking countries – between generic and subject-specific aspects]. Unterrichtswissenschaft Zeitschrift Lernforschung 48, 409–446.
Praetorius, A.-K., Klieme, E., Herbert, B., and Pinger, P. (2018). Generic dimensions of teaching quality: The German framework of Three Basic Dimensions. ZDM - Math. Educ. 50, 407–426.
Praetorius, A.-K., Lenske, G., and Helmke, A. (2012). Observer ratings of instructional quality: Do they fulfill what they promise? Learn. Instruction 22, 387–400. doi: 10.1016/j.learninstruc.2012.03.002
Rakoczy, K., Buff, A., and Lipowsky, F. (2005). Dokumentation der Erhebungs- und Auswertungsinstrumente zur schweizerisch-deutschen Videostudie [Documentation of survey and evaluation instruments for the Swiss-German video study]. Unterrichtsqualität, Lernverhalten und mathematisches Verständnis. 1. Befragungsinstrumente. Frankfurt: GFPF.
Razzouk, N. Y., and Razzouk, J. N. (2008). Analysis in teaching with cases: A revisit to blooms taxonomy of learning objectives. Coll. Teach. Methods Styles J. 4, 49–56. doi: 10.19030/ctms.v4i1.5049
Robinson, B. K., and Dearmon, V. (2013). Evidence-based nursing education: Effective use of instructional design and simulated learning environments to enhance knowledge transfer in undergraduate nursing students. J. Professional Nurs. 29, 203–209.
Scheerens, J., and Bosker, R. J. (1997). The Foundations of Educational Effectiveness. Oxford: Elsevier Science Ltd.
Scherer, R., Nilsen, T., and Jansen, M. (2016). Evaluating individual students’ perceptions of instructional quality: An investigation of their factor structure, measurement invariance, and relations to educational outcomes. Front. Psychol. 7:110. doi: 10.3389/fpsyg.2016.00110
Schlesinger, L., and Jentsch, A. (2016). Theoretical and methodological challenges in measuring instructional quality in mathematics education using classroom observations. ZDM 48, 29–40. doi: 10.1007/s11858-016-0765-0
Seidel, T., and Shavelson, R. J. (2007). Teaching effectiveness research in the past decade: The role of theory and research design in disentangling meta-analysis results. Rev. Educ. Res. 77, 454–499. doi: 10.3102/0034654307310317
Senden, B., Nilsen, T., and Blömeke, S. (2022). “Instructional quality: A review of conceptualizations, measurement approaches, and research findings,” in Ways of Analyzing Teaching Quality: Potentials and Pitfalls, eds M. Blikstad-Balas, K. Klette, and M. Tengberg (Oslo: Scandinavian University Press), 140–172. doi: 10.18261/9788215045054-2021-05
Steffens, U., Benisch, E., Brömer, B., Diel, E., Höfer, D., Knab, J., et al. (2008). Hessischer Referenzrahmen Schulqualität: Qualitätsbereiche, Qualitätsdimensionen und Qualitätskriterien [Hessian Reference Framework for School Quality: Quality Areas, Quality Dimensions, and Quality Criteria]. Wiesbaden, Germany: Institut für Qualitätsentwicklung.
Steinert, B., Gerecht, M., Klieme, E., and Döbrich, P. (2003). Skalen zur Schulqualität: Dokumentation der Erhebungsinstrumente. Arbeitsplatzuntersuchung (APU), Pädagogische Entwicklungsbilanzen (PEB) [Scales for school quality: Documentation of survey instruments]. Frankfurt: GFPF.
Talić, I., Scherer, R., Marsh, H. W., Greiff, S., Möller, J., and Niepel, C. (2022). Uncovering everyday dynamics in students’ perceptions of instructional quality with experience sampling. Learn. Instruct. 81:101594. doi: 10.1016/j.learninstruc.2022.101594
Triyono, M. B. (2011). “Enhancing the learning quality through implementing instructional design for vocational education,” in Proceedings of the International Conference on Vocational Education and Training, 129–134.
von Saldern, M., and Littig, K. E. (1996). Landauer Skalen zum Sozialklima: 4.-13. Klassen; Beiheft mit Anleitung und Normentabellen [Landauer scales for social climate: Grades 4–13; Supplement with instructions and norm tables]. LASSO 4–13. Beltz: Weinheim.
Warwas, J., and Helm, C. (2018). Professional learning communities among vocational school teachers: Profiles and relations with instructional quality. Teach. Teach. Educ. 73, 43–55. doi: 10.1016/j.tate.2018.03.012
Waxman, H. C., and Padrón, Y. N. (2004). “The uses of the Classroom Observation Schedule to improve classroom instruction,” in Observational research in U.S. classrooms: New approaches for understanding cultural and linguistic diversity (S. 72–96), eds H. C. Waxman, R. G. Tharp, & R. S. Hilberg (Cambridge University Press).
Keywords: instructional quality, vocational education and training, nursing education, students’ estimation of instructional quality, connectivity
Citation: Anselmann V, Roggenstein J and Thudium L (2025) Instructional quality in nursing vocational education and training. Front. Educ. 10:1419680. doi: 10.3389/feduc.2025.1419680
Received: 03 May 2024; Accepted: 06 February 2025;
Published: 24 February 2025.
Edited by: David Rodriguez-Gomez, Universitat Autònoma de Barcelona, Spain
Reviewed by: Shailesh Tripathi, Rajendra Institute of Medical Sciences, India
Copyright © 2025 Anselmann, Roggenstein and Thudium. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Veronika Anselmann, veronika.anselmann@ph-gmuend.de
†ORCID: Veronika Anselmann, orcid.org/0000-0001-5246-442X