
ORIGINAL RESEARCH article

Front. Educ., 10 November 2022
Sec. Assessment, Testing and Applied Measurement

Reduction in final year medical students’ knowledge during the COVID-19 pandemic: Insights from an interinstitutional progress test

Pedro Tadao Hamamoto Filho1*, Dario Cecilio-Fernandes2, Luiz Fernando Norcia1, John Sandars3, M. Brownell Anderson4 and Angélica Maria Bicudo2
  • 1Botucatu Medical School, Universidade Estadual Paulista (UNESP), Botucatu, Brazil
  • 2School of Medical Sciences, University of Campinas (UNICAMP), Campinas, Brazil
  • 3Medical School, Edge Hill University, Ormskirk, United Kingdom
  • 4Escola de Medicina, University of Minho, Minho, Portugal

There has been little information about how the COVID-19 pandemic has impacted medical students’ knowledge acquisition. The aim of this study was to identify the impact of the COVID-19 pandemic on medical students’ knowledge acquisition by comparing students’ performance on two Progress Test exams administered in 2019 (pre-pandemic) and 2020 (during the pandemic). We included data from 1,491 students at two medical schools in Brazil. Both schools had interrupted preclinical classes and clinical clerkship rotations in March 2020 but resumed remote preclinical classes with online activities within 1 month of the interruption and clerkship rotations within 5 to 6 months of the interruption. We analyzed the data with the Rasch model from Item Response Theory to calibrate the difficulty of the two exams and calculated the performance of the students, comparing the differences in mean knowledge for each year of study and between the two cohorts. We found that students’ knowledge in the 2019 cohort was higher than in the 2020 cohort, except in the second year. Also, the students did not show any increase in knowledge between 2019 and 2020 in the clerkship years. It appears that the pandemic significantly impaired the knowledge acquisition of medical students, mainly in the clerkship years, where practical activities are the central part of training. This is of special concern in low- and middle-income countries where graduated medical doctors are allowed to practice without further training or required continuing professional development.

Introduction

The COVID-19 pandemic dramatically changed how medical schools provided teaching, especially for clinical training. Early in the pandemic, some schools stopped all preclinical practical activities and transitioned to online teaching, others maintained the clinical clerkship rotations provided that safety measures were incorporated into clinical practice, and others interrupted all preclinical and clinical teaching activities until the situation became clearer (Ahmed et al., 2020; Rose, 2020; Theoret and Ming, 2020). Consequently, there has been major concern regarding the impact of these various approaches on medical students’ learning and, ultimately, on doctors’ future professional competence due to their restricted clinical learning experiences (Lucey and Johnston, 2020). At the same time, medical schools faced the challenge of assessing students’ learning, especially with the application of technology, with which most low- and middle-income countries had little experience (Alsoufi et al., 2020; Cecilio-Fernandes et al., 2020; Dost et al., 2020; Bączek et al., 2021).

Despite the major changes in medical school teaching during the pandemic, there has been little information about the impact on students’ knowledge acquisition of the transition from face-to-face to online activities and the suspension or reduction of clinical training. Some high-stakes tests that could have provided important information, such as the United States Medical Licensing Examination Step 2 Clinical Skills examination, were canceled (Hammoud et al., 2020), and there have been concerns about the knowledge and clinical competence of graduating students, including their readiness for practice (Lazarus et al., 2021). There have been several studies about assessment, of both knowledge and clinical skills, during the pandemic, but the focus has been limited to specific disciplines (Daniel et al., 2021). For example, one study with a small sample of students focused on surgery and identified a decrease in National Board of Medical Examiners’ examination scores, but this difference was not significant (Kronenfeld et al., 2020).

Progress Tests are frequently used in medical education to provide repeated assessments that longitudinally measure students’ knowledge acquisition over time (Schuwirth and van der Vleuten, 2012). An important feature of Progress Tests is that they contain questions that focus on graduate-level knowledge and also cover a broad range of medical knowledge domains (Schuwirth and van der Vleuten, 2012). Progress Tests offer the opportunity for comparisons both within and across schools, and also for monitoring curriculum changes within medical schools, since Progress Tests are designed to assess the knowledge expected at graduation irrespective of the curriculum design (Muijtjens et al., 2008; Schmidmaier et al., 2010; Schuwirth and van der Vleuten, 2012). Progress Tests also provide information about the longitudinal progress of an individual student or a cohort of students over the duration of the course. Importantly, a lack of improvement in students’ scores may suggest inadequate academic training (Given et al., 2016).

The aim of the study was to identify the impact of the COVID-19 pandemic on medical students’ knowledge acquisition by comparing the students’ performance on two Progress Test exams administered in 2019 (pre-pandemic) and 2020 (during the pandemic). To our knowledge, there have been no previous similar studies.

Materials and methods

Settings

The study was conducted at two public medical schools, Universidade Estadual Paulista (UNESP) and Universidade Estadual de Campinas (UNICAMP), which are situated in São Paulo State, Brazil. Both medical schools have similar six-year curricula: preclinical, with basic sciences in the 1st and 2nd years and applied sciences in the 3rd and 4th years, and clinical, with clerkship rotations in the 5th and 6th years. Clerkship rotations are organized in five areas: internal medicine, pediatrics, surgery, obstetrics and gynecology, and public health (although the distribution of time is not the same across the areas). Clinical training is provided at primary care centers, general hospitals, clinic hospitals (with specialized wards and outpatient clinics), and emergency units.

Both schools had to interrupt preclinical classes and clinical clerkship rotations in March 2020 because of the COVID-19 pandemic. Remote teaching for the preclinical years started with online activities within 1 month after the interruption. Clerkship rotations resumed between August and September 2020, respecting the safety recommendations for social distancing and personal protective equipment. Face-to-face clinical training opportunities were significantly limited during the pandemic, as the volume of hospitalized patients with non-COVID-19 diagnoses sharply decreased and outpatient clinic activities were reduced or stopped. In addition, grand rounds for case-based discussions were changed to online formats. At UNESP, face-to-face preclinical teaching for the 1st to 4th years resumed only in 2021. At UNICAMP, face-to-face preclinical teaching for the 3rd and 4th years resumed between September and October 2020, whereas teaching for students in the first 2 years resumed in 2021.

Progress test

Since 2005, both medical schools had been using a Progress Test for all preclinical and clinical students to provide an annual formative assessment, with the students’ performance not being considered in progression decisions. The Progress Test is developed and administered once a year by a consortium that includes eight other public medical schools (Bicudo et al., 2019). The consortium continually revises the annual Progress Test with new items that are aligned to a fixed blueprint covering six content areas: basic sciences, internal medicine, pediatrics, surgery, obstetrics and gynecology, and public health (20 items per area, with a total of 120 items). All of the items are multiple-choice questions with four options and a single correct answer; the items are clinical vignette-based, with the intention to assess applied knowledge rather than knowledge recall (Hamamoto Filho et al., 2020). Unlike in 2019, the 2020 cohort of students took a computerized Progress Test delivered through a secure exam browser instead of the previous paper-based question booklet.
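As a simple illustration, the fixed blueprint described above can be represented as a small data structure (a minimal sketch only; the consortium’s actual item-banking tools are not described here):

```python
# Minimal sketch of the exam blueprint described in the text:
# six content areas, 20 four-option MCQs each, 120 items in total.
BLUEPRINT = {
    "basic sciences": 20,
    "internal medicine": 20,
    "pediatrics": 20,
    "surgery": 20,
    "obstetrics and gynecology": 20,
    "public health": 20,
}

assert sum(BLUEPRINT.values()) == 120  # one annual exam
```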

Design

Data from 1,491 students (639 from UNESP and 852 from UNICAMP) were eligible for inclusion. Students who did not take the Progress Test were excluded (81 in 2019 and 127 in 2020). The Progress Test data were compared between the cohorts of the 2019 and 2020 calendar years at both UNESP and UNICAMP (Figure 1).

Figure 1. Comparisons performed. Black arrows: point comparisons between 2019 and 2020 for each academic year. Grey arrows: comparisons of the same cohort across subsequent academic years.

Two items from the 2020 exam were excluded from the analysis because of errors in the formatting of the questions.

Data analysis

We used Item Response Theory with the Rasch model to analyze the data (Linacre, 1994; De Champlain, 2010). The Rasch model requires two assumptions: unidimensionality and local independence. Unidimensionality refers to the property of a test to measure only one construct (in this case, medical knowledge). Local independence refers to the extent to which items are related to each other beyond what the measured construct explains (Downing, 2003; Baghaei, 2008; Christensen et al., 2017).
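To make the model concrete, the sketch below (in Python, with hypothetical item difficulties and responses; the study itself used Winsteps) shows the dichotomous Rasch probability and a maximum-likelihood ability estimate for a single student given already calibrated item difficulties.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def rasch_prob(theta, difficulties):
    """Rasch model: P(correct) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + np.exp(-(theta - difficulties)))

def estimate_ability(responses, difficulties):
    """Maximum-likelihood ability (in logits) for one student's 0/1 responses,
    given item difficulties calibrated on the same logit scale."""
    def neg_log_lik(theta):
        p = rasch_prob(theta, difficulties)
        return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    return minimize_scalar(neg_log_lik, bounds=(-6, 6), method="bounded").x

# Toy example with assumed difficulties (not values from the study).
b = np.array([-1.0, -0.5, 0.0, 0.8, 1.5])
x = np.array([1, 1, 1, 0, 0])
print(f"estimated ability: {estimate_ability(x, b):.2f} logits")
```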

We divided the analysis into three phases: in the first phase, we investigated whether the data met the two Rasch assumptions; in the second phase, we performed a linking and equating procedure to make the two tests, which had different difficulties, comparable to each other (calibration of the exams); in the third phase, we calculated the performance of the students and compared the differences in mean knowledge for each year of study and between the two cohorts (2019 and 2020).

First phase: Preliminary analysis

Unidimensionality was tested using principal-component analysis of residuals and a fit-only approach (Tennant and Pallant, 2006). The optimal infit and outfit value is 1.00, with an acceptable range of 0.50 to 1.50 (Wright and Linacre, 1994; Bond and Fox, 2001). When the value is higher than 2.00, the item is considered a threat to validity (Wright and Linacre, 1994) and exclusion is recommended. We also calculated the root mean square, with a value below 0.05 indicating a unidimensional instrument (Schilling, 2007). Compliance with the local independence assumption was verified according to the correlation coefficients between item residuals; a correlation lower than 0.7 between items means that the assumption of local independence holds (Downing, 2003; Baghaei, 2008; Christensen et al., 2017).
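The following sketch illustrates, under an assumed data layout (a persons-by-items 0/1 matrix with calibrated person and item parameters; this is not the authors’ Winsteps workflow), how standardized residuals can feed both the principal-component check of unidimensionality and the residual-correlation (Yen’s Q3) check of local independence.

```python
import numpy as np

def expected_scores(theta, b):
    """Persons x items matrix of Rasch-expected probabilities."""
    return 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))

def standardized_residuals(X, theta, b):
    """(Observed - expected) / sqrt(variance) for a 0/1 response matrix X."""
    p = expected_scores(theta, b)
    return (X - p) / np.sqrt(p * (1 - p))

def first_contrast_eigenvalue(Z):
    """Largest eigenvalue of the item residual correlation matrix
    (the 'first contrast' used to judge unidimensionality)."""
    return np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False)).max()

def yen_q3_max(Z):
    """Maximum absolute inter-item residual correlation; local independence
    holds when values stay well below 0.7."""
    q3 = np.corrcoef(Z, rowvar=False)
    np.fill_diagonal(q3, 0.0)
    return np.abs(q3).max()

# Toy usage with simulated unidimensional data.
rng = np.random.default_rng(0)
theta, b = rng.normal(size=300), rng.normal(size=40)
X = (rng.random((300, 40)) < expected_scores(theta, b)).astype(int)
Z = standardized_residuals(X, theta, b)
print(first_contrast_eigenvalue(Z), yen_q3_max(Z))
```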

Second phase: Linking and equating

For this procedure, we included 188 students from each medical school who had sat both the fifth-year (2019) and sixth-year (2020) Progress Tests. After checking whether these Progress Tests were fit for linking and equating, we calculated the slope of the empirical line of both tests. The optimal value of the slope should be around 1.00. Without linking and equating, the slope of the empirical line between the 2019 and 2020 tests was 0.61, with a correlation of 0.36.

For the linking and equating process, we conducted concurrent (one-step) equating, in which we calibrated all items at once with the common persons (students). We followed the process proposed by Yu and Osborn-Popp (2005), plotting the two sets of thetas with the 95% confidence band based on the standard errors. This provides a way of evaluating the extent to which the two Progress Tests measure the same construct within a reasonable degree of measurement error. Although there were some outliers, we decided to keep them for the linking and equating, since the slope and correlation indicated the procedure was successful. After this procedure, we calculated the new value of the slope of the empirical line.
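A simplified sketch of how the adequacy of the common-person link can be checked (hypothetical variable names; a least-squares slope is one common way to compute the empirical line, and the exact Winsteps plotting conventions may differ). The target is a slope near 1.00, and outliers can be flagged against a 95% band built from the standard errors, as described above.

```python
import numpy as np

def empirical_line(theta_2019, theta_2020):
    """Least-squares slope and Pearson correlation between the common
    students' ability estimates on the two exams."""
    slope = np.polyfit(theta_2019, theta_2020, 1)[0]
    r = np.corrcoef(theta_2019, theta_2020)[0, 1]
    return slope, r

def outside_confidence_band(theta_2019, theta_2020, se_2019, se_2020, z=1.96):
    """Flag common persons whose paired estimates fall outside a 95% band
    based on their joint standard errors (outliers were retained in the study)."""
    return np.abs(theta_2019 - theta_2020) > z * np.sqrt(se_2019**2 + se_2020**2)
```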

Third phase: Cohort comparison

For the data analysis, we used the Rasch model to compare the students’ scores on their Progress Tests. The scores are expressed on a logit scale, ranging from negative to positive values; the higher the score, the greater the student’s knowledge.

We compared students’ scores from 2019 and 2020 with a two-way ANOVA, followed by post hoc analysis with independent t-tests to verify differences between the same years of study in the two different cohorts (2019 and 2020).
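A sketch of this comparison (assumed column names; this is not the authors’ SPSS syntax) using a two-way ANOVA with an interaction term, followed by per-year independent t-tests between cohorts:

```python
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

def compare_cohorts(df: pd.DataFrame):
    """df is assumed to have columns: 'theta' (Rasch score in logits),
    'year_of_study' (1-6) and 'cohort' (2019 or 2020)."""
    model = smf.ols("theta ~ C(year_of_study) * C(cohort)", data=df).fit()
    anova_table = sm.stats.anova_lm(model, typ=2)  # main effects + interaction
    posthoc = {
        year: stats.ttest_ind(
            df.loc[(df.year_of_study == year) & (df.cohort == 2019), "theta"],
            df.loc[(df.year_of_study == year) & (df.cohort == 2020), "theta"],
        )
        for year in sorted(df.year_of_study.unique())
    }
    return anova_table, posthoc
```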

Data were analyzed using Winsteps 3.70.1.1 (Portland, Oregon), SPSS v. 21.0 (IBM Corp., Armonk, NY, United States), and GraphPad Prism 8.2.0 for Mac (GraphPad Software, La Jolla, CA, United States).

Results

First phase: Preliminary analysis

For the 2019 Progress Test, the variance explained by items and persons was 12.6% and 3.9%, respectively. The eigenvalue of the first contrast was 3.7, with an explained variance of 2.6%. The root mean square was 0.0059. For the 2020 Progress Test, the variance explained by items and persons was 15.7% and 10.8%, respectively. The eigenvalue of the first contrast was 4.1, with an explained variance of 2.6%. The root mean square was 0.007. In both the 2019 and 2020 Progress Tests, the variance explained by the items was more than five times the variance explained by the first contrast, and the variance explained by the first contrast was smaller than the variance explained by persons and items.

Only two items in the 2020 Progress Test had an outfit (the fit statistic that is sensitive to outliers) higher than 1.50, and only one was higher than 2.00. Since it was only one item, we decided to keep it for the subsequent analysis. There were some person parameters (students) with values higher than 2.00; misfit is more acceptable for person parameters and, given their low number, we included these students in the analysis.

Regarding local independence, the highest correlation of the standardized residuals was 0.31 for the 2019 test and 0.41 for the 2020 test. For the 2019 Progress Test, the highest value of Yen’s Q3 index was 0.28, with a minimum of −0.26 and a mean of −0.008 (SD = 0.076). For the 2020 Progress Test, the highest value of Yen’s Q3 index was 0.46, with a minimum of −0.29 and a mean of −0.008 (SD = 0.082). All values were lower than 0.7, considering the correlations between either the standardized or the raw residuals. Therefore, the data fulfilled the Rasch model’s assumptions.

Second phase: Linking and equating

Since we conducted concurrent equating, the parameters are presented as a single test. Both infit and outfit parameters were in the adequate range. After equating, the new value of the empirical slope was 0.98, with a correlation of 0.79, suggesting that the linking and equating was successful. Therefore, we could compare students’ knowledge from 2019 and 2020.

Third phase: Cohort comparison

In the comparison between the years of study within the same cohorts, we found significant differences in students’ scores (knowledge) across the years of study in 2019 (F = 596.1, p < 0.001) and 2020 (F = 148.9, p < 0.001); the students’ scores progressively increased from the 1st to the 6th year of study. The post hoc analysis revealed significant differences between each year of study, except between the 2nd and 3rd years in 2020 (Table 1).

Table 1. Measures of knowledge calculated by the Rasch model for each year of study in 2019 and 2020.

In the analysis of the effects of year of study and cohort, we found a significant effect of year of study (F = 617.389, p < 0.001), cohort (F = 110.758, p < 0.001), and the interaction between year of study and cohort (F = 33.293, p < 0.001). In the comparison of each year of study between the two cohorts (2019 and 2020), we found a significant difference between the cohorts in all years of study except the first year. For the second year, the scores of students in 2020 were higher than in 2019, and for all the other years of study, the scores of students in 2020 were lower than in 2019 (Table 1; Figure 1).

Finally, the comparison of students’ scores across 2019 and 2020 (i.e., the first-year students in 2019 with the second-year students in 2020, and so on) showed significant differences in the preclinical years, but no differences in the clinical clerkship (5th and 6th) years. This suggests that the 5th-year students’ scores in 2020 were similar to those of the 4th year in 2019, and the 6th-year students’ scores in 2020 were similar to those of the 5th year in 2019 (Figure 2).

Figure 2. Curves of the measure of knowledge for each year of study. The mean scores of the students in the clerkship years are statistically lower in 2020 than in 2019.

In summary, we found that students’ scores (knowledge) in the 2020 cohort were below the expected result in the clerkship years, with a large reduction between the fifth- and sixth-year medical students’ scores. Further analysis demonstrated that the students did not show a significant increase in scores between the 2019 and 2020 cohorts from the 4th to the 5th year or from the 5th to the 6th year.

Discussion

Our study shows differences in knowledge scores that suggest a negative impact on knowledge acquisition by medical students as a result of the changes required in the educational program at both medical schools due to the COVID-19 pandemic, mainly during the clinical clerkship years. Not only was the students’ knowledge significantly lower in 2020 than in 2019 (inter-individual comparisons), but the curve of knowledge acquisition also stabilized (intra-individual comparisons) in the clerkship years, with students’ knowledge not increasing significantly from the 4th to the 5th and from the 5th to the 6th years, indicating a lack of knowledge acquisition during clinical training. This finding is not surprising, because during clinical training there is exposure to a variety of practical activities that are essential for the development of applied knowledge, which is what the Progress Test assesses.

Our findings have major implications for the competence of newly qualified doctors who graduated during the COVID-19 pandemic. The efforts to minimize the spread of the virus were undoubtedly necessary initially to keep society safe from its devastating consequences. However, with a more advanced understanding of the disease, many countries reopened primary and secondary schools for face-to-face teaching because of increasing concerns about the major educational, well-being, and social impact of school closures (Fantini et al., 2020; Kuhfeld et al., 2020; Sharfstein and Morphew, 2020). In contrast, many universities and medical schools did not reopen for face-to-face teaching at the same time. An important consideration for medical schools was to keep their students safe at a time when COVID-19 was spreading in a variety of healthcare environments between acutely ill patients and healthcare staff (Yamey and Walensky, 2020; Leidner et al., 2021). This concern for safety had to be balanced with the need for students to have practical experience in managing acutely ill patients (Yamey and Walensky, 2020; Leidner et al., 2021). In many low- and middle-income countries, such as Brazil, final-year students have a major responsibility for managing patients, including outpatient consultations, minor surgical procedures, and daily examination of inpatients. This clinical experience is also very important because several low- and middle-income countries, including Brazil, do not have a national licensing examination or standard to assure that graduating students have met minimum requirements before they enter practice; consequently, these students need adequate readiness for clinical practice at graduation. We recommend further research to identify whether our findings from Brazil are also confirmed in other global contexts, especially in low- and middle-income countries.

Our findings also raise important questions about the effectiveness of online teaching during clinical training, especially in low- and middle-income countries. In Brazil, as in many low- and middle-income countries, clinical training is mainly work-based, with a focus on developing applied knowledge, and medical students have high exposure to clinical practice, including taking responsibility for making diagnoses, planning treatment, and performing technical procedures under supervision. During work-based training, students develop their applied knowledge, such as that assessed in the Progress Test, by seeing large numbers of patients with different presentations and approaches to management. Most online teaching in low- and middle-income countries, especially during the COVID-19 pandemic, cannot replicate the development of applied knowledge through clinical practice (Cecilio-Fernandes et al., 2020; Bastos et al., 2021). High-income countries are increasingly using immersive technologies, such as virtual reality, and live-streamed clinical rounds to replicate clinical practice, but this requires access to affordable technology and development expertise, as well as adequate internet provision (Emanuel, 2020; Gaur et al., 2020). This is in contrast to low- and middle-income countries, where the main approaches have been simple synchronous activities, either as a face-to-face lecture or a case-based discussion in which teachers create cases for discussion. Asynchronous activity has also mostly relied on recorded videos, reading material such as articles and books, and PowerPoint presentations (Cecilio-Fernandes et al., 2020; Bastos et al., 2021). Medical educators in low- and middle-income countries are likely to benefit from improved resources and training to ensure that the design and delivery of online clinical teaching can be more effective (Cecilio-Fernandes et al., 2020; Sandars et al., 2020).

Our study was limited in the extent to which it could identify whether the students who graduated at the end of 2020 are less competent in terms of professionalism, skills, and attitudes than their predecessors. However, our study highlights that these students finished their undergraduate training with a major deficit in their knowledge. Two concerns arise from this finding: first, how can the 2020 cohort of recently graduated doctors increase their knowledge? Second, since the pandemic is not yet fully resolved, how can medical schools make changes to ensure that current students can continue to acquire knowledge?

To overcome the deficits in knowledge, all recently-graduated doctors could be offered continuing professional education programs. An important approach for these doctors has been the implementation of supervised practice with increased clinical training (Choi et al., 2020). However, in several low- and middle-income countries, including Brazil, recently graduated doctors work independently. These doctors often provide urgent and emergency care or work in primary health care centers, often without supervision and the opportunity for further clinical training (Santos and Nunes, 2019). We recommend that an important priority in the near future is to ensure that there are systems for all newly graduated doctors that can allow them to continue with practical clinical teaching (Pravder et al., 2021).

A strength of our study is that it is the first to demonstrate the impact of the COVID-19 pandemic on students’ knowledge acquisition, especially in low- and middle-income settings. Additionally, the study presents data from two medical schools, and by using an Item Response Theory approach we could reduce possible biases due to heterogeneity in the level of difficulty of the Progress Tests. However, our study has several limitations. First, we obtained data from only two medical schools, which may not represent the context of medical education in Brazil or in other low- and middle-income countries. However, most medical schools in these contexts were required to make similar changes to their clinical training during the COVID-19 pandemic. Second, our Progress Test was administered once a year, and we are aware that single measures provide information with less reliability than repeated measures. Finally, there was a methodological change in test administration, from the traditional paper-based question booklet to a computerized Progress Test. However, we do not believe that this change significantly impacted the data, since there are numerous studies demonstrating the equivalence of both formats, even for high-stakes examinations (Hochlehnert et al., 2011; Boevé et al., 2015).

Conclusion

By comparing the students’ scores on the Progress Tests administered before and after the COVID-19 outbreak, we identified that the pandemic significantly impaired the knowledge acquisition of medical students, mainly in the clerkship years, where practical activities are the central part of training. This knowledge deficit has major implications for newly qualified doctors, especially in Brazil and other low- and middle-income countries. Online teaching does not appear to satisfactorily replace real clinical training for medical students. We therefore urgently recommend that all medical schools, especially in low- and middle-income countries, implement systems of continuing professional education for newly graduated doctors to develop their applied knowledge through further supervised clinical training.

Data availability statement

The original contributions presented in the study are included in the article/Supplementary material, further inquiries can be directed to the corresponding author.

Ethics statement

Ethical approval was not required for this study on human participants because we dealt with secondary data and no student was identified; therefore, ethics committee approval was not necessary. Written informed consent for participation was not required for this study, in accordance with the national legislation and the institutional requirements.

Author contributions

PH: conceptualization, data curation, funding acquisition, investigation, project administration, and writing—original draft. DC-F: conceptualization, formal analysis, methodology, and writing—original draft. LN: resources, visualization, and writing—review and editing. JS: methodology, supervision, visualization, and writing—review and editing. BA: supervision, visualization, validation, and writing—review and editing. AB: conceptualization, data curation, funding acquisition, project administration, resources, supervision, and writing—original draft. All authors contributed to the article and approved the submitted version.

Funding

PH and AB have received an award from the National Board of Medical Examiners (Philadelphia, PA, United States), grant number: Proposal LAG5-2020 (https://contributions.nbme.org/about/latinamerica-grants). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. This research was partially funded by FAPESP – São Paulo Research Foundation (Young Investigator Grant number 2018/15642-1), awarded to Dario Cecilio-Fernandes. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Ahmed, H., Allaf, M., and Elghazaly, H. (2020). COVID-19 and medical education. Lancet Infect. Dis. 20, 777–778. doi: 10.1016/S1473-3099(20)30226-7

Alsoufi, A., Alsuyihili, A., Msherghi, A., Elhadi, A., Atiyah, H., Ashini, A., et al. (2020). Impact of the COVID-19 pandemic on medical education: medical students' knowledge, attitudes, and practices regarding electronic learning. PLoS One 15:e0242905. doi: 10.1371/journal.pone.0242905

Bączek, M., Zagańczyk-Bączek, M., Szpringer, M., Jaroszyński, A., and Wożakowska-Kapłon, B. (2021). Students' perception of online learning during the COVID-19 pandemic: a survey study of polish medical students. Medicine (Baltimore) 100:e24821. doi: 10.1097/MD.0000000000024821

Baghaei, P. (2008). Local dependency and Rasch measures. Rasch Meas Trans. 21, 1105–1106.

Bastos, R., Carvalho, D., Brandao, C., Bergamasco, E., Sandars, J., and Cecilio-Fernandes, D. (2021). Solutions, enablers and barriers to online learning in clinical medical education during the first year of the Covid19 pandemic: a rapid review. Med. Teach. 44, 187–195. doi: 10.1080/0142159X.2021.1973979

Bicudo, A. M., Hamamoto Filho, P. T., Abbade, J. F., Hafner, M. L. M. B., and Maffei, C. M. L. (2019). Consortia of cross-institutional Progress testing for all medical schools in Brazil. Rev. Bras. Edu. Med. 43, 151–156. doi: 10.1590/1981-52712015v43n4rb20190018

Boevé, A. J., Meijer, R. R., Albers, C. J., Beetsma, Y., and Bosker, R. J. (2015). Introducing computer-based testing in high-stakes exams in higher education: results of a field experiment. PLoS One 10:e0143616. doi: 10.1371/journal.pone.0143616

Bond, T. G., and Fox, C. M. (2001). Applying the Rasch Model: Fundamental Measurement in the Human Sciences. Mahwah, NJ: Erlbaum.

Cecilio-Fernandes, D., Parisi, M., Santos, T., and Sandars, J. (2020). The COVID-19 pandemic and the challenge of using technology for medical education in low and middle income countries. MedEdPublish 9:74.

Choi, B., Jegatheeswaran, L., Minocha, A., Alhilani, M., Nakhoul, M., and Mutengesa, E. (2020). The impact of the COVID-19 pandemic on final year medical students in the United Kingdom: a national survey. BMC Med. Edu. 20:206. doi: 10.1186/s12909-020-02117-1

Christensen, K. B., Makransky, G., and Horton, M. (2017). Critical values for Yen’s Q3: identification of local dependence in the Rasch model using residual correlations. Appl. Psychol. Meas. 41, 178–194. doi: 10.1177/0146621616677520

Daniel, M., Gordon, M., Patricio, M., Hider, A., Pawlik, C., Bhagdev, R., et al. (2021). An update on developments in medical education in response to the COVID-19 pandemic: a BEME scoping review: BEME Guide No. 64. Med. Teach. 43, 253–271. doi: 10.1080/0142159X.2020.1864310

De Champlain, A. F. (2010). A primer on classical test theory and item response theory for assessments in medical education. Med. Edu. 44, 109–117. doi: 10.1111/j.1365-2923.2009.03425.x

Dost, S., Hossain, A., Shehab, M., Abdelwahed, A., and Al-Nusair, L. (2020). Perceptions of medical students towards online teaching during the COVID-19 pandemic: a national cross-sectional survey of 2721 UK medical students. BMJ Open 10:e042378. doi: 10.1136/bmjopen-2020-042378

Downing, S. M. (2003). Item response theory: applications of modern test theory in medical education. Med. Edu. 37, 739–745. doi: 10.1046/j.1365-2923.2003.01587.x

Emanuel, E. J. (2020). The inevitable reimagining of medical education. JAMA 323, 1127–1128. doi: 10.1001/jama.2020.1227

Fantini, M. P., Reno, C., Biserni, G. B., Savoia, E., and Lanari, M. (2020). COVID-19 and the re-opening of schools: a policy maker's dilemma. Ital. J. Pediatr. 46:79. doi: 10.1186/s13052-020-00844-1

Gaur, U., Majumder, M. A. A., Sa, B., Sarkar, S., Williams, A., and Singh, K. (2020). Challenges and opportunities of preclinical medical education: COVID-19 crisis and beyond. SN Compr. Clin. Med. 2, 1992–1997. doi: 10.1007/s42399-020-00528-1

Given, K., Hannigan, A., and McGrath, D. (2016). Red, yellow and green: what does it mean? How the progress test informs and supports student progress. Med. Teach. 38, 1025–1032. doi: 10.3109/0142159X.2016.1147533

Hamamoto Filho, P. T., Silva, E., Ribeiro, Z. M. T., Hafner, M. L. M. B., Cecilio-Fernandes, D., and Bicudo, A. M. (2020). Relationships between Bloom's taxonomy, judges' estimation of item difficulty and psychometric properties of items from a progress test: a prospective observational study. São Paulo Med. J. 138, 33–39. doi: 10.1590/1516-3180.2019.0459.r1.19112019

Hammoud, M. M., Standiford, T., and Carmody, J. B. (2020). Potential implications of COVID-19 for the 2020-2021 residency application cycle. JAMA 324, 29–30. doi: 10.1001/jama.2020.8911

Hochlehnert, A., Brass, K., Moeltner, A., and Juenger, J. (2011). Does medical students’ preference of test format (computer-based vs. paper-based) have an influence on performance? BMC Med. Edu. 11:89. doi: 10.1186/1472-6920-11-89

Kronenfeld, J. P., Ryon, E. L., Kronenfeld, D. S., Hui, V. W., Rodgers, S. E., Thorson, C. M., et al. (2020). Medical student education during COVID-19: electronic education does not decrease examination scores. Am. Surg. 29:3134820983194. doi: 10.1177/0003134820983194

Kuhfeld, M., Soland, J., Tarasawa, B., Johnson, A., Ruzek, E., and Liu, J. (2020). Projecting the potential impact of COVID-19 school closures on academic achievement. Edu. Res. 49, 549–565. doi: 10.3102/0013189X20965918

Lazarus, G., Findyartini, A., Putera, A. M., Gamalliel, N., Nugraha, D., Adli, I., et al. (2021). Willingness to volunteer and readiness to practice of undergraduate medical students during the COVID-19 pandemic: a cross-sectional survey in Indonesia. BMC Med. Edu. 21:138. doi: 10.1186/s12909-021-02576-0

Leidner, A. J., Barry, V., Bowen, V. B., Silver, R., Musial, T., Kang, G. J., et al. (2021). Opening of large institutions of higher education and county-level COVID-19 incidence - United States, July 6-September 17, 2020. MMWR Morb. Mortal. Wkly Rep. 70, 14–19. doi: 10.15585/mmwr.mm7001a4

Linacre, J. (1994). Sample size and item calibration stability. Rasch Meas Trans. 7:328.

Lucey, C. R., and Johnston, S. C. (2020). The transformational effects of COVID-19 on medical education. JAMA 324, 1033–1034. doi: 10.1001/jama.2020.14136

Muijtjens, A. M., Schuwirth, L. W., Cohen-Schotanus, J., and van der Vleuten, C. P. (2008). Differences in knowledge development exposed by multi-curricular progress test data. Adv. Health Sci. Edu. Theory Pract. 13, 593–605. doi: 10.1007/s10459-007-9066-2

Pravder, H. D., Langdon-Embry, L., Hernandez, R. J., Berbari, N., Shelov, S. P., and Kinzler, W. L. (2021). Experiences of early graduate medical students working in New York hospitals during the COVID-19 pandemic: a mixed methods study. BMC Med. Edu. 21:118. doi: 10.1186/s12909-021-02543-9

Rose, S. (2020). Medical student education in the time of COVID-19. JAMA 323, 2131–2132. doi: 10.1001/jama.2020.5227

Sandars, J., Correia, R., Dankbaar, M., de Jong, P., Goh, P. S., Hege, I., et al. (2020). Twelve tips for rapidly migrating to online learning during the COVID-19 pandemic. MedEdPublish 29:9. doi: 10.15694/mep.2020.000082.1

Santos, R. A., and Nunes, M. P. T. (2019). Medical education in Brazil. Med. Teach. 41, 1106–1111. doi: 10.1080/0142159X.2019.1636955

Schilling, S. G. (2007). The role of psychometric modeling in test validation: an application of multidimensional item response theory. Measurement 5, 93–106. doi: 10.1080/15366360701487021

Schmidmaier, R., Holzer, M., Angstwurm, M., Nouns, Z., Reincke, M., and Fischer, M. R. (2010). Using the Progress test Medizin (PTM) for evaluation of the medical curriculum Munich (MeCuM). GMS Z. Med. Ausbild. 27. doi: 10.3205/zma000707

Schuwirth, L. W., and van der Vleuten, C. P. (2012). The use of progress testing. Perspect. Med. Educ. 1, 24–30. doi: 10.1007/s40037-012-0007-2

Sharfstein, J. M., and Morphew, C. C. (2020). The urgency and challenge of opening K-12 schools in the fall of 2020. JAMA 324, 133–134. doi: 10.1001/jama.2020.10175

Tennant, A., and Pallant, J. F. (2006). Unidimensionality matters! (a tale of two Smiths?). Rasch Meas Trans. 20, 1048–1051.

Theoret, C., and Ming, X. (2020). Our education, our concerns: the impact on medical student education of COVID-19. Med. Edu. 54, 591–592. doi: 10.1111/medu.14181

Wright, B., and Linacre, J. (1994). Reasonable mean-square fit values. Rasch Meas Trans. 8:370.

Yamey, G., and Walensky, R. P. (2020). Covid-19: re-opening universities is high risk. BMJ 370:m3365. doi: 10.1136/bmj.m3365

Yu, C. H., and Osborn-Popp, S. E. (2005). Test equating by common items and common subjects: concepts and applications. Pract. Assess. Res. Eval. 10:4. doi: 10.7275/68dy-z131

Keywords: medical education, progress testing, COVID-19, assessment, knowledge acquisition

Citation: Hamamoto Filho PT, Cecilio-Fernandes D, Norcia LF, Sandars J, Anderson MB and Bicudo AM (2022) Reduction in final year medical students’ knowledge during the COVID-19 pandemic: Insights from an interinstitutional progress test. Front. Educ. 7:1033732. doi: 10.3389/feduc.2022.1033732

Received: 31 August 2022; Accepted: 26 October 2022;
Published: 10 November 2022.

Edited by:

Robbert Smit, St. Gallen University of Teacher Education, Switzerland

Reviewed by:

Carlos Fernando Collares, Maastricht University, Netherlands
Harrison Pravder, Yale-New Haven Hospital, United States

Copyright © 2022 Hamamoto Filho, Cecilio-Fernandes, Norcia, Sandars, Anderson and Bicudo. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Pedro Tadao Hamamoto Filho, pedro.hamamoto@unesp.br

These authors have contributed equally to this work
