ORIGINAL RESEARCH article

Front. Med., 01 August 2024
Sec. Healthcare Professions Education

Evaluating pre-anesthesia assessment performance in residency: the reliability of standardized patient methods

Emmanuel Besnier1,2, Sébastien Franchina1, Antoine Lefevre-Scelles1, Thierry Wable3, Jean-Luc Hanouz4, Etienne Allard5, Bertrand Dureuil1 and Vincent Compère1,2*
  • 1Department of Anesthesia and Intensive Care, Rouen University Hospital, Rouen, France
  • 2Normandie Univ, UNIROUEN, Inserm, Mont-Saint-Aignan, France
  • 3UFR Médecine Pharmacie, Université de Normandie, Rouen, France
  • 4Department of Anesthesia and Intensive Care, Caen University Hospital, Caen, France
  • 5Department of Anesthesia, Le Havre Hospital, Le Havre, France

Background: The pre-anesthesia assessment clinic (PAC) is known to improve safety and quality of care in the perioperative period. However, teaching PAC during anesthesiology residency is a challenge. The objective of this study was to assess the reliability of a standardized patient simulation score grid for evaluating the PAC performance of anesthesiology residents.

Methods: A score grid covering the 4 components of the PAC (clinical evaluation, perioperative strategy, information and communication) was validated by a group of 5 senior anesthesiologists. Anesthesiology residents (with more than one year of training) and attending anesthesiologists were included. The same simulation sequence, with the same standardized patient, was conducted in a consultation room dedicated to simulation. The simulation sequence was followed by a debriefing session between the 2 professors (anesthesiology and communication) and each anesthesiology resident. The main outcome was the overall grid score, out of a maximum of 300 points, and its correlation with experience in anesthesiology residency. Secondary outcomes were the individual component scores according to level of experience in anesthesiology.

Results: Between October 2014 and April 2016, 109 anesthesiology residents and 16 attending anesthesiologists were included in this prospective bicentric study. There was a positive correlation (p < 0.01) between level of experience and overall grid score (Pearson’s coefficient = 0.52). The Pearson correlation coefficient between overall assessment and level of experience in anesthesiology was 0.46 (p < 0.01). The analysis of the sub-scores for the 4 components of the overall score (evaluation, perioperative strategy, information and communication) also identified differences between experience groups.

Conclusion: Standardized patient simulation of PAC seems to be a reliable tool to assess PAC performance in anesthesiology residents and senior anesthesiologists. These results suggest that standardized patient simulation could be used as a teaching tool for PAC.

Highlights

• Question: Can a simulation tool using a standardized patient assess the pre-anesthesia consultation performance of anesthesiology residents?

• Finding: There was a positive correlation (p < 0.01) between residents’ level of experience and the overall grid score (Pearson’s coefficient = 0.52).

• Meaning: Standardized patient simulation of PAC seems to be a reliable tool to assess PAC performance in anesthesiology residents and attending anesthesiologists.

1 Introduction

The Pre-anesthesia Assessment Clinic (PAC) has been developed to assess preoperative risks related to surgery or patient illness, to select laboratory tests, and to determine the best perioperative strategy and anesthesia techniques. Guidelines have been published to ensure the quality and reliability of this assessment (1, 2). PAC is associated with lower perioperative morbidity and mortality (3, 4), preoperative optimization of patients (5), lower costs related to fewer surgical cancelations or reduced length of stay (6), and lower costs related to fewer preoperative tests (7). Conversely, inadequate PAC can lead to incidents during and after surgery (8). In addition, PAC is an appropriate time to educate patients on anesthesia, perioperative care and pain treatments, to reduce anxiety, to develop care plans and to obtain informed consent (9, 10).

The PAC can be divided into two stages: (1) an interview about the patient’s medical, anesthetic, surgical or allergic history and personal medication, with a physical evaluation including venous and upper airway access, and (2) a discussion about appropriate perioperative management and medical risks in order to obtain the patient’s informed consent. PAC is a complex exercise that requires a wide range of physician skills, including not only medical expertise but also non-technical skills such as communication and organization (11). At each stage of the PAC process, unique challenges arise that demand specific skills to ensure optimal patient outcomes (12).

During the initial patient evaluation, effective communication skills are crucial to obtaining an accurate medical history and understanding the patient’s concerns and expectations (13). Physicians must be able to actively listen, ask pertinent questions, and provide clear, concise explanations to establish trust and rapport with the patient. Poor communication at this stage can lead to misunderstandings, misdiagnosis, and inappropriate treatment plans. The next stage involves preoperative testing and risk assessment, where organizational skills come into play. Physicians must efficiently coordinate various diagnostic tests and consultations, ensuring that all necessary information is obtained in a timely manner. They must also be adept at interpreting test results and assessing the patient’s risk profile to make informed decisions regarding perioperative management. Challenges at this stage may include managing time constraints, dealing with incomplete or conflicting data, and navigating complex medical conditions. In the preoperative optimization phase, medical expertise is paramount. Physicians must be knowledgeable about various medical conditions and their potential impact on surgical outcomes (4). They must also be skilled in prescribing appropriate interventions to optimize the patient’s medical status and minimize perioperative risk. Challenges at this stage may include managing co-morbidities, balancing the risks and benefits of various interventions, and dealing with patient non-compliance. Finally, the perioperative planning stage requires a combination of medical, communication, and organizational skills. Physicians must effectively communicate their findings and recommendations to the surgical team, coordinate care with other healthcare providers, and develop a comprehensive perioperative plan that addresses the patient’s unique needs and preferences (14). Challenges at this stage may include navigating interdisciplinary dynamics, managing conflicting opinions, and ensuring that all relevant information is accurately documented and communicated.

PAC learning during anesthesia residency is not well defined, either in its objectives or in its practical implementation. Moreover, PAC teaching relies on self-learning, with or without oral guidance from senior anesthesiologists based on their own experience. Published data on PAC teaching are scarce, and the value of the one existing report on problem-based learning for PAC is unclear (15). We previously conducted a French national survey of anesthesiology residents’ and teachers’ opinions on PAC teaching (16). Residents described PAC as a major and difficult act performed mostly alone and considered PAC teaching in France to be insufficient; teachers considered that PAC teaching could be improved using simulation.

Based on the approach of the objective structured clinical examination (OSCE) (17), a standardized patient simulation score grid could be a reliable tool to evaluate pre-anesthesia assessment performance (18). Several studies have explored various aspects of the validity of the OSCE, including content, response process, internal structure, relations with other variables, and consequences. Hodges et al. showed that the test items are representative of the skills and knowledge being assessed (19). The response process was investigated by Daniels et al., who focused on the coherence and consistency of the data collected during the examination (20). The consequences of OSCEs for learners, instructors, and the curriculum are well described, highlighting the impact and implications of these examinations in medical education (21). This kind of tool has been described as a valuable learning approach for the assessment of medical students and can be used for formative assessment in association with a personalized debriefing session (22). The OSCE has also been described as a valuable and reliable tool for communication assessment (23–25). In this work, we describe the development and evaluation of a standardized patient simulation score grid for PAC. The main objective of this study was to assess the reliability of this tool for evaluating the PAC performance of anesthesiology residents.

2 Materials and methods

2.1 Ethics statement

This research project received the approval of the Ethics Committee for Non-Interventional Research of Rouen University Hospital (N° E2014-18). The requirement for written informed consent was waived by the Committee. All participants were informed beforehand of the principle of the simulation session and its objectives, as well as its potential interest for training. Participants were also informed of the presence of an audio-visual system for real-time retransmission of the session, without recording or image processing, and of the presence of the professors. All participants gave oral agreement to participate. In the event of occasional image retention, written consent for use of the images was signed by each participant.

2.2 Description of the study

Based on the principle of the OSCE, we developed a standardized patient simulation adapted to PAC. There were not several successive stations but only one, with an overall duration of 20 to 30 min. This choice was made so as not to artificially fragment the individual components of the consultation, which in routine practice are all interdependent. We followed the Association for Medical Education in Europe (AMEE) guidelines for the development of the tool, the scenario and the score grid, and for the evaluation of this tool (22).

2.3 Population

We conducted a prospective bicentric study including anesthesiology residents (128 eligible) and senior anesthesiologists (16, on a voluntary basis) from Rouen and Caen University Hospitals. All participants had the same session, in the same place, with the same simulated and standardized patient. No specific prior training in consultation had been provided.

2.4 Conducting the sessions

Each participant took part in the evaluation only once. Participants were individually invited to the medical office of Rouen University’s school of medicine, comprising 2 adjoining rooms:

– A standard clinical room with a desk, a computer (non-functioning), a telephone, basic clinical examination equipment (examination table, stethoscope, blood pressure monitor), realistic decoration and blank paper supports. The specificity of this room is the presence of a high-performance microphone fixed to the ceiling above the desk and a very high-definition rotating camera fixed to the wall, allowing an overall view of the room.

– Another room with a 120 cm flat TV screen fixed to the wall, a telephone and an instant audio-visual broadcasting system via the TV screen and 2 loudspeakers fixed to the ceiling. This system allows real-time transmission from the adjoining clinical room. It also allows the recording and processing of images and sound, but only when a single computer with dedicated software is connected.

A short fact sheet recalling the clinical context was placed in the clinical room. The instruction given was to perform the PAC as usual. The participant then went to fetch the standardized patient, who was waiting in the corridor, and conducted the PAC using the information provided. A maximum of 20 min was allowed. A neutral paper support was given. Observers were allowed to intervene during the PAC only in strictly exceptional circumstances, in the event of a significant blockage of the situation or a manifest delay.

During each simulation session, the same anesthesiology (VC) and communication (TW) professors observed the sequence and completed an evaluation form using a dedicated score grid with a maximum of 300 points.

A subjective qualitative evaluation of the session, rated from A to E (A = very good, B = good, C = average, D = insufficient, E = extremely insufficient), was also provided by the standardized patient (the same woman for the whole cohort). Immediately after the PAC session and before any comment or other communication, in order to avoid any influence, the standardized patient entered her subjective rating in the grid and then reported back to the observers in the absence of the participant.

Finally, the anesthesiology and communication professors carried out a 10–15 min personalized debriefing session with each resident, highlighting the medical, behavioral, communication and information components, in a deliberately positive and advisory spirit, avoiding any judgment.

2.5 Construction of the standardized scenario and simulated patient training

We constructed a moderately difficult standardized scenario based on real facts, accessible even to inexperienced young residents. The standardized patient was a female patient scheduled for knee arthroscopy, with clinical predictive criteria for difficult ventilation and intubation. The case scenario was associated with paper supports (prescription, laboratory examinations and specialized consultation reports) in order to increase immersion in the scenario. We recruited one volunteer standardized patient, unpaid, with no medical background and totally unknown to the participants in the simulation. She received a precise presentation of the work and the sessions, an explanation of the scenario and the information necessary to assimilate the proposed situation, as well as instructions on the desired behavior, which could be modulated. A single standardized patient was recruited and trained for this scenario, in order to preserve the reproducibility of sessions and the homogeneity of responses, because the same situation was proposed to all participants.

2.6 Construction of the score grid

Using the quality criteria described above (22), we developed a binary (yes/no) rating grid subdivided into 4 parts, corresponding to the 4 main objectives of PAC: evaluation / strategy / information / communication. It consisted of 83 positive items (evaluation = 41, strategy = 17, information = 20, communication = 5), whose weighting was adjusted according to the a priori importance of each item, and 6 negative items on major elements or guidelines. The maximum score for this grid was 300 points (evaluation = 116, strategy = 72, information = 59, communication = 53, maximum negative points = −160). The minimum score was 0, even in the case of a negative total. This score grid was validated by a consensus of 5 anesthesiologists (2 professors of anesthesiology and 3 senior anesthesiologists).
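To make the scoring logic concrete, the minimal sketch below shows how such a weighted binary grid can be computed. It is purely illustrative: the item labels and weights are hypothetical placeholders rather than the authors’ actual 89-item grid, and only the mechanics described above (weighted positive items per component, negative penalty items, and an overall score floored at 0 out of a 300-point maximum) are reproduced.

```python
# Minimal sketch of the weighted binary scoring logic described above.
# All item labels and weights are HYPOTHETICAL placeholders, not the authors' grid:
# the real grid has 83 positive items and 6 negative items, with component maxima
# evaluation = 116, strategy = 72, information = 59, communication = 53, and an
# overall score floored at 0 out of a 300-point maximum.

from dataclasses import dataclass


@dataclass(frozen=True)
class Item:
    label: str
    weight: int       # points added (positive item) or removed (negative item)
    component: str    # "evaluation", "strategy", "information" or "communication"


# Hypothetical positive (yes/no) items, weighted by a priori importance.
POSITIVE_ITEMS = [
    Item("Asks about allergies", 3, "evaluation"),
    Item("Looks for predictive criteria of difficult intubation", 6, "evaluation"),
    Item("Proposes an anesthetic strategy adapted to the patient", 8, "strategy"),
    Item("Explains the anesthetic technique and its risks", 5, "information"),
    Item("Checks the patient's understanding", 4, "communication"),
]

# Hypothetical negative items on major elements or guidelines.
NEGATIVE_ITEMS = [
    Item("Does not seek the patient's informed consent", -40, "strategy"),
]


def grid_score(checked: dict[str, bool]) -> tuple[int, dict[str, int]]:
    """Return the overall score (floored at 0) and the per-component sub-scores."""
    sub_scores = {"evaluation": 0, "strategy": 0, "information": 0, "communication": 0}
    total = 0
    for item in POSITIVE_ITEMS + NEGATIVE_ITEMS:
        if checked.get(item.label, False):
            total += item.weight
            sub_scores[item.component] += item.weight
    return max(total, 0), sub_scores  # the minimum overall score is 0


# Example: the observer ticks the items performed (or triggered) during the session.
observed = {item.label: True for item in POSITIVE_ITEMS}
observed["Does not seek the patient's informed consent"] = False
print(grid_score(observed))  # -> (26, {'evaluation': 9, 'strategy': 8, ...})
```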

2.7 Satisfaction survey of residents

Following each PAC simulation session with personalized debriefing, a satisfaction survey was e-mailed to all participating anesthesiology residents. This self-administered questionnaire was created using Google Forms© software. It consisted of 10 multiple-choice questions and took about 3 min to complete.

2.8 Data collected

The data collected for each participant during these simulation sessions were:

– Total and partial scores (based on the 4 PAC components) on the score grid.

– Rating of the qualitative assessment (A, B, C, D or E) by the standardized patient.

– The level of experience in anesthesiology of each participant.

– Participants’ opinions on the value of PAC simulation for assessment/training in anesthesia consultation.

2.9 Study outcomes

The primary outcome was the performance assessed by the grid score according to the experience of anesthesiology residents.

Secondary outcomes included:

– Results for each component (evaluation, strategy, information, communication) according to experience in anesthesiology.

– Measurement of the 7 applicable criteria of validity and reliability of the session according to the AMEE (22).

– Anesthesiology residents’ satisfaction with the simulation session on a 10-point scale.

2.10 Statistical analysis

It was not possible to calculate an a priori number of subjects required because of the pilot nature of this study. The sample size was arbitrarily set at a minimum of 100 subjects, with a minimum of 15 residents per year (5 years are required to validate the French anesthesiology residency). Data are presented as median and interquartile range. Because the distribution of the population was normal, as verified using a Shapiro–Wilk test, a one-way analysis of variance (ANOVA) was used for quantitative data. In the case of significant results for the overall test, a post-hoc test was performed to explore differences between groups (Tukey’s multiple comparisons test). Qualitative data were compared using a Chi-squared test. The experience of residents in years was correlated with the overall grid score using Pearson’s correlation test, and r values are presented with 95% confidence intervals. p < 0.05 was considered significant for all analyses. Because the experience of the senior anesthesiologists may be heterogeneous, we did not correlate their experience with their overall scores. Finally, the scoring grid was subjected to the Cronbach alpha coefficient test to assess its internal validity. All analyses were performed using GraphPad Prism v8.0 (La Jolla, CA, USA).
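As an illustration of this analysis pipeline, the Python sketch below reproduces the main steps (Shapiro–Wilk normality check, one-way ANOVA with Tukey post-hoc comparisons, Pearson correlation with a 95% confidence interval, and Cronbach’s alpha) on simulated data. The column names, data layout and simulated values are assumptions made for the example; the published analyses were performed in GraphPad Prism, not with this code.

```python
# Illustrative sketch of the statistical workflow described above, on hypothetical data.
# Column names ("score", "year") and the item-level matrix are assumptions for this
# example; the published analyses were run in GraphPad Prism, not with this code.

import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd


def analyze(df: pd.DataFrame, items: pd.DataFrame, alpha: float = 0.05) -> None:
    # 1) Normality of the overall grid score (Shapiro-Wilk).
    w, p_norm = stats.shapiro(df["score"])
    print(f"Shapiro-Wilk: W = {w:.3f}, p = {p_norm:.3f}")

    # 2) One-way ANOVA across experience groups, then Tukey post-hoc if significant.
    groups = [g["score"].to_numpy() for _, g in df.groupby("year")]
    f_stat, p_anova = stats.f_oneway(*groups)
    print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
    if p_anova < alpha:
        print(pairwise_tukeyhsd(df["score"], df["year"]))

    # 3) Pearson correlation between residency year and overall score,
    #    with a 95% confidence interval via Fisher's z-transform.
    r, p_corr = stats.pearsonr(df["year"], df["score"])
    z, se = np.arctanh(r), 1.0 / np.sqrt(len(df) - 3)
    lo, hi = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)
    print(f"Pearson r = {r:.2f}, 95% CI [{lo:.2f}, {hi:.2f}], p = {p_corr:.4f}")

    # 4) Cronbach's alpha for the internal consistency of the grid items
    #    (rows = participants, columns = item scores).
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    cronbach = k / (k - 1) * (1 - item_variances / total_variance)
    print(f"Cronbach's alpha = {cronbach:.2f}")


# Example call on simulated data (5 residency years, 25 residents per year).
rng = np.random.default_rng(0)
years = np.repeat(np.arange(1, 6), 25)
df = pd.DataFrame({"year": years, "score": 150 + 20 * years + rng.normal(0, 25, years.size)})
items = pd.DataFrame(rng.integers(0, 2, size=(years.size, 89)))
analyze(df, items)
```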

3 Results

Between October 2014 and April 2016, 125 anesthesiologists (16 senior anesthesiologists and 109 residents) were included, 57% from Rouen University Hospital and 43% from Caen University Hospital (Figure 1). This geographical distribution ensures a diverse representation of anesthesiology training within the region.

Figure 1. Flow chart diagram.

The scores are presented in Figure 2 according to the experience of the anesthesiologists. There was a correlation between the score and the experience of the residents in years (r = 0.43, 95% CI [0.26–0.57], p < 0.0001). The analysis of the sub-scores for the 4 components of the overall score (evaluation, perioperative strategy, information and communication) is presented in Table 1. According to the AMEE 2010 guidelines, seven reliability criteria were analyzed (Table 2).

Figure 2. Grid score according to level of experience in anesthesiology. The score is presented as median, first-third interquartile range and minimal and maximal values according to level of experience in anesthesiology (one category for each of the 5 years of residency and a category of senior anesthesiologists). *First year versus others; †second year versus others; ‡third year versus others. No difference was observed between years 4 and 5 or versus seniors. *p < 0.05; **p < 0.01; ****p < 0.0001. Analysis was performed using an ANOVA test with Dunn’s post-test.

Table 1. Median scores with first and third quartiles for the 4 components of the pre-anesthesia assessment grid according to clinical experience.

Table 2. Criteria of validity and reliability of OSCE according to the Association for Medical Education in Europe guidelines published in 2010 (26).

Satisfaction with the session (70% response rate) was 8 (8–9) out of 10, and the perceived relevance of the training was 9 (8–10).

4 Discussion

The standardized patient (simulated patient) is a type of human simulation that uses a well-trained healthy person (actor) to play the role of a real patient, simulating their physical condition so that trainees (students) can practice medical skills (27). Our work shows that the performance of anesthesiology residents in PAC, evaluated with a score grid in the context of a standardized patient session, is correlated with their level of experience in anesthesiology. This result is in agreement with several works published in other medical specialties. In general practice, Hodges et al. showed, in 42 students and physicians, an increase in evaluation scores with experience (28). In the same field, Prislin et al. reported good results in 335 general medical students (29). In emergency medicine, Wallenstein et al. confirmed the validity of a score grid to represent the performance of 239 students (30). Finally, in neurology, Lukas et al., in 195 students at the end of the curriculum, found a correlation between grid score performance and results in the final exams (31).

Medical communication and patient information are major determinants of satisfaction in anesthesia (32, 33). Bondy et al. showed that the information delivered during PAC allowed a reduction of anxiety in patients (34). Similarly, Soltner et al. suggested that the attitude of the anesthesiologist in consultation helped to reduce this anxiety (10). Regarding this relational aspect, simulation has already been used, especially in geriatrics, for formative evaluation focused on communication skills. Indeed, Collins et al., with 19 students, and Ishikawa et al., with 85 residents, suggested the tool’s usefulness for training in verbal or non-verbal communication skills (35, 36). Finally, O’Sullivan et al. proposed the use of OSCE and simulation for the assessment and training of relational skills (37). Similarly, in our work, information and communication seem to be the most difficult dimensions of PAC to grasp, especially for the youngest residents. The pivotal role of medical communication and patient information in enhancing patient satisfaction and reducing anxiety in anesthesia settings cannot be overstated. Effective communication skills and positive attitudes on the part of anesthesiologists can significantly alleviate patient anxiety and improve overall satisfaction with care (38, 39). This underscores the importance of fostering psychological well-being among medical professionals, as this can translate into more compassionate and effective patient care. A recent study highlighted the significance of psychological factors and their relationship with the academic performance of veterinary students (40). This study suggests that psychological well-being not only impacts academic success but also has profound implications for professional interactions, including those in medical settings. By cultivating strong communication skills and empathetic attitudes, anesthesiologists can create a supportive and reassuring environment for patients, thereby enhancing their overall experience and reducing preoperative anxiety. The integration of psychological factors into our understanding of medical communication and patient satisfaction offers a more holistic perspective on the role of anesthesiologists in patient care. By recognizing the importance of psychological well-being and its impact on professional interactions, we can better appreciate the value of effective communication and empathetic attitudes in enhancing patient outcomes and improving the quality of care in anesthesia settings.

In our study, the anesthesiology residents’ level of satisfaction with the PAC simulation was high. This is in agreement with numerous data from the literature in different medical specialties (41–43). Specifically in anesthesia, Jindal et al. showed good acceptance of and satisfaction with OSCEs among residents during their studies (44). Despite this, feedback from our PAC simulation participants revealed a disruption in their usual consultation pattern, probably due to the neutral, rather than directive, written support provided. This choice was made so as not to introduce bias between participants by using a preexisting anesthesia form that might have been known to some of them. Moreover, PAC performance should not be dependent on the support used, which is in fact only a tool to help medical evaluation and remains only one component among others during PAC. Simmonds et al. suggested a slight improvement in the completeness of this evaluation when directive support was provided (45). Similarly, in 2002, Ausset et al., in 964 retrospective records, showed that standardizing patient evaluation with a written guide improves the completeness of the evaluation, without prejudging the impact on patient outcomes (46). However, our results show that the evaluation scores were rather low in the different groups and that the participants were not generally in difficulty on this point.

Although 125 anesthesiologists were included, the present study is not without limitations. First, our score grid was developed and validated by 5 senior anesthesiologists from the same university hospital, which could introduce an evaluation bias. Nevertheless, the PAC components included in our grid are in line with those proposed by the ASA Task Force on Preoperative Evaluation and by the Brazilian teachers’ college, and with those included in a literature review by Klafta and Roizen for information and relational aspects (1, 2, 47, 48). Thus, our grid integrates collecting the medical history, treatments and physical examination of the patient; targeting patient-adapted anesthetic strategy issues and information related to anesthesia; selecting additional tests to be performed depending on the patient and the surgery; informing the patient about the anesthetic process and its risks; obtaining informed consent; and positive communication and interaction with the patient. On the other hand, our PAC simulation meets the majority of quality criteria published by the AMEE, except for inter-grade discrimination. This weakness is most likely related to the presence of a few very low scores, which lead to a measurement bias and an increase in the linear regression slope between the two evaluation methods (the score grid versus the overall qualitative evaluation). According to the AMEE, these low scores could be excluded from the analyses in order to avoid an excessive impact on the qualitative criteria of the tool. This makes it possible to moderate the negative impact of this criterion on the overall quality of our simulation. Finally, as with all simulation tools, an important limit to the development and dissemination of OSCEs is the time, architecture and logistic preparation required for their implementation.

5 Conclusion

Standardized patient simulation of PAC seems to be a reliable tool to assess PAC performance in anesthesiology residents and senior anesthesiologists. These results suggest that standardized patient simulation could be used as a teaching tool for PAC. We used a single scenario, but the grid can of course be adapted to other clinical cases. This work proposes a grid that can serve as a basis for all schools of anesthesia to teach consultation, which remains a difficult exercise for residents. Standardized patient simulation therefore seems to be a highly relevant approach for improving residents’ performance in PAC, but this aspect will have to be validated by further studies.

Author’s note

Preliminary data for this study were presented as an oral communication under the reference R422 at the French Society of Anaesthesiology and Critical Care (SFAR) Congress, 22-24 September 2016, Paris.

Data availability statement

The original contributions presented in the study are included in the article/Supplementary material; further inquiries can be directed to the corresponding author.

Ethics statement

This research project received the approval of the Ethics Committee for Non-Interventional Research of Rouen University Hospital (N° E2014-18). The requirement for written informed consent was waived by the Committee. All participants were informed beforehand of the principle of the simulation session and its objectives, as well as its potential interest for training. Participants were also informed of the presence of an audio-visual system for real-time retransmission of the session, without recording or image processing, and of the presence of the professors. All participants gave oral agreement to participate. In the event of occasional image retention, written consent for use of the images was signed by each participant.

Author contributions

EB: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing. SF: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Resources, Writing – original draft. AL-S: Validation, Visualization, Writing – review & editing. TW: Methodology, Resources, Writing – review & editing. J-LH: Validation, Writing – review & editing. EA: Validation, Writing – review & editing. BD: Writing – review & editing. VC: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing.

Funding

The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.

Acknowledgments

The authors would like to thank Nikki Sabourin-Gibbs, Rouen University Hospital, for her help in editing the manuscript.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Abbreviations

OSCE, Objective structured clinical examination; PAC, Pre-anesthesia assessment clinic.

References

1. American Society of Anesthesiologists Task Force on Preanesthesia Evaluation. Practice advisory for preanesthesia evaluation: a report by the American Society of Anesthesiologists Task Force on Preanesthesia evaluation. Anesthesiology. (2002) 96:485–96. doi: 10.1097/00000542-200202000-00037

2. Klafta, JM, and Roizen, MF. Current understanding of patients’ attitudes toward and preparation for anesthesia: a review. Anesth Analg. (1996) 83:1314–21.

3. Holland, R. Anaesthetic mortality in New South Wales. Br J Anaesth. (1987) 59:834–41. doi: 10.1093/bja/59.7.834

4. Blitz, JD, Kendale, SM, Jain, SK, Cuff, GE, Kim, JT, and Rosenberg, AD. Preoperative evaluation clinic visit is associated with decreased risk of in-hospital postoperative mortality. Anesthesiology. (2016) 125:280–94. doi: 10.1097/ALN.0000000000001193

5. Haskins, IN, Amdur, R, and Vaziri, K. The effect of smoking on bariatric surgical outcomes. Surg Endosc. (2014) 28:3074–80. doi: 10.1007/s00464-014-3581-z

6. McKendrick, DRA, Cumming, GP, and Lee, AJ. A 5-year observational study of cancellations in the operating room: does the introduction of preoperative preparation have an impact? Saudi J Anaesth. (2014) 8:S8–S14. doi: 10.4103/1658-354X.144053

7. Roizen, MF. More preoperative assessment by physicians and less by laboratory tests. N Engl J Med. (2000) 342:204–5. doi: 10.1056/NEJM200001203420311

8. Kluger, MT, Tham, EJ, Coleman, NA, Runciman, WB, and Bullock, MF. Inadequate pre-operative evaluation and preparation: a review of 197 reports from the Australian incident monitoring study. Anaesthesia. (2000) 55:1173–8. doi: 10.1046/j.1365-2044.2000.01725.x

9. Barneschi, MG, Miccinesi, G, Marini, F, Bressan, F, and Paci, E. Informing patients about risks and complications of anaesthesia. Minerva Anestesiol. (2002) 68:818–23.

10. Soltner, C, Giquello, JA, Monrigal-Martin, C, and Beydon, L. Continuous care and empathic anaesthesiologist attitude in the preoperative period: impact on patient anxiety and satisfaction. Br J Anaesth. (2011) 106:680–6. doi: 10.1093/bja/aer034

11. Compère, V, Froemer, B, Clavier, T, Selim, J, Burey, J, Dureuil, B, et al. Evaluation of the duration of Preanesthesia consultation: prospective and multicenter study. Anesth Analg. (2022) 134:496–504. doi: 10.1213/ANE.0000000000005889

12. Lemmens, LC, van Klei, WA, Klazinga, NS, Rutten, CLG, van Linge, RH, Moons, KGM, et al. The effect of national guidelines on the implementation of outpatient preoperative evaluation clinics in Dutch hospitals. Eur J Anaesthesiol. (2006) 23:962–70. doi: 10.1017/S0265021506000895

13. Cyna, AM (2018). A GREAT interaction and the LAURS of communication in anesthesia. Available at: https://besarpp.adagiocreate.com/wp-content/uploads/2021/04/01-cina.pdf

14. Selim, J, Djerada, Z, Chaventre, C, Clavier, T, Dureuil, B, Besnier, E, et al. Preoperative analgesic instruction and prescription reduces early home pain after outpatient surgery: a randomized controlled trial. Can J Anaesth J Can Anesth. (2021)

15. Carrero, E, Gomar, C, Penzo, W, and Rull, M. Comparison between lecture-based approach and case/problem-based learning discussion for teaching pre-anaesthetic assessment. Eur J Anaesthesiol. (2007) 24:1008–15. doi: 10.1017/S0265021506002304

16. Franchina, S, Besnier, E, Veber, B, Dureuil, B, and Compère, V. Enseignement de la consultation préanesthésique: enquête nationale auprès des internes et des enseignants. Anesth Réanimation. (2019) 5:461–74. doi: 10.1016/j.anrea.2019.06.004

17. Harden, RM, Stevenson, M, Downie, WW, and Wilson, GM. Assessment of clinical competence using objective structured examination. Br Med J. (1975) 1:447–51. doi: 10.1136/bmj.1.5955.447

18. Elshama, S. How to use simulation in medical education. (2016). Available at: https://www.researchgate.net/publication/300095922_How_to_Use_Simulation_in_Medical_Education

19. Hodges, BD. The objective structured clinical examination: three decades of development. J Vet Med Educ. (2006) 33:571–7. doi: 10.3138/jvme.33.4.571

20. Daniels, VJ, and Pugh, D. Twelve tips for developing an OSCE that measures what you want. Med Teach. (2018) 40:1208–13. doi: 10.1080/0142159X.2017.1390214

21. Al-Hashimi, K, Said, UN, and Khan, TN. Formative objective structured clinical examinations (OSCEs) as an assessment tool in UK undergraduate medical education: a review of its utility. Cureus. (2023) 15:e38519. doi: 10.7759/cureus.38519

22. Khan, KZ, Gaunt, K, Ramachandran, S, and Pushkar, P. The objective structured clinical examination (OSCE): AMEE guide no. 81. Part II: organisation & administration. Med Teach. (2013) 35:e1447–63. doi: 10.3109/0142159X.2013.818635

23. Piumatti, G, Cerutti, B, and Perron, NJ. Assessing communication skills during OSCE: need for integrated psychometric approaches. BMC Med Educ. (2021) 21:106. doi: 10.1186/s12909-021-02552-8

24. Setyonugroho, W, Kennedy, KM, and Kropmans, TJB. Reliability and validity of OSCE checklists used to assess the communication skills of undergraduate medical students: a systematic review. Patient Educ Couns. (2015):00277–3.

25. Cömert, M, Zill, JM, Christalle, E, Dirmaier, J, Härter, M, and Scholl, I. Assessing communication skills of medical students in objective structured clinical examinations (OSCE) – a systematic review of rating scales. PLoS One. (2016) 11:e0152717. doi: 10.1371/journal.pone.0152717

26. Pell, G, Fuller, R, Homer, M, and Roberts, T. How to measure the quality of the OSCE: A review of metrics - AMEE guide no. 49, International Association for Medical Education, Med Teach. (2010) 32:802–11. doi: 10.3109/0142159X.2010.507716

27. Elshama, SS. How to apply simulation-based learning in medical education? Iberoam J Med. (2020) 2:79–86.

28. Hodges, B, Regehr, G, McNaughton, N, Tiberius, R, and Hanson, M. OSCE checklists do not capture increasing levels of expertise. Acad Med J Assoc Am Med Coll. (1999) 74:1129–34. doi: 10.1097/00001888-199910000-00017

29. Prislin, MD, Fitzpatrick, CF, Lie, D, Giglio, M, Radecki, S, and Lewis, E. Use of an objective structured clinical examination in evaluating student performance. Fam Med. (1998) 30:338–44.

30. Wallenstein, J, and Ander, D. Objective structured clinical examinations provide valid clinical skills assessment in emergency medicine education. West J Emerg Med. (2015) 16:121–6. doi: 10.5811/westjem.2014.11.22440

31. Lukas, RV, Adesoye, T, Smith, S, Blood, A, and Brorson, JR. Student assessment by objective structured examination in a neurology clerkship. Neurology. (2012) 79:681–5. doi: 10.1212/WNL.0b013e3182648ba1

32. Hawkins, RJ, Swanson, B, and Kremer, MJ. An integrative review of factors related to patient satisfaction with general anesthesia care. AORN J. (2012) 96:368–76. doi: 10.1016/j.aorn.2012.07.015

33. Lozada, MJ, Nguyen, JTC, Abouleish, A, Prough, D, and Przkora, R. Patient preference for the pre-anesthesia evaluation: telephone versus in-office assessment. J Clin Anesth. (2016) 31:145–8. doi: 10.1016/j.jclinane.2015.12.040

34. Bondy, LR, Sims, N, Schroeder, DR, Offord, KP, and Narr, BJ. The effect of anesthetic patient education on preoperative patient anxiety. Reg Anesth Pain Med. (1999) 24:158–64.

35. Collins, LG, Schrimmer, A, Diamond, J, and Burke, J. Evaluating verbal and non-verbal communication skills, in an ethnogeriatric OSCE. Patient Educ Couns. (2011) 83:158–62. doi: 10.1016/j.pec.2010.05.012

36. Ishikawa, H, Hashimoto, H, Kinoshita, M, Fujimori, S, Shimizu, T, and Yano, E. Evaluating medical students’ non-verbal communication during the objective structured clinical examination. Med Educ. (2006) 40:1180–7. doi: 10.1111/j.1365-2929.2006.02628.x

37. O’Sullivan, P, Chao, S, Russell, M, Levine, S, and Fabiny, A. Development and implementation of an objective structured clinical examination to provide formative feedback on communication and interpersonal skills in geriatric training. J Am Geriatr Soc. (2008) 56:1730–5. doi: 10.1111/j.1532-5415.2008.01860.x

38. Kumar, M, and Chawla, R. Communication skills and anesthesiologists. Anesth Essays Res. (2013) 7:145–6. doi: 10.4103/0259-1162.118938

39. Harms, C, Young, JR, Amsler, F, Zettler, C, Scheidegger, D, and Kindler, CH. Improving anaesthetists’ communication skills. Anaesthesia. (2004) 59:166–72. doi: 10.1111/j.1365-2044.2004.03528.x

40. Muca, E, Molino, M, Ghislieri, C, Baratta, M, Odore, R, Bergero, D, et al. Relationships between psychological characteristics, academic fit and engagement with academic performance in veterinary medical students. BMC Vet Res. (2023) 19:132. doi: 10.1186/s12917-023-03695-0

41. Khosravi Khorashad, A, Salari, S, Baharvahdat, H, Hejazi, S, Lari, SM, Salari, M, et al. The assessment of undergraduate medical students’ satisfaction levels with the objective structured clinical examination. Iran Red Crescent Med J. (2014) 16:e13088. doi: 10.5812/ircmj.13088

42. Nasir, AA, Yusuf, AS, Abdur-Rahman, LO, Babalola, OM, Adeyeye, AA, Popoola, AA, et al. Medical students’ perception of objective structured clinical examination: a feedback for process improvement. J Surg Educ. (2014) 71:701–6. doi: 10.1016/j.jsurg.2014.02.010

43. Raheel, H, and Naeem, N. Assessing the objective structured clinical examination: Saudi family medicine undergraduate medical students’ perceptions of the tool. JPMA J Pak Med Assoc. (2013) 63:1281–4.

44. Jindal, P, and Khurana, G. The opinion of post graduate students on objective structured clinical examination in Anaesthesiology: a preliminary report. Indian J Anaesth. (2016) 60:168–73. doi: 10.4103/0019-5049.177869

45. Simmonds, M, and Petterson, J. Anaesthetists’ records of pre-operative assessment. Clin Perform Qual Health Care. (2000) 8:22–7. doi: 10.1108/14664100010332964

46. Ausset, S, Bouaziz, H, Brosseau, M, Kinirons, B, and Benhamou, D. Improvement of information gained from the pre-anaesthetic visit through a quality-assurance programme. Br J Anaesth. (2002) 88:280–3. doi: 10.1093/bja/88.2.280

47. Committee on Standards and Practice Parameters, Apfelbaum, JL, Connis, RT, Nickinovich, DG, American Society of Anesthesiologists Task Force on Preanesthesia Evaluation, Pasternak, LR, et al. Practice advisory for preanesthesia evaluation: an updated report by the American Society of Anesthesiologists Task Force on preanesthesia evaluation. Anesthesiology. (2012) 116:522–38. doi: 10.1097/ALN.0b013e31823c1067

48. De Oliveira Filho, GR, and Schonhorst, L. The development and application of an instrument for assessing resident competence during preanesthesia consultation. Anesth Analg. (2004) 99:62–9. doi: 10.1213/01.ANE.0000121770.64938.3B

Keywords: simulation, preanesthesia assessment, simulated patients, anesthesia, consultation

Citation: Besnier E, Franchina S, Lefevre-Scelles A, Wable T, Hanouz J-L, Allard E, Dureuil B and Compère V (2024) Evaluating pre-anesthesia assessment performance in residency: the reliability of standardized patient methods. Front. Med. 11:1342004. doi: 10.3389/fmed.2024.1342004

Received: 21 December 2023; Accepted: 26 June 2024;
Published: 01 August 2024.

Edited by:

Damiano Cavallini, University of Bologna, Italy

Reviewed by:

Dervla Kelly, University of Limerick, Ireland
Edlira Muca, University of Turin, Italy

Copyright © 2024 Besnier, Franchina, Lefevre-Scelles, Wable, Hanouz, Allard, Dureuil and Compère. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Vincent Compère, Vincent.Compere@chu-rouen.fr
