Strategies to support self-regulated learning in integrated, student-centered curricula
PURPOSE/UNASSIGNED:With undergraduate medical education shifting to an integrated, student-centered approach, self-regulated learning (SRL) skills are critical for student success. Educational research holds that learning strategy effectiveness is context dependent. Our study aims to explore what strategies medical students use to support SRL when engaged in the specific context of an integrated, student-centered curriculum. APPROACH/UNASSIGNED:This study took place in two medical schools with integrated, student-centered curricula. Semi-structured interviews were conducted with first-year medical students from both schools, asking them to reflect on the learning strategies they used throughout their first year of medical school. Interview data were analyzed first deductively, using the SRL framework, and then inductively to understand the specific strategies being used. FINDINGS/UNASSIGNED:Students engaged in strategies to support SRL in ways that were unique to the integrated, student-centered context. We found that medical students developed strategies to plan for integration and to build connections across material during all three phases of self-regulated learning. INSIGHTS/UNASSIGNED:By identifying specific tasks and behaviors students utilized during their first year of medical school, this study provides a roadmap that students and educators can use to help students become self-regulated learners.
Exploring the impact of postponing core clerkships on future performance
Despite the many clerkship models of medical education, all can be considered a form of experiential learning. Experiential learning is a complex pedagogical approach involving the development of cognitive skills in an environment with a unique culture and multiple stakeholders, which may impact learner motivation, confidence, and other noncognitive drivers of success. Students may delay the transition to the clerkship year for myriad reasons, and the intricate nature of experiential learning suggests that such a delay may affect student performance. This retrospective, observational study investigated the impact of clerkship postponement by measuring subsequent clerkship performance. Pre-clerkship and third-year clerkship performance were analyzed for three cohorts of students (classes of 2018, 2019, and 2020, N = 274) in which students had the option to delay the start of their clerkship year. A mixed analysis of variance (ANOVA) and paired t-tests were conducted to compare academic performance over time among students who did and did not delay. Across the three cohorts, 12% of students delayed the start of the clerkship year (N = 33). Regardless of prior academic performance, these students experienced a significant reduction in clerkship grades compared to their non-delaying peers. Delaying the start of the clerkship year may have durable negative effects on future academic performance. Advisors should keep this in mind when counseling students.
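The analysis described above, paired t-tests plus a mixed ANOVA comparing performance over time between delaying and non-delaying students, can be sketched as follows. This is a minimal illustration on simulated data: all grades and effect sizes are invented, and a between-group test on change scores stands in for the group-by-time interaction of a full mixed ANOVA.

```python
# Illustrative sketch (simulated data, not the study's): within-group change
# via paired t-tests, and a change-score comparison between groups as a
# simple proxy for the mixed ANOVA's group-by-time interaction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical grades on a 0-100 scale (values invented for illustration)
non_delay_pre = rng.normal(80, 5, 241)
non_delay_clerk = non_delay_pre + rng.normal(0, 3, 241)   # no systematic drop
delay_pre = rng.normal(80, 5, 33)
delay_clerk = delay_pre + rng.normal(-4, 3, 33)           # simulated decline

# Within-group change over time for the delaying students
t_delay, p_delay = stats.ttest_rel(delay_clerk, delay_pre)

# Between-group comparison of change scores (interaction proxy)
t_int, p_int = stats.ttest_ind(delay_clerk - delay_pre,
                               non_delay_clerk - non_delay_pre)
print(f"delay-group change: t={t_delay:.2f}, p={p_delay:.4f}")
print(f"group x time (change-score) test: t={t_int:.2f}, p={p_int:.4f}")
```

A dedicated mixed-ANOVA routine (e.g., in a package that offers one) would test the same interaction directly; the change-score t-test is the simplest equivalent for a two-timepoint design.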
A Multi-institutional Study of the Feasibility and Reliability of the Implementation of Constructed Response Exam Questions
PROBLEM/UNASSIGNED:Some medical schools have incorporated constructed response short answer questions (CR-SAQs) into their assessment toolkits. Although CR-SAQs carry benefits for medical students and educators, the faculty perception that creating and scoring CR-SAQs takes an infeasible amount of time, along with concerns about scoring reliability, may impede the use of this assessment type in medical education. INTERVENTION/UNASSIGNED:A reliability coefficient was used to evaluate inter-rater reliability of CR-SAQ scoring. CONTEXT/UNASSIGNED:This research study was implemented at three US medical schools that are nationally dispersed and have been administering CR-SAQ summative exams as part of their programs of assessment for at least five years. The study exam question was included in an end-of-course summative exam during the first year of medical school. IMPACT/UNASSIGNED:Inter-rater reliability coefficients ranged from .59 to .66 with the analytic rubric. LESSONS LEARNED/UNASSIGNED:Our findings show that, from the faculty perspective, it is feasible to include CR-SAQs in summative exams, and we provide practical information for medical educators creating and scoring CR-SAQs. We also learned that CR-SAQs can be reliably scored by faculty without content expertise or by senior medical students using an analytic rubric, or by senior medical students using a holistic rubric, which provides options to alleviate the faculty burden associated with grading CR-SAQs.
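Inter-rater reliability of categorical rubric scores, as evaluated above, is often summarized with an agreement coefficient; Cohen's kappa is one common choice. The sketch below computes it from scratch on invented rater scores (the abstract's actual statistic and data are not reproduced here).

```python
# Hedged sketch: Cohen's kappa for two raters' rubric scores.
# All scores below are invented for illustration.
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa: agreement between two raters corrected for chance."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    ca, cb = Counter(a), Counter(b)
    cats = set(ca) | set(cb)
    p_e = sum((ca[c] / n) * (cb[c] / n) for c in cats)   # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Two raters' analytic-rubric scores (0-3 points) for ten hypothetical answers
rater_a = [3, 2, 2, 1, 0, 3, 2, 1, 3, 0]
rater_b = [3, 2, 1, 1, 0, 3, 2, 2, 3, 0]
print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")  # -> kappa = 0.73
```

For ordinal rubric points, a weighted kappa or an intraclass correlation coefficient would credit near-agreement as well; the unweighted version shown here is the simplest case.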
What Behaviors Define a Good Physician? Assessing and Communicating About Noncognitive Skills
Once medical students attain a certain level of medical knowledge, success in residency often depends on noncognitive attributes, such as conscientiousness, empathy, and grit. These traits are significantly more difficult to assess than cognitive performance, creating a potential gap in measurement. Despite its promise, competency-based medical education (CBME) has yet to bridge this gap, partly due to a lack of well-defined noncognitive observable behaviors that assessors and educators can use in formative and summative assessment. As a result, typical undergraduate to graduate medical education handovers stress standardized test scores, and program directors trust little of the remaining information they receive, sometimes turning to third-party companies to better describe potential residency candidates. The authors have created a list of noncognitive attributes, with associated definitions and noncognitive skills, called observable practice activities (OPAs), written for learners across the continuum to help educators collect assessment data that can be turned into valuable information. OPAs are discrete work-based assessment elements collected over time and mapped to larger structures, such as milestones, entrustable professional activities, or competencies, to create learning trajectories for formative and summative decisions. Medical schools and graduate medical education programs could adapt these OPAs or determine ways to create new ones specific to their own contexts. Once OPAs are created, programs will have to find effective ways to assess them, interpret the data, determine consequence validity, and communicate information to learners and institutions. The authors discuss the need for culture change surrounding assessment, even for the adoption of behavior-based tools such as OPAs, including grounding the work in a growth mindset and the broad underpinnings of CBME.
Ultimately, improving assessment of noncognitive capacity should benefit learners, schools, programs, and most importantly, patients.
A Generalizable Approach to Predicting Performance on USMLE Step 2 CK
INTRODUCTION/UNASSIGNED:The elimination of the USMLE Step 1 three-digit score has created a deficit in standardized performance metrics for undergraduate medical educators and residency program directors. It is likely that there will be greater emphasis on USMLE Step 2 CK, an exam found to be associated with later clinical performance in residents and physicians. Because many previous models relied on Step 1 scores to predict student performance on Step 2 CK, we developed a model using other metrics. MATERIALS AND METHODS/UNASSIGNED:Assessment data for 228 students in three cohorts (classes of 2018, 2019, and 2020) were collected, including the Medical College Admission Test (MCAT), NBME Customized Assessment Service (CAS) exams, and NBME Subject exams. A linear regression model was fit to predict Step 2 CK scores at five time-points: at the end of years one and two and at three trimester intervals in year three. An additional cohort (class of 2021) was used to validate the model. RESULTS/UNASSIGNED:The final model achieved a fit statistic of 0.62. Including Step 1 scores did not significantly improve the final model. Using metrics from the class of 2021, the model predicted Step 2 CK performance within a mean square error (MSE) of 8.3 points (SD = 6.8) at the end of year 1, increasing predictability incrementally to within a mean of 5.4 points (SD = 4.1) by the end of year 3. CONCLUSION/UNASSIGNED:This model is highly generalizable and enables medical educators to predict student performance on Step 2 CK in the absence of Step 1 quantitative data as early as the end of the first year of medical education, with increasingly stronger predictions as students progress through the clerkship year.
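The pipeline described, fitting a linear regression on earlier cohorts' metrics and checking point-level prediction error on a held-out cohort, can be sketched as follows. All predictors, coefficients, and scores are simulated; ordinary least squares via numpy stands in for whatever statistical software the authors used.

```python
# Illustrative sketch (simulated data, not the study's): predict Step 2 CK
# from earlier metrics (e.g., MCAT and an NBME subject-exam average), then
# validate on a held-out cohort by measuring average error in points.
import numpy as np

rng = np.random.default_rng(1)

def simulate(n):
    """Generate hypothetical (intercept, MCAT, NBME-average) -> Step 2 CK data."""
    mcat = rng.normal(510, 6, n)
    nbme = rng.normal(75, 8, n)
    step2 = 245 + 0.8 * (mcat - 510) + 1.2 * (nbme - 75) + rng.normal(0, 5, n)
    return np.column_stack([np.ones(n), mcat, nbme]), step2

X_train, y_train = simulate(228)   # three training cohorts combined
X_val, y_val = simulate(80)        # held-out validation cohort

coef, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)  # OLS fit
pred = X_val @ coef
mae = np.mean(np.abs(pred - y_val))
print(f"mean absolute prediction error: {mae:.1f} points")
```

In practice one would refit at each time-point as new assessment data accrue, which is how the increasing precision across years would arise.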
Review of the Medical Student Performance Evaluation: analysis of the end-users' perspective across the specialties
The Medical Student Performance Evaluation (MSPE) is an important tool of communication used by program directors to make decisions in the residency application process. This study aimed to understand the perspective and usage of the MSPE across multiple medical specialties, both now and in anticipation of the planned changes in USMLE Step 1 score reporting. A survey instrument including quantitative and qualitative measures was developed and piloted. The final survey was distributed to residency programs across 28 specialties in 2020 via the main contact on the ACGME listserv. Of the 28 specialties surveyed, at least one response was received from 26 (93%). Eight percent of all programs (364/4675) responded to the survey, with most respondents being program directors. Usage of the MSPE varied among specialties. Approximately one-third of end-users stated that the MSPE is very or extremely influential in their initial screening process. Slightly less than half agreed or strongly agreed that they trust the information to be an accurate representation of applicants, though slightly more than half agreed that the MSPE will become more influential once USMLE Step 1 becomes pass/fail. Professionalism was rated as the most important component and noteworthy characteristics among the least important in the decision-making process. Performance in the internal medicine clerkship was rated as the most influential, while neurology and psychiatry performances were rated as less influential. Overwhelmingly, respondents suggested that including comparative performance and/or class rank would make the MSPE more useful once USMLE Step 1 becomes pass/fail. MSPE end-users across a variety of specialties utilize this complex document in different ways and value it differentially in their decision-making processes. Yet mistrust of the MSPE persists.
A better understanding of end-users' perceptions of the MSPE offers the UME community an opportunity to transform the MSPE into a highly valued, trusted document of communication.
Can Content Experts Rely on Others to Reliably Score Open-Ended Questions on Summative Exams?
Current State of the Medical Student Performance Evaluation: A Tool for Reflection for Residency Programs
BACKGROUND:The Medical Student Performance Evaluation (MSPE) provides important information to residency programs. Despite recent recommendations for standardization, it is not clear how much variation exists in MSPE content among schools. OBJECTIVE:We describe the current section content of the MSPE in US allopathic medical schools, with a particular focus on variations in the presentation of student performance. METHODS:A representative MSPE was obtained from 95.3% (143 of 150) of allopathic US medical schools through residency applications to the Zucker School of Medicine at Hofstra/Northwell in select programs for the 2019-2020 academic year. A manual data abstraction tool was piloted in 2018-2019. After training, it was used to code all portions of the MSPEs in this study. The results were analyzed, and descriptive statistics were reported. RESULTS:For the preclinical years, 30.8% of MSPEs reported data regarding performance of students beyond achieving "passes" in a pass/fail curriculum. Only half referenced performance in the fourth year, including electives, acting internships, or both. About two-thirds of schools included an overall descriptor of comparative performance in the final paragraph. Among these schools, a majority provided adjectives such as "outstanding/excellent/very good/good," while one-quarter reported numerical data categories. Regarding clerkship grades, there were numerous nomenclature systems in use. CONCLUSIONS:This analysis demonstrates extreme variability in the content of MSPEs submitted by US allopathic medical schools in the 2019-2020 cycle, including the components and nomenclature of grades and descriptors of comparative performance, display of data, and inclusion of data across all years of the medical education program.
Changing Medical Education, Overnight: The Curricular Response to COVID-19 of Nine Medical Schools
Issue: Calls to change medical education have been frequent, persistent, and generally limited to alterations in content or structural re-organization. Self-imposed barriers have prevented adoption of more radical pedagogical approaches, so recent predictions of the 'inevitability' of medical education transitioning to online delivery seemed unlikely. Then in March 2020 the COVID-19 pandemic forced medical schools to overcome established barriers overnight and make the most rapid curricular shift in medical education's history. We share the collated reports of nine medical schools and postulate how recent responses may influence future medical education. Evidence: While extraneous pandemic-related factors make it impossible to scientifically distinguish the impact of the curricular changes, some themes emerged. The rapid transition to online delivery was made possible by all schools having learning management systems and key electronic resources already blended into their curricula; we were closer to online delivery than anticipated. Student engagement with online delivery varied with the different pedagogies used, and the importance of social learning, interaction, and autonomy in learning was apparent. These are factors known to enhance online learning, and the student-centered modalities (e.g. problem-based learning) that included them appeared to be more engaging. Assumptions that the new online environment would be easily adopted and embraced by 'technophilic' students did not always hold true. Achieving true distance medical education will take longer than this 'overnight' response, but adhering to best practices for online education may open a new realm of possibilities. Implications: While this experience did not confirm that online medical education is really 'inevitable,' it revealed that it is possible.
Thoughtfully blending more online components into a medical curriculum will allow us to take advantage of this environment's strengths, such as efficiency and the ability to support asynchronous and autonomous learning that engages and fosters intrinsic learning in our students. While maintaining aspects of social interaction, online learning could enhance pre-clinical medical education by allowing integration and collaboration among classes of medical students, other health professionals, and even between medical schools. What remains to be seen is whether COVID-19 provided the experience, vision, and courage for medical education to change, or whether the old barriers will rise again when the pandemic is over.
Third year medical students impersonalize and hedge when providing negative upward feedback to clinical faculty
Medical students provide clinical teaching faculty with feedback on their skills as educators through anonymous surveys at the end of their clerkship rotation. Because faculty are in a position of power, students are hesitant to provide candid feedback. Our objective was to determine if medical students were willing to provide negative upward feedback to clinical faculty and to describe how they conveyed their feedback. A qualitative analysis of third-year medical students' open-ended comments from evaluations of six clerkships was performed using politeness theory as a conceptual framework. Students were asked to describe how the clerkship enhanced their learning and how it could be improved. Midway through the academic year, instructions to provide full names of faculty/residents were added. Overall, there were significantly more comments on what worked well than suggestions for improvement regarding faculty/residents. Instructing students to name names increased the rate of naming from 35% to 75% for what worked well and from 13% to 39% for suggestions for improvement. Hedging language was included in 61% of suggestions for improvement, but in only 2% of what worked well. Students described the variability of their experience, used passive language, and qualified negative experiences with positive ones. Medical students may use linguistic strategies, such as impersonalizing and hedging, to mitigate the impact of negative upward feedback. Working towards a culture that supports upward feedback would allow students to feel more comfortable providing candid comments about their experience.
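The jump in naming rates reported above (35% to 75% for comments on what worked well) could be checked with a two-proportion z-test. The comment counts below are invented to match the reported rates, since the abstract gives percentages only.

```python
# Hedged sketch: two-proportion z-test for a change in naming rate.
# Counts are hypothetical; only the 35% and 75% rates come from the text.
from statistics import NormalDist

def two_prop_z(x1, n1, x2, n2):
    """z statistic and two-sided p-value for comparing two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = (p * (1 - p) * (1 / n1 + 1 / n2)) ** 0.5   # pooled standard error
    z = (p2 - p1) / se
    return z, 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical comment counts before/after the naming instruction
z, p = two_prop_z(x1=70, n1=200, x2=150, n2=200)    # 35% vs 75%
print(f"z = {z:.2f}, p = {p:.2g}")
```

With any plausible comment volume, a 40-point rate difference is far outside chance, which is consistent with the study's finding that the instruction changed student behavior.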