Searched for: in-biosketch:true; person:burkrj01
Total Results: 40


Identifying Meaningful Patterns of Internal Medicine Clerkship Grading Distributions: Application of Data Science Techniques Across 135 U.S. Medical Schools

Burk-Rafel, Jesse; Reinstein, Ilan; Park, Yoon Soo
PROBLEM/OBJECTIVE: Residency program directors use clerkship grades for high-stakes selection decisions despite substantial variability in grading systems and distributions. The authors apply clustering techniques from data science to identify groups of schools for which grading distributions were statistically similar in the internal medicine clerkship. APPROACH/METHODS: Grading systems (e.g., honors/pass/fail) and distributions (i.e., percent of students in each grade tier) were tabulated for the internal medicine clerkship at U.S. MD-granting medical schools by manually reviewing Medical Student Performance Evaluations (MSPEs) in the 2019 and 2020 residency application cycles. Grading distributions were analyzed using k-means cluster analysis, with the optimal number of clusters selected using model fit indices. OUTCOMES/RESULTS: Among the 145 medical schools with available MSPE data, 64 distinct grading systems were reported. Among the 135 schools reporting a grading distribution, the median percent of students receiving the highest and lowest tier grade was 32% (range: 2%-66%) and 2% (range: 0%-91%), respectively. A four-cluster solution was optimal (η² = 0.8): cluster 1 (45% [highest grade tier]-45% [middle tier]-10% [lowest tier], n = 64 [47%] schools), cluster 2 (25%-30%-45%, n = 40 [30%] schools), cluster 3 (20%-75%-5%, n = 25 [19%] schools), and cluster 4 (15%-25%-25%-25%-10%, n = 6 [4%] schools). The findings suggest internal medicine clerkship grading systems may be more comparable across institutions than previously thought. NEXT STEPS/CONCLUSIONS: The authors will prospectively review reported clerkship grading approaches across additional specialties and are conducting a mixed-methods analysis, incorporating a sequential explanatory model, to interview stakeholder groups on the use of the patterns identified.
PMID: 36484555
ISSN: 1938-808X
CID: 5378842
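
A minimal illustration of the clustering approach described in the abstract above: k-means over a matrix of per-school grade-tier percentages, with the number of clusters chosen by a variance-explained index akin to the reported η². This is a hedged sketch, not the authors' code; the toy distributions, the fixed three-tier representation, and the use of scikit-learn are assumptions for illustration (the study also handled schools with more grade tiers).

# Illustrative sketch only (not the study's code): k-means over grading
# distributions, with k chosen by the share of variance explained (eta^2).
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical data: each row is one school's percent of students in the
# [highest, middle, lowest] grade tiers.
distributions = np.array([
    [45, 45, 10],
    [25, 30, 45],
    [20, 75, 5],
    [50, 40, 10],
    [22, 33, 45],
], dtype=float)

total_ss = ((distributions - distributions.mean(axis=0)) ** 2).sum()
for k in range(2, 5):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(distributions)
    eta_sq = 1 - km.inertia_ / total_ss  # between-cluster share of total variance
    print(f"k={k}: eta^2={eta_sq:.2f}")

On the real data this loop would run over all 135 reporting schools, retaining the number of clusters that the fit index favors.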

The Undergraduate to Graduate Medical Education Transition as a Systems Problem: A Root Cause Analysis

Swails, Jennifer L; Angus, Steven; Barone, Michael A; Bienstock, Jessica; Burk-Rafel, Jesse; Roett, Michelle A; Hauer, Karen E
The transition from undergraduate medical education (UME) to graduate medical education (GME) constitutes a complex system with important implications for learner progression and patient safety. The transition is currently dysfunctional, requiring students and residency programs to spend significant time, money, and energy on the process. Applications and interviews continue to increase despite stable match rates. Although many in the medical community acknowledge the problems with the UME-GME transition and learners have called for prompt action to address these concerns, the underlying causes are complex and have defied easy fixes. This article describes the work of the Coalition for Physician Accountability's Undergraduate Medical Education to Graduate Medical Education Review Committee (UGRC) to apply a quality improvement approach and systems thinking to explore the underlying causes of dysfunction in the UME-GME transition. The UGRC performed a root cause analysis using the 5 whys and an Ishikawa (or fishbone) diagram to deeply explore problems in the UME-GME transition. The root causes of problems identified include culture, costs and limited resources, bias, systems, lack of standards, and lack of alignment. Using the principles of systems thinking (components, connections, and purpose), the UGRC considered interactions among the root causes and developed recommendations to improve the UME-GME transition. Several of the UGRC's recommendations stemming from this work are explained. Sustained monitoring will be necessary to ensure interventions move the process forward to better serve applicants, programs, and the public good.
PMID: 36538695
ISSN: 1938-808X
CID: 5426192

Reimagining the Transition to Residency: A Trainee Call to Accelerated Action

Lin, Grant L; Guerra, Sylvia; Patel, Juhee; Burk-Rafel, Jesse
The transition from medical student to resident is a pivotal step in the medical education continuum. For applicants, successfully obtaining a residency position is the actualization of a dream after years of training and has life-changing professional and financial implications. These high stakes contribute to a residency application and Match process in the United States that is increasingly complex and dysfunctional, and that does not effectively serve applicants, residency programs, or the public good. In July 2020, the Coalition for Physician Accountability (Coalition) formed the Undergraduate Medical Education-Graduate Medical Education Review Committee (UGRC) to critically assess the overall transition to residency and offer recommendations to solve the growing challenges in the system. In this Invited Commentary, the authors reflect on their experience as the trainee representatives on the UGRC. They emphasize the importance of trainee advocacy in medical education change efforts; reflect on opportunities, concerns, and tensions with the final UGRC recommendations (released in August 2021); discuss factors that may constrain implementation; and call for the medical education community, and the Coalition member organizations in particular, to accelerate full implementation of the UGRC recommendations. By seizing the momentum created by the UGRC, the medical education community can create a reimagined transition to residency that reshapes its approach to training a more diverse, competent, and growth-oriented physician workforce.
PMID: 35263298
ISSN: 1938-808X
CID: 5220952

TRainee Attributable & Automatable Care Evaluations in Real-time (TRACERs): A Scalable Approach for Linking Education to Patient Care

Burk-Rafel, Jesse; Sebok-Syer, Stefanie S; Santen, Sally A; Jiang, Joshua; Caretta-Weyer, Holly A; Iturrate, Eduardo; Kelleher, Matthew; Warm, Eric J; Schumacher, Daniel J; Kinnear, Benjamin
Competency-based medical education (CBME) is an outcomes-based approach to education and assessment that focuses on what competencies trainees need to learn in order to provide effective patient care. Despite this goal of providing quality patient care, trainees rarely receive measures of their clinical performance. This is problematic because defining a trainee's learning progression requires measuring their clinical performance. Traditional clinical performance measures (CPMs) are often met with skepticism from trainees given their poor individual-level attribution. Resident-sensitive quality measures (RSQMs) are attributable to individuals, but lack the expeditiousness needed to deliver timely feedback and can be difficult to automate at scale across programs. In this eye opener, the authors present a conceptual framework for a new type of measure - TRainee Attributable & Automatable Care Evaluations in Real-time (TRACERs) - attuned to both automation and trainee attribution as the next evolutionary step in linking education to patient care. TRACERs have five defining characteristics: meaningful (for patient care and trainees), attributable (sufficiently to the trainee of interest), automatable (minimal human input once fully implemented), scalable (across electronic health records [EHRs] and training environments), and real-time (amenable to formative educational feedback loops). Ideally, TRACERs optimize all five characteristics to the greatest degree possible. TRACERs are uniquely focused on measures of clinical performance that are captured in the EHR, whether routinely collected or generated using sophisticated analytics, and are intended to complement (not replace) other sources of assessment data. TRACERs have the potential to contribute to a national system of high-density, trainee-attributable, patient-centered outcome measures.
PMCID: 10198229
PMID: 37215538
ISSN: 2212-277X
CID: 5503722
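
The five TRACER characteristics above lend themselves to a simple screening checklist. The sketch below is purely hypothetical: the class name, fields, and qualifies() helper are not from the paper; it only shows one way a program might record whether a candidate EHR-derived measure satisfies each characteristic.

# Hypothetical structure; field names mirror the five TRACER characteristics.
from dataclasses import dataclass

@dataclass
class TracerCandidate:
    name: str
    meaningful: bool    # matters for patient care and for trainees
    attributable: bool  # sufficiently linked to the trainee of interest
    automatable: bool   # minimal human input once fully implemented
    scalable: bool      # portable across EHRs and training environments
    real_time: bool     # fast enough for formative feedback loops

    def qualifies(self) -> bool:
        return all([self.meaningful, self.attributable, self.automatable,
                    self.scalable, self.real_time])

# Example: screening a hypothetical measure.
measure = TracerCandidate("time-to-discharge-summary", True, True, True, True, False)
print(measure.qualifies())  # False: not yet real-time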

Medical Student Well-Being While Studying for the USMLE Step 1: The Impact of a Goal Score

Rashid, Hanin; Runyon, Christopher; Burk-Rafel, Jesse; Cuddy, Monica M; Dyrbye, Liselotte; Arnhart, Katie; Luciw-Dubas, Ulana; Mechaber, Hilit F; Lieberman, Steve; Paniagua, Miguel
PMID: 37460518
ISSN: 1938-808X
CID: 5535552

Toward (More) Valid Comparison of Residency Applicants' Grades: Cluster Analysis of Clerkship Grade Distributions Across 135 U.S. MD-granting Medical Schools

Burk-Rafel, Jesse; Reinstein, Ilan; Park, Yoon Soo
PMID: 37460502
ISSN: 1938-808X
CID: 5535542

Medical Student Well-Being While Studying for the USMLE Step 1: The Impact of a Goal Score

Rashid, Hanin; Runyon, Christopher; Burk-Rafel, Jesse; Cuddy, Monica M; Dyrbye, Liselotte; Arnhart, Katie; Luciw-Dubas, Ulana; Mechaber, Hilit F; Lieberman, Steve; Paniagua, Miguel
PMID: 36287705
ISSN: 1938-808X
CID: 5358032

Toward (More) Valid Comparison of Residency Applicants' Grades: Cluster Analysis of Clerkship Grade Distributions Across 135 U.S. MD-granting Medical Schools

Burk-Rafel, Jesse; Reinstein, Ilan; Park, Yoon Soo
PMID: 36287686
ISSN: 1938-808X
CID: 5358022

Development and Validation of a Machine Learning Model for Automated Assessment of Resident Clinical Reasoning Documentation

Schaye, Verity; Guzman, Benedict; Burk-Rafel, Jesse; Marin, Marina; Reinstein, Ilan; Kudlowitz, David; Miller, Louis; Chun, Jonathan; Aphinyanaphongs, Yindalon
BACKGROUND: Residents receive infrequent feedback on their clinical reasoning (CR) documentation. While machine learning (ML) and natural language processing (NLP) have been used to assess CR documentation in standardized cases, no studies have described similar use in the clinical environment. OBJECTIVE: The authors developed an ML model for automated assessment of CR documentation quality in residents' admission notes and validated it using Kane's framework. DESIGN, PARTICIPANTS, MAIN MEASURES: Internal medicine residents' and subspecialty fellows' admission notes at one medical center from July 2014 to March 2020 were extracted from the electronic health record. Using a validated CR documentation rubric, the authors rated 414 notes for the ML development dataset. Notes were truncated to isolate the relevant portion; NLP software (cTAKES) extracted disease/disorder named entities, and human review generated CR terms. The final model had three input variables and classified notes as demonstrating low- or high-quality CR documentation. The ML model was applied to a retrospective dataset (9591 notes) for human validation and data analysis. Reliability between human and ML ratings was assessed on 205 of these notes with Cohen's kappa. CR documentation quality by post-graduate year (PGY) was evaluated by the Mantel-Haenszel test of trend. KEY RESULTS: The top-performing logistic regression model had an area under the receiver operating characteristic curve of 0.88, a positive predictive value of 0.68, and an accuracy of 0.79. Cohen's kappa was 0.67. Of the 9591 notes, 31.1% demonstrated high-quality CR documentation; quality increased from 27.0% (PGY1) to 31.0% (PGY2) to 39.0% (PGY3) (p < .001 for trend). Validity evidence was collected in each domain of Kane's framework (scoring, generalization, extrapolation, and implications). CONCLUSIONS: The authors developed and validated a high-performing ML model that classifies CR documentation quality in resident admission notes in the clinical environment, a novel application of ML and NLP with many potential use cases.
PMCID: 9296753
PMID: 35710676
ISSN: 1525-1497
CID: 5277902
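
To make the pipeline in the abstract above concrete, here is a hedged sketch of a three-feature logistic regression classifier evaluated with ROC AUC and Cohen's kappa, the metrics the study reports. The features, toy data, and train/test split are placeholders; the published model used variables derived from cTAKES named entities and clinical reasoning terms, which are not reproduced here.

# Illustrative sketch, not the published model: a three-feature logistic
# regression labeling notes as low- vs high-quality CR documentation,
# evaluated with ROC AUC and Cohen's kappa.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, cohen_kappa_score

rng = np.random.default_rng(0)
X = rng.random((414, 3))               # placeholder features (e.g., entity/term counts per note)
y = (X.sum(axis=1) > 1.5).astype(int)  # toy label: 1 = high-quality documentation

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
kappa = cohen_kappa_score(y_test, model.predict(X_test))  # stand-in for human vs. model agreement
print(f"AUC={auc:.2f}, kappa={kappa:.2f}")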

Development of a Clinical Reasoning Documentation Assessment Tool for Resident and Fellow Admission Notes: a Shared Mental Model for Feedback

Schaye, Verity; Miller, Louis; Kudlowitz, David; Chun, Jonathan; Burk-Rafel, Jesse; Cocks, Patrick; Guzman, Benedict; Aphinyanaphongs, Yindalon; Marin, Marina
BACKGROUND: Residents and fellows receive little feedback on their clinical reasoning documentation. Barriers include lack of a shared mental model and variability in the reliability and validity of existing assessment tools. Of the existing tools, the IDEA assessment tool includes a robust assessment of clinical reasoning documentation focusing on four elements (interpretive summary, differential diagnosis, explanation of reasoning for lead and alternative diagnoses) but lacks descriptive anchors, threatening its reliability. OBJECTIVE: Our goal was to develop a valid and reliable assessment tool for clinical reasoning documentation building off the IDEA assessment tool. DESIGN, PARTICIPANTS, AND MAIN MEASURES: The Revised-IDEA assessment tool was developed by four clinician educators through iterative review of admission notes written by medicine residents and fellows and subsequently piloted with additional faculty to ensure response process validity. A random sample of 252 notes from July 2014 to June 2017 written by 30 trainees across several chief complaints was rated. Three raters rated 20% of the notes to demonstrate internal structure validity. A quality cut-off score was determined using Hofstee standard setting. KEY RESULTS: The Revised-IDEA assessment tool includes the same four domains as the IDEA assessment tool with more detailed descriptive prompts, new Likert scale anchors, and a score range of 0-10. Intraclass correlation was high for the notes rated by three raters, 0.84 (95% CI 0.74-0.90). Scores ≥6 were determined to demonstrate high-quality clinical reasoning documentation. Only 53% of notes (134/252) were high-quality. CONCLUSIONS: The Revised-IDEA assessment tool is reliable and easy to use for feedback on clinical reasoning documentation in resident and fellow admission notes, with descriptive anchors that facilitate a shared mental model for feedback.
PMID: 33945113
ISSN: 1525-1497
CID: 4866222
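
As a rough companion to the reliability and standard-setting steps described above, the sketch below estimates an intraclass correlation on toy rater data and applies the reported cut-off (Revised-IDEA score ≥ 6 = high-quality documentation). The long-format toy ratings and the choice of the pingouin library are assumptions; the paper does not specify its software.

# Hedged sketch only: ICC across three raters plus the score cut-off.
import pandas as pd
import pingouin as pg

# Toy data in long format: one row per note per rater (scores on the 0-10 scale).
ratings = pd.DataFrame({
    "note":  [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6],
    "rater": ["A", "B", "C"] * 6,
    "score": [7, 6, 7, 4, 5, 4, 9, 8, 9, 6, 6, 5, 3, 4, 3, 8, 8, 7],
})

# Intraclass correlation across the three raters (the paper reports ICC = 0.84).
icc = pg.intraclass_corr(data=ratings, targets="note", raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])

# Hofstee-derived cut-off: mean score >= 6 flags high-quality documentation.
high_quality = ratings.groupby("note")["score"].mean() >= 6
print(high_quality)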