A Theoretical Foundation to Inform the Implementation of Precision Education and Assessment
Drake, Carolyn B; Heery, Lauren M; Burk-Rafel, Jesse; Triola, Marc M; Sartori, Daniel J
Precision education (PE) uses personalized educational interventions to empower trainees and improve learning outcomes. While PE has the potential to represent a paradigm shift in medical education, a theoretical foundation to guide the effective implementation of PE strategies has not yet been described. Here, the authors introduce a theoretical foundation for the implementation of PE, integrating key learning theories with the digital tools that allow them to be operationalized. Specifically, the authors describe how the master adaptive learner (MAL) model, transformative learning theory, and self-determination theory can be harnessed in conjunction with nudge strategies and audit and feedback dashboards to drive learning and meaningful behavior change. The authors also provide practical examples of these theories and tools in action by describing precision interventions already in use at one academic medical center, concretizing PE's potential in the current clinical environment. These examples illustrate how a firm theoretical grounding allows educators to most effectively tailor PE interventions to fit individual learners' needs and goals, facilitating efficient learning and, ultimately, improving patient and health system outcomes.
PMID: 38113440
ISSN: 1938-808X
CID: 5612362
Foreword: The Next Era of Assessment and Precision Education
Schumacher, Daniel J; Santen, Sally A; Pugh, Carla M; Burk-Rafel, Jesse
PMID: 38109655
ISSN: 1938-808X
CID: 5612462
Leveraging Electronic Health Record Data and Measuring Interdependence in the Era of Precision Education and Assessment
Sebok-Syer, Stefanie S; Small, William R; Lingard, Lorelei; Glober, Nancy K; George, Brian C; Burk-Rafel, Jesse
PURPOSE: The era of precision education is increasingly leveraging electronic health record (EHR) data to assess residents' clinical performance, yet what these EHR-based performance metrics are truly assessing is not fully understood. For instance, there is limited understanding of how EHR-based measures account for the influence of the team on an individual's performance, or conversely how an individual contributes to team performance. This study aims to elaborate how theoretical understandings of supportive and collaborative interdependence are captured in residents' EHR-based metrics. METHOD: Using a mixed methods study design, the authors conducted a secondary analysis of 5 existing quantitative and qualitative datasets from previous EHR studies to investigate how aspects of interdependence shape the ways team-based care is provided to patients. RESULTS: Quantitative analyses of 16 EHR-based metrics found variability in faculty and resident performance (both between and within residents). Qualitative analyses revealed that faculty lack awareness of their own EHR-based performance metrics, which limits their ability to act interdependently with residents in an evidence-informed fashion. The lens of interdependence elucidates how resident practice patterns develop across residency training, shifting from supportive to collaborative interdependence over time. Joint displays merging the quantitative and qualitative analyses showed that residents are aware of variability in faculty practice patterns and that viewing resident EHR-based measures without accounting for residents' interdependence with faculty is problematic, particularly within the framework of precision education. CONCLUSIONS: To prepare for this new paradigm of precision education, educators need to develop and evaluate theoretically robust models that measure interdependence in EHR-based metrics, affording more nuanced interpretation of such metrics when assessing residents throughout training.
PMID: 38207084
ISSN: 1938-808X
CID: 5686572
A New Tool for Holistic Residency Application Review: Using Natural Language Processing of Applicant Experiences to Predict Interview Invitation
Mahtani, Arun Umesh; Reinstein, Ilan; Marin, Marina; Burk-Rafel, Jesse
PROBLEM: Reviewing residency application narrative components is time-intensive and has contributed to nearly half of applications not receiving holistic review. The authors developed a natural language processing (NLP)-based tool to automate review of applicants' narrative experience entries and predict interview invitation. APPROACH: Experience entries (n = 188,500) were extracted from 6,403 residency applications across 3 application cycles (2017-2019) at 1 internal medicine program, combined at the applicant level, and paired with the interview invitation decision (n = 1,224 invitations). NLP identified important words (or word pairs) with term frequency-inverse document frequency, which were used to predict interview invitation using logistic regression with L1 regularization. Terms remaining in the model were analyzed thematically. Logistic regression models were also built using structured application data and a combination of NLP and structured data. Model performance was evaluated on never-before-seen data using the areas under the receiver operating characteristic and precision-recall curves (AUROC, AUPRC). OUTCOMES: The NLP model had an AUROC of 0.80 (vs chance of 0.50) and an AUPRC of 0.49 (vs chance of 0.19), showing moderate predictive strength. Phrases indicating active leadership, research, or work in social justice and health disparities were associated with interview invitation. The model's detection of these key selection factors demonstrated face validity. Adding structured data to the model significantly improved prediction (AUROC 0.92, AUPRC 0.73), as expected given programs' reliance on such metrics for interview invitation. NEXT STEPS: This model represents a first step in using NLP-based artificial intelligence tools to promote holistic residency application review. The authors are assessing the practical utility of using the model to identify applicants screened out by traditional metrics. Generalizability must be determined through model retraining and evaluation at other programs. Work is ongoing to thwart model "gaming," improve prediction, and remove unwanted biases introduced during model training.
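The pipeline this abstract describes, TF-IDF term weighting feeding an L1-regularized logistic regression, can be sketched in outline. Below is a minimal pure-Python illustration of the TF-IDF step only; the toy "experience entries," the tokenization, and all names are hypothetical and do not represent the authors' implementation:

```python
import math
from collections import Counter

def tfidf(docs):
    """Weight each term by term frequency * inverse document frequency."""
    n = len(docs)
    df = Counter()                      # number of documents containing each term
    for doc in docs:
        df.update(set(doc.split()))
    vectors = []
    for doc in docs:
        tf = Counter(doc.split())
        total = sum(tf.values())
        # terms appearing in every document get weight 0, since log(n/n) = 0
        vectors.append({t: (c / total) * math.log(n / df[t])
                        for t, c in tf.items()})
    return vectors

# toy experience entries (hypothetical, for illustration only)
entries = ["research leadership clinic", "research service clinic"]
vectors = tfidf(entries)
```

In practice such features would feed a classifier with an L1 penalty (for example, a scikit-learn-style LogisticRegression with penalty="l1"), which zeroes out uninformative terms so that the surviving coefficients can be read thematically, consistent with the abstract's report that terms signaling leadership, research, and health-disparities work predicted interview invitation.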
PMID: 36940395
ISSN: 1938-808X
CID: 5708082
Precision Medical Education
Triola, Marc M; Burk-Rafel, Jesse
Medical schools and residency programs are increasingly incorporating personalization of content, pathways, and assessments to align with a competency-based model. Yet such efforts face challenges involving large amounts of data, sometimes struggling to deliver insights in a timely fashion to trainees, coaches, and programs. In this article, the authors argue that the emerging paradigm of precision medical education (PME) may ameliorate some of these challenges. However, PME lacks a widely accepted definition and a shared model of guiding principles and capacities, limiting widespread adoption. The authors propose defining PME as a systematic approach that integrates longitudinal data and analytics to drive precise educational interventions that address each individual learner's needs and goals in a continuous, timely, and cyclical fashion, ultimately improving meaningful educational, clinical, or system outcomes. Borrowing from precision medicine, they offer an adapted shared framework: the P4 medical education framework, in which PME should (1) take a proactive approach to acquiring and using trainee data; (2) generate timely personalized insights through precision analytics (including artificial intelligence and decision-support tools); (3) design precision educational interventions (learning, assessment, coaching, pathways) in a participatory fashion, with trainees at the center as co-producers; and (4) ensure interventions are predictive of meaningful educational, professional, or clinical outcomes. Implementing PME will require new foundational capacities: flexible educational pathways and programs responsive to PME-guided dynamic and competency-based progression; comprehensive longitudinal data on trainees linked to educational and clinical outcomes; shared development of the requisite technologies and analytics to effect educational decision-making; and a culture that embraces a precision approach, with research to gather validity evidence for this approach and development efforts targeting the new skills needed by learners, coaches, and educational leaders. Anticipating pitfalls in the use of this approach will be important, as will ensuring it deepens, rather than replaces, the interaction of trainees and their coaches.
PMID: 37027222
ISSN: 1938-808X
CID: 5537182
Identifying Meaningful Patterns of Internal Medicine Clerkship Grading Distributions: Application of Data Science Techniques Across 135 U.S. Medical Schools
Burk-Rafel, Jesse; Reinstein, Ilan; Park, Yoon Soo
PROBLEM: Residency program directors use clerkship grades for high-stakes selection decisions despite substantial variability in grading systems and distributions. The authors apply clustering techniques from data science to identify groups of schools whose internal medicine clerkship grading distributions were statistically similar. APPROACH: Grading systems (e.g., honors/pass/fail) and distributions (i.e., percent of students in each grade tier) were tabulated for the internal medicine clerkship at U.S. MD-granting medical schools by manually reviewing Medical Student Performance Evaluations (MSPEs) from the 2019 and 2020 residency application cycles. Grading distributions were analyzed using k-means cluster analysis, with the optimal number of clusters selected using model fit indices. OUTCOMES: Among the 145 medical schools with available MSPE data, 64 distinct grading systems were reported. Among the 135 schools reporting a grading distribution, the median percent of students receiving the highest and lowest tier grades was 32% (range: 2%-66%) and 2% (range: 0%-91%), respectively. A 4-cluster solution fit best (η² = 0.8): cluster 1 (45% [highest grade tier]-45% [middle tier]-10% [lowest tier], n = 64 [47%] schools), cluster 2 (25%-30%-45%, n = 40 [30%] schools), cluster 3 (20%-75%-5%, n = 25 [19%] schools), and cluster 4 (15%-25%-25%-25%-10%, n = 6 [4%] schools). The findings suggest internal medicine clerkship grading distributions may be more comparable across institutions than previously thought. NEXT STEPS: The authors will prospectively review reported clerkship grading approaches across additional specialties and are conducting a mixed methods analysis, using a sequential explanatory design, to interview stakeholder groups about the use of the patterns identified.
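The clustering step described above can be illustrated with a bare-bones k-means over grade-tier distributions (percent of students in each tier). This is a hypothetical sketch with made-up school vectors and a naive initialization, not the authors' analysis, which additionally used model fit indices such as η² to choose the number of clusters:

```python
def kmeans(points, k, iters=100):
    """Naive k-means: each point is a tuple of grade-tier percentages."""
    centers = list(points[:k])          # naive deterministic initialization
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each school to its nearest cluster center
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # recompute each center as the mean of its assigned points
        new = [tuple(sum(col) / len(cl) for col in zip(*cl)) if cl else centers[i]
               for i, cl in enumerate(clusters)]
        if new == centers:              # converged
            break
        centers = new
    return centers, clusters

# hypothetical schools: highest/middle/lowest grade-tier percentages
schools = [(45, 45, 10), (46, 44, 10), (44, 46, 10),   # top-heavy graders
           (20, 75, 5), (21, 74, 5), (19, 76, 5)]      # middle-heavy graders
centers, clusters = kmeans(schools, k=2)
```

On these toy data the algorithm recovers the two grading patterns, with cluster centers near (45, 45, 10) and (20, 75, 5), loosely mirroring clusters 1 and 3 reported in the abstract.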
PMID: 36484555
ISSN: 1938-808X
CID: 5378842
Reimagining the Transition to Residency: A Trainee Call to Accelerated Action
Lin, Grant L; Guerra, Sylvia; Patel, Juhee; Burk-Rafel, Jesse
The transition from medical student to resident is a pivotal step in the medical education continuum. For applicants, successfully obtaining a residency position is the actualization of a dream after years of training and has life-changing professional and financial implications. These high stakes contribute to a residency application and Match process in the United States that is increasingly complex and dysfunctional, and that does not effectively serve applicants, residency programs, or the public good. In July 2020, the Coalition for Physician Accountability (Coalition) formed the Undergraduate Medical Education-Graduate Medical Education Review Committee (UGRC) to critically assess the overall transition to residency and offer recommendations to solve the growing challenges in the system. In this Invited Commentary, the authors reflect on their experience as the trainee representatives on the UGRC. They emphasize the importance of trainee advocacy in medical education change efforts; examine opportunities, concerns, and tensions with the final UGRC recommendations (released in August 2021); discuss factors that may constrain implementation; and call for the medical education community, and the Coalition member organizations in particular, to accelerate full implementation of the UGRC recommendations. By seizing the momentum created by the UGRC, the medical education community can create a reimagined transition to residency that reshapes its approach to training a more diverse, competent, and growth-oriented physician workforce.
PMID: 35263298
ISSN: 1938-808X
CID: 5220952
The Undergraduate to Graduate Medical Education Transition as a Systems Problem: A Root Cause Analysis
Swails, Jennifer L; Angus, Steven; Barone, Michael A; Bienstock, Jessica; Burk-Rafel, Jesse; Roett, Michelle A; Hauer, Karen E
The transition from undergraduate medical education (UME) to graduate medical education (GME) constitutes a complex system with important implications for learner progression and patient safety. The transition is currently dysfunctional, requiring students and residency programs to spend significant time, money, and energy on the process. Applications and interviews continue to increase despite stable match rates. Although many in the medical community acknowledge the problems with the UME-GME transition and learners have called for prompt action to address these concerns, the underlying causes are complex and have defied easy fixes. This article describes the work of the Coalition for Physician Accountability's Undergraduate Medical Education to Graduate Medical Education Review Committee (UGRC) to apply a quality improvement approach and systems thinking to explore the underlying causes of dysfunction in the UME-GME transition. The UGRC performed a root cause analysis using the 5 whys and an Ishikawa (or fishbone) diagram to deeply explore problems in the UME-GME transition. The root causes of problems identified include culture, costs and limited resources, bias, systems, lack of standards, and lack of alignment. Using the principles of systems thinking (components, connections, and purpose), the UGRC considered interactions among the root causes and developed recommendations to improve the UME-GME transition. Several of the UGRC's recommendations stemming from this work are explained. Sustained monitoring will be necessary to ensure interventions move the process forward to better serve applicants, programs, and the public good.
PMID: 36538695
ISSN: 1938-808X
CID: 5426192
TRainee Attributable & Automatable Care Evaluations in Real-time (TRACERs): A Scalable Approach for Linking Education to Patient Care
Burk-Rafel, Jesse; Sebok-Syer, Stefanie S; Santen, Sally A; Jiang, Joshua; Caretta-Weyer, Holly A; Iturrate, Eduardo; Kelleher, Matthew; Warm, Eric J; Schumacher, Daniel J; Kinnear, Benjamin
Competency-based medical education (CBME) is an outcomes-based approach to education and assessment that focuses on the competencies trainees need to learn to provide effective patient care. Despite this goal of providing quality patient care, trainees rarely receive measures of their clinical performance. This is problematic because defining a trainee's learning progression requires measuring their clinical performance. Traditional clinical performance measures (CPMs) are often met with skepticism from trainees given their poor individual-level attribution. Resident-sensitive quality measures (RSQMs) are attributable to individuals but lack the expeditiousness needed to deliver timely feedback and can be difficult to automate at scale across programs. In this Eye Opener, the authors present a conceptual framework for a new type of measure, TRainee Attributable & Automatable Care Evaluations in Real-time (TRACERs), attuned to both automation and trainee attribution as the next evolutionary step in linking education to patient care. TRACERs have five defining characteristics: meaningful (for patient care and trainees), attributable (sufficiently to the trainee of interest), automatable (minimal human input once fully implemented), scalable (across electronic health records [EHRs] and training environments), and real-time (amenable to formative educational feedback loops). Ideally, TRACERs optimize all five characteristics to the greatest degree possible. TRACERs are uniquely focused on measures of clinical performance captured in the EHR, whether routinely collected or generated using sophisticated analytics, and are intended to complement (not replace) other sources of assessment data. TRACERs have the potential to contribute to a national system of high-density, trainee-attributable, patient-centered outcome measures.
PMCID: PMC10198229
PMID: 37215538
ISSN: 2212-277X
CID: 5503722
Toward (More) Valid Comparison of Residency Applicants' Grades: Cluster Analysis of Clerkship Grade Distributions Across 135 U.S. MD-granting Medical Schools
Burk-Rafel, Jesse; Reinstein, Ilan; Park, Yoon Soo
PMID: 36287686
ISSN: 1938-808X
CID: 5358022