Searched for: person:burkrj01
in-biosketch:true
Precision Medical Education
Triola, Marc M; Burk-Rafel, Jesse
Medical schools and residency programs are increasingly incorporating personalization of content, pathways, and assessments to align with a competency-based model. Yet such efforts face challenges in managing large amounts of data and often struggle to deliver timely insights to trainees, coaches, and programs. In this article, the authors argue that the emerging paradigm of precision medical education (PME) may ameliorate some of these challenges. However, PME lacks a widely accepted definition and a shared model of guiding principles and capacities, limiting widespread adoption. The authors propose defining PME as a systematic approach that integrates longitudinal data and analytics to drive precise educational interventions that address each individual learner's needs and goals in a continuous, timely, and cyclical fashion, ultimately improving meaningful educational, clinical, or system outcomes. Borrowing from precision medicine, they offer an adapted shared framework: the P4 medical education framework, in which PME should (1) take a proactive approach to acquiring and using trainee data; (2) generate timely personalized insights through precision analytics (including artificial intelligence and decision-support tools); (3) design precision educational interventions (learning, assessment, coaching, pathways) in a participatory fashion, with trainees at the center as co-producers; and (4) ensure interventions are predictive of meaningful educational, professional, or clinical outcomes.
Implementing PME will require new foundational capacities: flexible educational pathways and programs responsive to PME-guided dynamic and competency-based progression; comprehensive longitudinal data on trainees linked to educational and clinical outcomes; shared development of requisite technologies and analytics to effect educational decision-making; and a culture that embraces a precision approach, with research to gather validity evidence for this approach and development efforts targeting new skills needed by learners, coaches, and educational leaders. Anticipating pitfalls in the use of this approach will be important, as will ensuring it deepens, rather than replaces, the interaction of trainees and their coaches.
PMID: 37027222
ISSN: 1938-808X
CID: 5537182
Development and Validation of a Machine Learning-Based Decision Support Tool for Residency Applicant Screening and Review
Burk-Rafel, Jesse; Reinstein, Ilan; Feng, James; Kim, Moosun Brad; Miller, Louis H; Cocks, Patrick M; Marin, Marina; Aphinyanaphongs, Yindalon
PURPOSE: Residency programs face overwhelming numbers of residency applications, limiting holistic review. Artificial intelligence techniques have been proposed to address this challenge but have not yet been developed for this purpose. Here, a multidisciplinary team sought to develop and validate a machine learning (ML)-based decision support tool (DST) for residency applicant screening and review. METHOD: Categorical applicant data from the 2018, 2019, and 2020 residency application cycles (n = 8,243 applicants) at one large internal medicine residency program were downloaded from the Electronic Residency Application Service and linked to the outcome measure: interview invitation by human reviewers (n = 1,235 invites). An ML model using gradient boosting was trained on 80% of applicants with over 60 applicant features (e.g., demographics, experiences, academic metrics). Model performance was validated on held-out data (the remaining 20% of applicants). A sensitivity analysis was conducted without United States Medical Licensing Examination (USMLE) scores. An interactive DST incorporating the ML model was designed and deployed, providing applicant- and cohort-level visualizations. RESULTS: The ML model areas under the receiver operating characteristic and precision-recall curves were 0.95 and 0.76, respectively; these changed to 0.94 and 0.72, respectively, with removal of USMLE scores. Applicants' medical school information was an important driver of predictions, which had face validity based on the local selection process, but numerous predictors contributed. Program directors used the DST in the 2021 application cycle to select 20 applicants for interview who had been initially screened out during human review. CONCLUSIONS: The authors developed and validated an ML algorithm for predicting residency interview offers from numerous application elements with high performance, even when USMLE scores were removed.
Model deployment in a DST highlighted its potential for screening candidates and helped quantify and mitigate biases existing in the selection process. Further work will incorporate unstructured textual data through natural language processing methods.
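The modeling pipeline described above (gradient boosting on an 80/20 split, evaluated by areas under the ROC and precision-recall curves) can be sketched as follows. The synthetic applicant features and outcome here are illustrative stand-ins, not the authors' ERAS data or actual model:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)
n = 2000
# Hypothetical applicant features (stand-ins for the ~60 ERAS-derived features)
X = rng.normal(size=(n, 10))
# Hypothetical outcome (interview invitation), driven by a few features
logits = X[:, 0] + 0.5 * X[:, 1] - 1.5
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

# 80/20 train/held-out split, mirroring the paper's validation design
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]

# The two headline metrics reported in the abstract
auroc = roc_auc_score(y_te, scores)
auprc = average_precision_score(y_te, scores)
print(f"AUROC={auroc:.2f}, AUPRC={auprc:.2f}")
```

A sensitivity analysis like the authors' USMLE-free model amounts to refitting after dropping the relevant feature columns and recomputing both metrics on the same held-out set.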
PMID: 34348383
ISSN: 1938-808X
CID: 5050022
The AMA Graduate Profile: Tracking Medical School Graduates Into Practice
Burk-Rafel, Jesse; Marin, Marina; Triola, Marc; Fancher, Tonya; Ko, Michelle; Mejicano, George; Skochelak, Susan; Santen, Sally A; Richardson, Judee
PMID: 34705676
ISSN: 1938-808X
CID: 5042522
Precision Education: The Future of Lifelong Learning in Medicine
Desai, Sanjay V; Burk-Rafel, Jesse; Lomis, Kimberly D; Caverzagie, Kelly; Richardson, Judee; O'Brien, Celia Laird; Andrews, John; Heckman, Kevin; Henderson, David; Prober, Charles G; Pugh, Carla M; Stern, Scott D; Triola, Marc M; Santen, Sally A
The goal of medical education is to produce a physician workforce capable of delivering high-quality equitable care to diverse patient populations and communities. To achieve this aim amidst explosive growth in medical knowledge and increasingly complex medical care, a system of personalized and continuous learning, assessment, and feedback for trainees and practicing physicians is urgently needed. In this perspective, the authors build on prior work to advance a conceptual framework for such a system: precision education (PE). PE is a system that uses data and technology to transform lifelong learning by improving personalization, efficiency, and agency at the individual, program, and organization levels. PE "cycles" start with data inputs proactively gathered from new and existing sources, including assessments, educational activities, electronic medical records, patient care outcomes, and clinical practice patterns. Through technology-enabled analytics, insights are generated to drive precision interventions. At the individual level, such interventions include personalized just-in-time educational programming. Coaching is essential to provide feedback and increase learner participation and personalization. Outcomes are measured using assessment and evaluation of interventions at the individual, program, and organizational level, with ongoing adjustment for repeated cycles of improvement. PE is rooted in patient, health system, and population data; promotes value-based care and health equity; and generates an adaptive learning culture. The authors suggest fundamental principles for PE, including promoting equity in structures and processes, learner agency, and integration with workflow (harmonization). Finally, the authors explore the immediate need to develop consensus-driven standards: rules of engagement between people, products, and entities that interact in these systems to ensure interoperability, data sharing, replicability, and scale of PE innovations.
PMID: 38277444
ISSN: 1938-808X
CID: 5625442
Leveraging Electronic Health Record Data and Measuring Interdependence in the Era of Precision Education and Assessment
Sebok-Syer, Stefanie S; Small, William R; Lingard, Lorelei; Glober, Nancy K; George, Brian C; Burk-Rafel, Jesse
PURPOSE: The era of precision education is increasingly leveraging electronic health record (EHR) data to assess residents' clinical performance. But what EHR-based resident performance metrics are truly assessing is not fully understood. For instance, there is limited understanding of how EHR-based measures account for the influence of the team on an individual's performance, or conversely how an individual contributes to team performance. This study aims to elaborate on how the theoretical understandings of supportive and collaborative interdependence are captured in residents' EHR-based metrics. METHOD: Using a mixed methods study design, the authors conducted a secondary analysis of 5 existing quantitative and qualitative datasets used in previous EHR studies to investigate how aspects of interdependence shape the ways that team-based care is provided to patients. RESULTS: Quantitative analyses of 16 EHR-based metrics found variability in faculty and resident performance (both between and within residents). Qualitative analyses revealed that faculty lack awareness of their own EHR-based performance metrics, which limits their ability to act interdependently with residents in an evidence-informed fashion. The lens of interdependence elucidates how resident practice patterns develop across residency training, shifting from supportive to collaborative interdependence over time. Joint displays merging the quantitative and qualitative analyses showed that residents are aware of variability in faculty's practice patterns and that viewing resident EHR-based measures without accounting for the interdependence of residents with faculty is problematic, particularly within the framework of precision education.
CONCLUSIONS: To prepare for this new paradigm of precision education, educators need to develop and evaluate theoretically robust models that measure interdependence in EHR-based metrics, affording more nuanced interpretation of such metrics when assessing residents throughout training.
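A minimal sketch of the between- versus within-resident variability the quantitative analyses refer to, using hypothetical metric values rather than the study's datasets:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical EHR-based metric (e.g., minutes to note completion)
# for 5 residents x 20 encounters each
resident_means = rng.normal(30.0, 5.0, size=5)                    # between-resident differences
data = resident_means[:, None] + rng.normal(0.0, 3.0, size=(5, 20))  # within-resident noise

grand = data.mean()
# Between-resident variability: spread of each resident's mean around the grand mean
between = ((data.mean(axis=1) - grand) ** 2).mean()
# Within-resident variability: average of each resident's own encounter-to-encounter variance
within = data.var(axis=1).mean()
print(between, within)
```

Interdependence complicates exactly this decomposition: if attending or team effects are folded into each encounter's value, the "within-resident" term partly reflects the people the resident worked with rather than the resident alone.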
PMID: 38207084
ISSN: 1938-808X
CID: 5686572
Foreword: The Next Era of Assessment and Precision Education
Schumacher, Daniel J; Santen, Sally A; Pugh, Carla M; Burk-Rafel, Jesse
PMID: 38109655
ISSN: 1938-808X
CID: 5612462
The Next Era of Assessment: Can Ensuring High-Quality, Equitable Patient Care Be the Defining Characteristic?
Schumacher, Daniel J; Kinnear, Benjamin; Burk-Rafel, Jesse; Santen, Sally A; Bullock, Justin L
Previous eras of assessment in medical education have been defined by how assessment is done, from knowledge exams popularized in the 1960s to the emergence of work-based assessment in the 1990s to current efforts to integrate multiple types and sources of performance data through programmatic assessment. Each of these eras was a response to why assessment was performed (e.g., assessing medical knowledge with exams; assessing communication, professionalism, and systems competencies with work-based assessment). Despite the evolution of assessment eras, current evidence highlights that trainees graduate with foundational gaps in their ability to provide high-quality care to patients presenting with common problems, and training program leaders report graduating trainees they would not trust to care for themselves or their loved ones. In this article, the authors argue that the next era of assessment should be defined by why assessment is done: to ensure high-quality, equitable care. Assessment should focus on demanding that graduates possess the knowledge, skills, attitudes, and adaptive expertise to meet the needs of all patients, and on ensuring that graduates can do this in an equitable fashion. The authors explore 2 patient-focused assessment approaches that could help realize the promise of this envisioned era: entrustable professional activities (EPAs) and resident-sensitive quality measures (RSQMs)/TRainee Attributable and Automatable Care Evaluations in Real-time (TRACERs). These examples illustrate how the envisioned next era of assessment can leverage existing and new data to provide precision education assessment, delivering formative and summative feedback to trainees in a manner that ensures their learning outcomes prepare them to deliver high-quality, equitable patient care.
PMID: 38109659
ISSN: 1938-808X
CID: 5612472
A Theoretical Foundation to Inform the Implementation of Precision Education and Assessment
Drake, Carolyn B; Heery, Lauren M; Burk-Rafel, Jesse; Triola, Marc M; Sartori, Daniel J
Precision education (PE) uses personalized educational interventions to empower trainees and improve learning outcomes. While PE has the potential to represent a paradigm shift in medical education, a theoretical foundation to guide the effective implementation of PE strategies has not yet been described. Here, the authors introduce a theoretical foundation for the implementation of PE, integrating key learning theories with the digital tools that allow them to be operationalized. Specifically, the authors describe how the master adaptive learner (MAL) model, transformative learning theory, and self-determination theory can be harnessed in conjunction with nudge strategies and audit and feedback dashboards to drive learning and meaningful behavior change. The authors also provide practical examples of these theories and tools in action by describing precision interventions already in use at one academic medical center, concretizing PE's potential in the current clinical environment. These examples illustrate how a firm theoretical grounding allows educators to most effectively tailor PE interventions to fit individual learners' needs and goals, facilitating efficient learning and, ultimately, improving patient and health system outcomes.
PMID: 38113440
ISSN: 1938-808X
CID: 5612362
Identifying Meaningful Patterns of Internal Medicine Clerkship Grading Distributions: Application of Data Science Techniques Across 135 U.S. Medical Schools
Burk-Rafel, Jesse; Reinstein, Ilan; Park, Yoon Soo
PROBLEM/OBJECTIVE: Residency program directors use clerkship grades for high-stakes selection decisions despite substantial variability in grading systems and distributions. The authors apply clustering techniques from data science to identify groups of schools with statistically similar grading distributions in the internal medicine clerkship. APPROACH/METHODS: Grading systems (e.g., honors/pass/fail) and distributions (i.e., percent of students in each grade tier) were tabulated for the internal medicine clerkship at U.S. MD-granting medical schools by manually reviewing Medical Student Performance Evaluations (MSPEs) in the 2019 and 2020 residency application cycles. Grading distributions were analyzed using k-means cluster analysis, with the optimal number of clusters selected using model fit indices. OUTCOMES/RESULTS: Among the 145 medical schools with available MSPE data, 64 distinct grading systems were reported. Among the 135 schools reporting a grading distribution, the median percent of students receiving the highest and lowest tier grade was 32% (range: 2%-66%) and 2% (range: 0%-91%), respectively. A 4-cluster solution fit best (η² = 0.8): cluster 1 (45% [highest grade tier]-45% [middle tier]-10% [lowest tier], n = 64 [47%] schools), cluster 2 (25%-30%-45%, n = 40 [30%] schools), cluster 3 (20%-75%-5%, n = 25 [19%] schools), and cluster 4 (15%-25%-25%-25%-10%, n = 6 [4%] schools). The findings suggest internal medicine clerkship grading systems may be more comparable across institutions than previously thought. NEXT STEPS/CONCLUSIONS: The authors will prospectively review reported clerkship grading approaches across additional specialties and are conducting a mixed methods analysis, with a sequential explanatory design, interviewing stakeholder groups on the use of the patterns identified.
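The clustering approach described above can be sketched as follows. The simulated grading distributions, the two cluster centers, and the silhouette-based choice of k are illustrative assumptions (the paper selected the number of clusters with model fit indices such as η²):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Hypothetical grading distributions (% highest / middle / lowest tier),
# simulated around two of the cluster centers reported in the abstract
centers = np.array([[45.0, 45.0, 10.0], [25.0, 30.0, 45.0]])
X = np.vstack([c + rng.normal(scale=3.0, size=(60, 3)) for c in centers])

# Fit k-means across candidate k and score each solution with the silhouette
# (a stand-in for the fit indices the authors used)
sil = {}
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    sil[k] = silhouette_score(X, labels)

best_k = max(sil, key=sil.get)
print(best_k, sil[best_k])
```

With real MSPE-derived distributions, schools reporting different numbers of grade tiers would first need to be mapped onto a common set of tiers (or clustered separately), since k-means requires fixed-length feature vectors.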
PMID: 36484555
ISSN: 1938-808X
CID: 5378842
The Undergraduate to Graduate Medical Education Transition as a Systems Problem: A Root Cause Analysis
Swails, Jennifer L; Angus, Steven; Barone, Michael A; Bienstock, Jessica; Burk-Rafel, Jesse; Roett, Michelle A; Hauer, Karen E
The transition from undergraduate medical education (UME) to graduate medical education (GME) constitutes a complex system with important implications for learner progression and patient safety. The transition is currently dysfunctional, requiring students and residency programs to spend significant time, money, and energy on the process. Applications and interviews continue to increase despite stable match rates. Although many in the medical community acknowledge the problems with the UME-GME transition and learners have called for prompt action to address these concerns, the underlying causes are complex and have defied easy fixes. This article describes the work of the Coalition for Physician Accountability's Undergraduate Medical Education to Graduate Medical Education Review Committee (UGRC) to apply a quality improvement approach and systems thinking to explore the underlying causes of dysfunction in the UME-GME transition. The UGRC performed a root cause analysis using the 5 whys and an Ishikawa (or fishbone) diagram to deeply explore problems in the UME-GME transition. The root causes of problems identified include culture, costs and limited resources, bias, systems, lack of standards, and lack of alignment. Using the principles of systems thinking (components, connections, and purpose), the UGRC considered interactions among the root causes and developed recommendations to improve the UME-GME transition. Several of the UGRC's recommendations stemming from this work are explained. Sustained monitoring will be necessary to ensure interventions move the process forward to better serve applicants, programs, and the public good.
PMID: 36538695
ISSN: 1938-808X
CID: 5426192