Searched for: person:burkrj01; in-biosketch:true
Total Results: 35

TRainee Attributable & Automatable Care Evaluations in Real-time (TRACERs): A Scalable Approach for Linking Education to Patient Care

Burk-Rafel, Jesse; Sebok-Syer, Stefanie S; Santen, Sally A; Jiang, Joshua; Caretta-Weyer, Holly A; Iturrate, Eduardo; Kelleher, Matthew; Warm, Eric J; Schumacher, Daniel J; Kinnear, Benjamin
Competency-based medical education (CBME) is an outcomes-based approach to education and assessment that focuses on what competencies trainees need to learn in order to provide effective patient care. Despite this goal of providing quality patient care, trainees rarely receive measures of their clinical performance. This is problematic because defining a trainee's learning progression requires measuring their clinical performance. Traditional clinical performance measures (CPMs) are often met with skepticism from trainees given their poor individual-level attribution. Resident-sensitive quality measures (RSQMs) are attributable to individuals, but lack the expeditiousness needed to deliver timely feedback and can be difficult to automate at scale across programs. In this eye opener, the authors present a conceptual framework for a new type of measure - TRainee Attributable & Automatable Care Evaluations in Real-time (TRACERs) - attuned to both automation and trainee attribution as the next evolutionary step in linking education to patient care. TRACERs have five defining characteristics: meaningful (for patient care and trainees), attributable (sufficiently to the trainee of interest), automatable (minimal human input once fully implemented), scalable (across electronic health records [EHRs] and training environments), and real-time (amenable to formative educational feedback loops). Ideally, TRACERs optimize all five characteristics to the greatest degree possible. TRACERs are uniquely focused on measures of clinical performance that are captured in the EHR, whether routinely collected or generated using sophisticated analytics, and are intended to complement (not replace) other sources of assessment data. TRACERs have the potential to contribute to a national system of high-density, trainee-attributable, patient-centered outcome measures.
PMCID:10198229
PMID: 37215538
ISSN: 2212-277X
CID: 5503722
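
The five TRACER characteristics described in the abstract above lend themselves to a simple data representation. Below is a minimal sketch, assuming a hypothetical TracerMeasure record with illustrative field names; none of these identifiers come from the paper.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical sketch: one way to represent a single TRACER record pulled from an EHR.
# All field names are illustrative assumptions, not identifiers from the paper.
@dataclass
class TracerMeasure:
    measure_name: str          # meaningful: a patient-care behavior worth feeding back
    trainee_id: str            # attributable: tied to the trainee of interest
    encounter_id: str          # grounded in a real patient encounter
    value: float               # automatable: computed from routine EHR data
    captured_at: datetime      # real-time: available for timely, formative feedback
    source_system: str = "EHR" # scalable: drawn from systems present across programs

    def feedback_line(self) -> str:
        """Format a short, formative feedback message for the trainee."""
        return (f"{self.measure_name}: {self.value:.2f} "
                f"(encounter {self.encounter_id}, {self.captured_at:%Y-%m-%d %H:%M})")
```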

Toward (More) Valid Comparison of Residency Applicants' Grades: Cluster Analysis of Clerkship Grade Distributions Across 135 U.S. MD-granting Medical Schools

Burk-Rafel, Jesse; Reinstein, Ilan; Park, Yoon Soo
PMID: 36287686
ISSN: 1938-808X
CID: 5358022

Medical Student Well-Being While Studying for the USMLE Step 1: The Impact of a Goal Score

Rashid, Hanin; Runyon, Christopher; Burk-Rafel, Jesse; Cuddy, Monica M; Dyrbye, Liselotte; Arnhart, Katie; Luciw-Dubas, Ulana; Mechaber, Hilit F; Lieberman, Steve; Paniagua, Miguel
PMID: 36287705
ISSN: 1938-808X
CID: 5358032

Toward (More) Valid Comparison of Residency Applicants' Grades: Cluster Analysis of Clerkship Grade Distributions Across 135 U.S. MD-granting Medical Schools

Burk-Rafel, Jesse; Reinstein, Ilan; Park, Yoon Soo
PMID: 37460502
ISSN: 1938-808X
CID: 5535542

Medical Student Well-Being While Studying for the USMLE Step 1: The Impact of a Goal Score

Rashid, Hanin; Runyon, Christopher; Burk-Rafel, Jesse; Cuddy, Monica M; Dyrbye, Liselotte; Arnhart, Katie; Luciw-Dubas, Ulana; Mechaber, Hilit F; Lieberman, Steve; Paniagua, Miguel
PMID: 37460518
ISSN: 1938-808X
CID: 5535552

Development and Validation of a Machine Learning Model for Automated Assessment of Resident Clinical Reasoning Documentation

Schaye, Verity; Guzman, Benedict; Burk-Rafel, Jesse; Marin, Marina; Reinstein, Ilan; Kudlowitz, David; Miller, Louis; Chun, Jonathan; Aphinyanaphongs, Yindalon
BACKGROUND: Residents receive infrequent feedback on their clinical reasoning (CR) documentation. While machine learning (ML) and natural language processing (NLP) have been used to assess CR documentation in standardized cases, no studies have described similar use in the clinical environment. OBJECTIVE: The authors developed an ML model for automated assessment of CR documentation quality in residents' admission notes and validated it using Kane's framework. DESIGN, PARTICIPANTS, MAIN MEASURES: Internal medicine residents' and subspecialty fellows' admission notes at one medical center from July 2014 to March 2020 were extracted from the electronic health record. Using a validated CR documentation rubric, the authors rated 414 notes for the ML development dataset. Notes were truncated to isolate the relevant portion; NLP software (cTAKES) extracted disease/disorder named entities, and human review generated CR terms. The final model had three input variables and classified notes as demonstrating low- or high-quality CR documentation. The ML model was applied to a retrospective dataset (9591 notes) for human validation and data analysis. Reliability between human and ML ratings was assessed on 205 of these notes with Cohen's kappa. CR documentation quality by post-graduate year (PGY) was evaluated by the Mantel-Haenszel test of trend. KEY RESULTS: The top-performing logistic regression model had an area under the receiver operating characteristic curve of 0.88, a positive predictive value of 0.68, and an accuracy of 0.79. Cohen's kappa was 0.67. Of the 9591 notes, 31.1% demonstrated high-quality CR documentation; quality increased from 27.0% (PGY1) to 31.0% (PGY2) to 39.0% (PGY3) (p < .001 for trend). Validity evidence was collected in each domain of Kane's framework (scoring, generalization, extrapolation, and implications). CONCLUSIONS: The authors developed and validated a high-performing ML model that classifies CR documentation quality in resident admission notes in the clinical environment, a novel application of ML and NLP with many potential use cases.
PMCID:9296753
PMID: 35710676
ISSN: 1525-1497
CID: 5277902
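
The abstract above describes a logistic regression classifier with three input variables, evaluated by AUC, positive predictive value, accuracy, and Cohen's kappa against human ratings. Below is a minimal sketch of that style of pipeline using scikit-learn; the feature values, data split, and the stand-in "human" ratings are assumptions for illustration only, not the study's data, features, or results.

```python
# Sketch of a logistic-regression note classifier with the evaluation metrics
# named in the abstract. Simulated data stands in for the study's cTAKES-derived
# features and curated clinical-reasoning terms.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import (roc_auc_score, precision_score,
                             accuracy_score, cohen_kappa_score)

rng = np.random.default_rng(0)
# Stand-in data: three numeric features per note, binary high/low-quality label.
X = rng.normal(size=(414, 3))
y = (X @ np.array([1.0, 0.8, 0.5]) + rng.normal(size=414) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
pred = clf.predict(X_test)
prob = clf.predict_proba(X_test)[:, 1]

print("AUC:     ", roc_auc_score(y_test, prob))
print("PPV:     ", precision_score(y_test, pred))
print("Accuracy:", accuracy_score(y_test, pred))

# Agreement between model ratings and a rater on the same notes; here the test
# labels are a placeholder for the second set of human ratings used in the study.
human_ratings = y_test
print("Cohen's kappa:", cohen_kappa_score(human_ratings, pred))
```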

Development of a Clinical Reasoning Documentation Assessment Tool for Resident and Fellow Admission Notes: a Shared Mental Model for Feedback

Schaye, Verity; Miller, Louis; Kudlowitz, David; Chun, Jonathan; Burk-Rafel, Jesse; Cocks, Patrick; Guzman, Benedict; Aphinyanaphongs, Yindalon; Marin, Marina
BACKGROUND: Residents and fellows receive little feedback on their clinical reasoning documentation. Barriers include lack of a shared mental model and variability in the reliability and validity of existing assessment tools. Of the existing tools, the IDEA assessment tool includes a robust assessment of clinical reasoning documentation focusing on four elements (interpretive summary, differential diagnosis, explanation of reasoning for lead and alternative diagnoses) but lacks descriptive anchors, threatening its reliability. OBJECTIVE: Our goal was to develop a valid and reliable assessment tool for clinical reasoning documentation, building on the IDEA assessment tool. DESIGN, PARTICIPANTS, AND MAIN MEASURES: The Revised-IDEA assessment tool was developed by four clinician educators through iterative review of admission notes written by medicine residents and fellows and subsequently piloted with additional faculty to ensure response process validity. A random sample of 252 notes from July 2014 to June 2017 written by 30 trainees across several chief complaints was rated. Three raters rated 20% of the notes to demonstrate internal structure validity. A quality cut-off score was determined using Hofstee standard setting. KEY RESULTS: The Revised-IDEA assessment tool includes the same four domains as the IDEA assessment tool with more detailed descriptive prompts, new Likert scale anchors, and a score range of 0-10. Intraclass correlation was high for the notes rated by three raters, 0.84 (95% CI 0.74-0.90). Scores ≥6 were determined to demonstrate high-quality clinical reasoning documentation. Only 53% of notes (134/252) were high-quality. CONCLUSIONS: The Revised-IDEA assessment tool is reliable and easy to use for feedback on clinical reasoning documentation in resident and fellow admission notes, with descriptive anchors that facilitate a shared mental model for feedback.
PMID: 33945113
ISSN: 1525-1497
CID: 4866222
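
The Revised-IDEA tool described above yields a 0-10 score across four domains, with a Hofstee-derived cut-off of 6 for high-quality documentation. Below is a minimal sketch of applying that scoring scheme; the per-domain point allocation is an assumption for demonstration, since the published rubric's anchors and weighting are not reproduced here.

```python
# Illustrative sketch only: summing Revised-IDEA-style domain ratings into a
# 0-10 total and applying the reported cut-off of 6 for "high quality".
from typing import Dict

DOMAINS = ("interpretive_summary", "differential_diagnosis",
           "explanation_lead_diagnosis", "explanation_alternatives")

def total_score(domain_scores: Dict[str, int]) -> int:
    """Sum the four domain ratings into a 0-10 total (assumed allocation)."""
    return sum(domain_scores[d] for d in DOMAINS)

def is_high_quality(domain_scores: Dict[str, int], cutoff: int = 6) -> bool:
    """Apply the Hofstee-derived cut-off reported in the study."""
    return total_score(domain_scores) >= cutoff

example_note = {"interpretive_summary": 2, "differential_diagnosis": 2,
                "explanation_lead_diagnosis": 2, "explanation_alternatives": 1}
print(total_score(example_note), is_high_quality(example_note))  # 7 True
```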

Systems-Level Reforms to the US Resident Selection Process: A Scoping Review

Zastrow, Ryley K; Burk-Rafel, Jesse; London, Daniel A
Background: Calls to reform the US resident selection process are growing, given increasing competition and inefficiencies of the current system. Though numerous reforms have been proposed, they have not been comprehensively cataloged. Objective: This scoping review was conducted to characterize and categorize literature proposing systems-level reforms to the resident selection process. Methods: Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) guidelines, searches of Embase, MEDLINE, Scopus, and Web of Science databases were performed for references published from January 2005 to February 2020. Articles were included if they proposed reforms that were applicable or generalizable to all applicants, medical schools, or residency programs. An inductive approach to qualitative content analysis was used to generate codes and higher-order categories. Results: Of 10,407 unique references screened, 116 met our inclusion criteria. Qualitative analysis generated 34 codes that were grouped into 14 categories according to the broad stages of resident selection: application submission, application review, interviews, and the Match. The most commonly proposed reforms were implementation of an application cap (n = 28), creation of a standardized program database (n = 21), utilization of standardized letters of evaluation (n = 20), and pre-interview screening (n = 13). Conclusions: This scoping review collated and categorized proposed reforms to the resident selection process, developing a common language and framework to facilitate national conversations and change.
PMCID:8207920
PMID: 34178261
ISSN: 1949-8357
CID: 4964962

A Model for Exploring Compatibility Between Applicants and Residency Programs: Right Resident, Right Program

Winkel, Abigail Ford; Morgan, Helen Kang; Burk-Rafel, Jesse; Dalrymple, John L; Chiang, Seine; Marzano, David; Major, Carol; Katz, Nadine T; Ollendorff, Arthur T; Hammoud, Maya M
Holistic review of residency applications is touted as the gold standard for selection, yet vast application numbers leave programs reliant on screening using filters such as United States Medical Licensing Examination scores that do not reliably predict resident performance and may threaten diversity. Applicants struggle to identify which programs to apply to, and devote attention to these processes throughout most of the fourth year, distracting from their clinical education. In this perspective, educators across the undergraduate and graduate medical education continuum propose new models for student-program compatibility based on design thinking sessions with stakeholders in obstetrics and gynecology education from a broad range of training environments. First, we describe a framework for applicant-program compatibility based on applicant priorities and program offerings, including clinical training, academic training, practice setting, residency culture, personal life, and professional goals. Second, a conceptual model for applicant screening based on metrics, experiences, attributes, and alignment with program priorities is presented that might facilitate holistic review. We call for design and validation of novel metrics, such as situational judgment tests for professionalism. Together, these steps could improve the transparency, efficiency, and fidelity of the residency application process. The models presented can be adapted to the priorities and values of other specialties.
PMID: 33278296
ISSN: 1873-233X
CID: 4708352

A Novel Ticket System for Capping Residency Interview Numbers: Reimagining Interviews in the COVID-19 Era

Burk-Rafel, Jesse; Standiford, Taylor C
The 2019 novel coronavirus (COVID-19) pandemic has led to dramatic changes in the 2020 residency application cycle, including halting away rotations and delaying the application timeline. These stressors are laid on top of a resident selection process already under duress with exploding application and interview numbers, the latter likely to be exacerbated with the widespread shift to virtual interviewing. Leveraging their trainee perspective, the authors propose enforcing a cap on the number of interviews that applicants may attend through a novel interview ticket system (ITS). Specialties electing to participate in the ITS would select an evidence-based, specialty-specific interview cap. Applicants would then receive unique electronic tickets, equal in number to the cap, that would be given to participating programs at the time of an interview, when the tickets would be marked as used. The system would be self-enforcing and would ensure each interview represents genuine interest between applicant and program, while potentially increasing the number of interviews, and thus match rate, for less competitive applicants. Limitations of the ITS and alternative approaches for interview capping, including an honor code system, are also discussed. Finally, in the context of capped interview numbers, the authors emphasize the need for transparent preinterview data from programs to inform applicants and their advisors on which interviews to attend, learning from prior experiences and studies on virtual interviewing, adherence to best practices for interviewing, and careful consideration of how virtual interviews may shift inequities in the resident selection process.
PMID: 32910007
ISSN: 1938-808X
CID: 4764712
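
The interview ticket system proposed above is, in essence, a cap-enforcement protocol: each applicant receives tickets equal to the specialty's cap and surrenders one per interview attended. Below is a minimal sketch of that bookkeeping; the class and method names are hypothetical, since the paper proposes the policy rather than an implementation.

```python
# Hypothetical sketch of the proposed interview ticket system (ITS): applicants
# receive a fixed number of electronic tickets equal to the specialty-specific
# cap, and each interview consumes one ticket at the time it occurs.
class InterviewTicketSystem:
    def __init__(self, interview_cap: int):
        self.interview_cap = interview_cap
        self.tickets_remaining: dict[str, int] = {}
        self.used: list[tuple[str, str]] = []  # (applicant_id, program_id)

    def register_applicant(self, applicant_id: str) -> None:
        """Issue the full allotment of tickets to a new applicant."""
        self.tickets_remaining[applicant_id] = self.interview_cap

    def redeem_ticket(self, applicant_id: str, program_id: str) -> bool:
        """Mark one ticket as used at interview time; refuse once the cap is spent."""
        if self.tickets_remaining.get(applicant_id, 0) <= 0:
            return False
        self.tickets_remaining[applicant_id] -= 1
        self.used.append((applicant_id, program_id))
        return True

its = InterviewTicketSystem(interview_cap=12)
its.register_applicant("applicant-001")
print(its.redeem_ticket("applicant-001", "program-A"))  # True; 11 tickets remain
```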