Toward (More) Valid Comparison of Residency Applicants' Grades: Cluster Analysis of Clerkship Grade Distributions Across 135 U.S. MD-granting Medical Schools
Burk-Rafel, Jesse; Reinstein, Ilan; Park, Yoon Soo
PMID: 37460502
ISSN: 1938-808X
CID: 5535542
Medical Student Well-Being While Studying for the USMLE Step 1: The Impact of a Goal Score
Rashid, Hanin; Runyon, Christopher; Burk-Rafel, Jesse; Cuddy, Monica M; Dyrbye, Liselotte; Arnhart, Katie; Luciw-Dubas, Ulana; Mechaber, Hilit F; Lieberman, Steve; Paniagua, Miguel
PMID: 36287705
ISSN: 1938-808X
CID: 5358032
Toward (More) Valid Comparison of Residency Applicants' Grades: Cluster Analysis of Clerkship Grade Distributions Across 135 U.S. MD-granting Medical Schools
Burk-Rafel, Jesse; Reinstein, Ilan; Park, Yoon Soo
PMID: 36287686
ISSN: 1938-808X
CID: 5358022
Development and Validation of a Machine Learning Model for Automated Assessment of Resident Clinical Reasoning Documentation
Schaye, Verity; Guzman, Benedict; Burk-Rafel, Jesse; Marin, Marina; Reinstein, Ilan; Kudlowitz, David; Miller, Louis; Chun, Jonathan; Aphinyanaphongs, Yindalon
BACKGROUND: Residents receive infrequent feedback on their clinical reasoning (CR) documentation. While machine learning (ML) and natural language processing (NLP) have been used to assess CR documentation in standardized cases, no studies have described similar use in the clinical environment. OBJECTIVE: Using Kane's framework, the authors developed and validated an ML model for automated assessment of CR documentation quality in residents' admission notes. DESIGN, PARTICIPANTS, MAIN MEASURES: Internal medicine residents' and subspecialty fellows' admission notes at one medical center from July 2014 to March 2020 were extracted from the electronic health record. Using a validated CR documentation rubric, the authors rated 414 notes for the ML development dataset. Notes were truncated to isolate the relevant portion; NLP software (cTAKES) extracted disease/disorder named entities, and human review generated CR terms. The final model had three input variables and classified notes as demonstrating low- or high-quality CR documentation. The ML model was applied to a retrospective dataset (9,591 notes) for human validation and data analysis. Reliability between human and ML ratings was assessed on 205 of these notes with Cohen's kappa. CR documentation quality by post-graduate year (PGY) was evaluated by the Mantel-Haenszel test of trend. KEY RESULTS: The top-performing logistic regression model had an area under the receiver operating characteristic curve of 0.88, a positive predictive value of 0.68, and an accuracy of 0.79. Cohen's kappa was 0.67. Of the 9,591 notes, 31.1% demonstrated high-quality CR documentation; quality increased from 27.0% (PGY1) to 31.0% (PGY2) to 39.0% (PGY3) (p < .001 for trend). Validity evidence was collected in each domain of Kane's framework (scoring, generalization, extrapolation, and implications). CONCLUSIONS: The authors developed and validated a high-performing ML model that classifies CR documentation quality in resident admission notes in the clinical environment, a novel application of ML and NLP with many potential use cases.
PMCID:9296753
PMID: 35710676
ISSN: 1525-1497
CID: 5277902
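A minimal sketch, not the authors' code, of the kind of pipeline the abstract above describes: a logistic regression over a small number of note-derived features, evaluated with AUROC and with Cohen's kappa against human ratings. The feature values, labels, and threshold below are synthetic assumptions for illustration only.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score, cohen_kappa_score

    rng = np.random.default_rng(0)

    # Hypothetical per-note features, standing in for counts derived from NLP
    # (e.g., disease/disorder named entities) and clinical-reasoning term matches.
    n_notes = 414
    X = rng.poisson(lam=[4.0, 2.0, 3.0], size=(n_notes, 3)).astype(float)
    # Hypothetical human rating: 1 = high-quality CR documentation, 0 = low-quality.
    y = (X.sum(axis=1) + rng.normal(0, 2, n_notes) > 9).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0
    )

    model = LogisticRegression().fit(X_train, y_train)

    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    kappa = cohen_kappa_score(y_test, model.predict(X_test))  # agreement with "human" labels
    print(f"AUROC={auc:.2f}  kappa={kappa:.2f}")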
Development of a Clinical Reasoning Documentation Assessment Tool for Resident and Fellow Admission Notes: a Shared Mental Model for Feedback
Schaye, Verity; Miller, Louis; Kudlowitz, David; Chun, Jonathan; Burk-Rafel, Jesse; Cocks, Patrick; Guzman, Benedict; Aphinyanaphongs, Yindalon; Marin, Marina
BACKGROUND: Residents and fellows receive little feedback on their clinical reasoning documentation. Barriers include lack of a shared mental model and variability in the reliability and validity of existing assessment tools. Of the existing tools, the IDEA assessment tool offers a robust assessment of clinical reasoning documentation focused on four elements (interpretive summary, differential diagnosis, explanation of reasoning for the lead diagnosis, and explanation of alternative diagnoses) but lacks descriptive anchors, threatening its reliability. OBJECTIVE: Our goal was to develop a valid and reliable assessment tool for clinical reasoning documentation, building off the IDEA assessment tool. DESIGN, PARTICIPANTS, AND MAIN MEASURES: The Revised-IDEA assessment tool was developed by four clinician-educators through iterative review of admission notes written by medicine residents and fellows and subsequently piloted with additional faculty to ensure response process validity. A random sample of 252 notes from July 2014 to June 2017, written by 30 trainees across several chief complaints, was rated. Three raters rated 20% of the notes to demonstrate internal structure validity. A quality cut-off score was determined using Hofstee standard setting. KEY RESULTS: The Revised-IDEA assessment tool includes the same four domains as the IDEA assessment tool with more detailed descriptive prompts, new Likert scale anchors, and a score range of 0-10. Intraclass correlation was high for the notes rated by three raters, 0.84 (95% CI 0.74-0.90). Scores ≥6 were determined to demonstrate high-quality clinical reasoning documentation. Only 53% of notes (134/252) were high-quality. CONCLUSIONS: The Revised-IDEA assessment tool is reliable and easy to use for feedback on clinical reasoning documentation in resident and fellow admission notes, with descriptive anchors that facilitate a shared mental model for feedback.
PMID: 33945113
ISSN: 1525-1497
CID: 4866222
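A minimal sketch, assumed rather than taken from the paper, of the reliability check the abstract above reports: an intraclass correlation over notes scored 0-10 by multiple raters, plus the ≥6 high-quality cut-off. The specific ICC form here (two-way random effects, absolute agreement, single rater, i.e., ICC(2,1)) is an illustrative choice; the paper may have used a different form, and all scores are synthetic.

    import numpy as np

    def icc_2_1(ratings: np.ndarray) -> float:
        """ICC(2,1) for an (n_subjects x n_raters) matrix of scores."""
        n, k = ratings.shape
        grand = ratings.mean()
        row_means = ratings.mean(axis=1)
        col_means = ratings.mean(axis=0)
        ss_total = ((ratings - grand) ** 2).sum()
        ss_rows = k * ((row_means - grand) ** 2).sum()   # between-note variance
        ss_cols = n * ((col_means - grand) ** 2).sum()   # between-rater variance
        ss_error = ss_total - ss_rows - ss_cols
        ms_r = ss_rows / (n - 1)
        ms_c = ss_cols / (k - 1)
        ms_e = ss_error / ((n - 1) * (k - 1))
        return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

    # Hypothetical scores: 50 notes rated 0-10 by 3 raters.
    rng = np.random.default_rng(1)
    true_quality = rng.integers(0, 11, size=50)
    scores = np.clip(true_quality[:, None] + rng.integers(-1, 2, size=(50, 3)), 0, 10)

    print(f"ICC(2,1) = {icc_2_1(scores.astype(float)):.2f}")
    print(f"High-quality notes (mean score >= 6): {(scores.mean(axis=1) >= 6).sum()} / 50")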
Development and Validation of a Machine Learning-Based Decision Support Tool for Residency Applicant Screening and Review
Burk-Rafel, Jesse; Reinstein, Ilan; Feng, James; Kim, Moosun Brad; Miller, Louis H; Cocks, Patrick M; Marin, Marina; Aphinyanaphongs, Yindalon
PURPOSE: Residency programs face overwhelming numbers of residency applications, limiting holistic review. Artificial intelligence techniques have been proposed to address this challenge, but such tools have not yet been created. Here, a multidisciplinary team sought to develop and validate a machine learning (ML)-based decision support tool (DST) for residency applicant screening and review. METHOD: Categorical applicant data from the 2018, 2019, and 2020 residency application cycles (n = 8,243 applicants) at one large internal medicine residency program were downloaded from the Electronic Residency Application Service and linked to the outcome measure: interview invitation by human reviewers (n = 1,235 invites). An ML model using gradient boosting was designed using training data (80% of applicants) with over 60 applicant features (e.g., demographics, experiences, academic metrics). Model performance was validated on held-out data (20% of applicants). A sensitivity analysis was conducted without United States Medical Licensing Examination (USMLE) scores. An interactive DST incorporating the ML model was designed and deployed, providing applicant- and cohort-level visualizations. RESULTS: The ML model's areas under the receiver operating characteristic and precision-recall curves were 0.95 and 0.76, respectively; these changed to 0.94 and 0.72, respectively, with removal of USMLE scores. Applicants' medical school information was an important driver of predictions, which had face validity based on the local selection process, but numerous predictors contributed. Program directors used the DST in the 2021 application cycle to select 20 applicants for interview who had initially been screened out during human review. CONCLUSIONS: The authors developed and validated an ML algorithm for predicting residency interview offers from numerous application elements with high performance, even when USMLE scores were removed. Model deployment in a DST highlighted its potential for screening candidates and helped quantify and mitigate biases existing in the selection process. Further work will incorporate unstructured textual data through natural language processing methods.
PMID: 34348383
ISSN: 1938-808X
CID: 5050022
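A minimal sketch, assumed and not the authors' implementation, of the workflow in the abstract above: a gradient-boosted classifier trained on application features to predict interview invitation, evaluated on a held-out split, then refit without USMLE scores as a sensitivity analysis. All column names, coefficients, and data are hypothetical (the real model used over 60 features).

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score, average_precision_score

    rng = np.random.default_rng(42)
    n = 2000

    # Hypothetical applicant features.
    X = pd.DataFrame({
        "usmle_step1": rng.normal(230, 15, n),
        "usmle_step2": rng.normal(240, 15, n),
        "num_publications": rng.poisson(3, n),
        "aoa_member": rng.integers(0, 2, n),
        "medical_school_tier": rng.integers(1, 5, n),
    })
    # Hypothetical outcome: interview invitation by human reviewers.
    logit = 0.03 * (X["usmle_step2"] - 240) + 0.4 * X["aoa_member"] - 1.2
    y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

    def fit_and_score(cols):
        # Train on 80% of applicants, score the held-out 20%.
        model = GradientBoostingClassifier(random_state=0).fit(X_tr[cols], y_tr)
        p = model.predict_proba(X_te[cols])[:, 1]
        return roc_auc_score(y_te, p), average_precision_score(y_te, p)

    all_cols = list(X.columns)
    no_usmle = [c for c in all_cols if not c.startswith("usmle")]  # sensitivity analysis

    print("With USMLE:    AUROC=%.2f  PR-AUC=%.2f" % fit_and_score(all_cols))
    print("Without USMLE: AUROC=%.2f  PR-AUC=%.2f" % fit_and_score(no_usmle))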
The AMA Graduate Profile: Tracking Medical School Graduates Into Practice
Burk-Rafel, Jesse; Marin, Marina; Triola, Marc; Fancher, Tonya; Ko, Michelle; Mejicano, George; Skochelak, Susan; Santen, Sally A; Richardson, Judee
PMID: 34705676
ISSN: 1938-808X
CID: 5042522
Systems-Level Reforms to the US Resident Selection Process: A Scoping Review
Zastrow, Ryley K; Burk-Rafel, Jesse; London, Daniel A
Background: Calls to reform the US resident selection process are growing, given increasing competition and the inefficiencies of the current system. Though numerous reforms have been proposed, they have not been comprehensively cataloged. Objective: This scoping review was conducted to characterize and categorize literature proposing systems-level reforms to the resident selection process. Methods: Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) guidelines, searches of the Embase, MEDLINE, Scopus, and Web of Science databases were performed for references published from January 2005 to February 2020. Articles were included if they proposed reforms that were applicable or generalizable to all applicants, medical schools, or residency programs. An inductive approach to qualitative content analysis was used to generate codes and higher-order categories. Results: Of 10,407 unique references screened, 116 met our inclusion criteria. Qualitative analysis generated 34 codes that were grouped into 14 categories according to the broad stages of resident selection: application submission, application review, interviews, and the Match. The most commonly proposed reforms were implementation of an application cap (n = 28), creation of a standardized program database (n = 21), utilization of standardized letters of evaluation (n = 20), and pre-interview screening (n = 13). Conclusions: This scoping review collated and categorized proposed reforms to the resident selection process, developing a common language and framework to facilitate national conversations and change.
PMCID:8207920
PMID: 34178261
ISSN: 1949-8357
CID: 4964962
A Novel Ticket System for Capping Residency Interview Numbers: Reimagining Interviews in the COVID-19 Era
Burk-Rafel, Jesse; Standiford, Taylor C
The coronavirus disease 2019 (COVID-19) pandemic has led to dramatic changes in the 2020 residency application cycle, including halting away rotations and delaying the application timeline. These stressors are laid on top of a resident selection process already under duress from exploding application and interview numbers, the latter likely to be exacerbated by the widespread shift to virtual interviewing. Leveraging their trainee perspective, the authors propose enforcing a cap on the number of interviews that applicants may attend through a novel interview ticket system (ITS). Specialties electing to participate in the ITS would select an evidence-based, specialty-specific interview cap. Applicants would then receive unique electronic tickets, equal in number to the cap, that would be given to participating programs at the time of an interview, when the tickets would be marked as used. The system would be self-enforcing and would ensure that each interview represents genuine interest between applicant and program, while potentially increasing the number of interviews, and thus the match rate, for less competitive applicants. Limitations of the ITS and alternative approaches to interview capping, including an honor code system, are also discussed. Finally, in the context of capped interview numbers, the authors emphasize the need for transparent preinterview data from programs to inform applicants and their advisors on which interviews to attend, learning from prior experiences and studies on virtual interviewing, adherence to best practices for interviewing, and careful consideration of how virtual interviews may shift inequities in the resident selection process.
PMID: 32910007
ISSN: 1938-808X
CID: 4764712
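A minimal sketch, an illustrative assumption rather than a specification from the paper, of the interview ticket system (ITS) mechanics described above: each applicant receives electronic tickets equal in number to the specialty's cap, and one ticket is surrendered and marked as used per interview attended, making the cap self-enforcing. The cap value and class names are hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class Applicant:
        name: str
        tickets_remaining: int
        interviews_attended: list[str] = field(default_factory=list)

    class InterviewTicketSystem:
        def __init__(self, specialty_cap: int):
            self.cap = specialty_cap
            self.applicants: dict[str, Applicant] = {}

        def register_applicant(self, name: str) -> None:
            # Each applicant starts the cycle with `cap` tickets.
            self.applicants[name] = Applicant(name, tickets_remaining=self.cap)

        def redeem_ticket(self, name: str, program: str) -> bool:
            # A ticket is marked as used when the applicant attends an interview;
            # once no tickets remain, no further interviews can be scheduled.
            applicant = self.applicants[name]
            if applicant.tickets_remaining == 0:
                return False
            applicant.tickets_remaining -= 1
            applicant.interviews_attended.append(program)
            return True

    # Example with a hypothetical specialty-specific cap of 12 interviews.
    its = InterviewTicketSystem(specialty_cap=12)
    its.register_applicant("Applicant A")
    assert its.redeem_ticket("Applicant A", "Program X")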
A Model for Exploring Compatibility Between Applicants and Residency Programs: Right Resident, Right Program
Winkel, Abigail Ford; Morgan, Helen Kang; Burk-Rafel, Jesse; Dalrymple, John L; Chiang, Seine; Marzano, David; Major, Carol; Katz, Nadine T; Ollendorff, Arthur T; Hammoud, Maya M
Holistic review of residency applications is touted as the gold standard for selection, yet vast application numbers leave programs reliant on screening with filters, such as United States Medical Licensing Examination scores, that do not reliably predict resident performance and may threaten diversity. Applicants struggle to identify which programs to apply to and devote attention to these processes throughout most of the fourth year, distracting from their clinical education. In this perspective, educators across the undergraduate and graduate medical education continuum propose new models for student-program compatibility based on design thinking sessions with stakeholders in obstetrics and gynecology education from a broad range of training environments. First, we describe a framework for applicant-program compatibility based on applicant priorities and program offerings, including clinical training, academic training, practice setting, residency culture, personal life, and professional goals. Second, we present a conceptual model for applicant screening, based on metrics, experiences, attributes, and alignment with program priorities, that might facilitate holistic review. We call for the design and validation of novel metrics, such as situational judgment tests for professionalism. Together, these steps could improve the transparency, efficiency, and fidelity of the residency application process. The models presented can be adapted to the priorities and values of other specialties.
PMID: 33278296
ISSN: 1873-233X
CID: 4708352
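A minimal sketch, hypothetical rather than drawn from the framework's actual scoring rules, of the applicant-program compatibility idea described above: the applicant weights the six domains by priority, the program rates its offerings in the same domains, and compatibility is summarized as a weighted coverage score. Domain keys, scales, and the scoring function are illustrative assumptions.

    DOMAINS = [
        "clinical_training", "academic_training", "practice_setting",
        "residency_culture", "personal_life", "professional_goals",
    ]

    def compatibility(applicant_priorities: dict[str, float],
                      program_offerings: dict[str, float]) -> float:
        """Weight each domain by applicant priority (0-1) and score how well the
        program's offering (0-1) covers it; returns a 0-1 compatibility score."""
        total_weight = sum(applicant_priorities[d] for d in DOMAINS)
        covered = sum(applicant_priorities[d] * min(program_offerings[d], 1.0)
                      for d in DOMAINS)
        return covered / total_weight if total_weight else 0.0

    # Hypothetical applicant priorities and program offerings.
    applicant = {"clinical_training": 0.9, "academic_training": 0.6, "practice_setting": 0.4,
                 "residency_culture": 0.8, "personal_life": 0.7, "professional_goals": 0.5}
    program = {"clinical_training": 0.8, "academic_training": 0.9, "practice_setting": 0.5,
               "residency_culture": 0.6, "personal_life": 0.4, "professional_goals": 0.7}
    print(f"Compatibility: {compatibility(applicant, program):.2f}")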