Searched for: in-biosketch:true person:burkrj01
Total Results: 47

TRainee Attributable & Automatable Care Evaluations in Real-time (TRACERs): A Scalable Approach for Linking Education to Patient Care

Burk-Rafel, Jesse; Sebok-Syer, Stefanie S; Santen, Sally A; Jiang, Joshua; Caretta-Weyer, Holly A; Iturrate, Eduardo; Kelleher, Matthew; Warm, Eric J; Schumacher, Daniel J; Kinnear, Benjamin
Competency-based medical education (CBME) is an outcomes-based approach to education and assessment that focuses on what competencies trainees need to learn in order to provide effective patient care. Despite this goal of providing quality patient care, trainees rarely receive measures of their clinical performance. This is problematic because defining a trainee's learning progression requires measuring their clinical performance. Traditional clinical performance measures (CPMs) are often met with skepticism from trainees given their poor individual-level attribution. Resident-sensitive quality measures (RSQMs) are attributable to individuals, but lack the expeditiousness needed to deliver timely feedback and can be difficult to automate at scale across programs. In this eye opener, the authors present a conceptual framework for a new type of measure - TRainee Attributable & Automatable Care Evaluations in Real-time (TRACERs) - attuned to both automation and trainee attribution as the next evolutionary step in linking education to patient care. TRACERs have five defining characteristics: meaningful (for patient care and trainees), attributable (sufficiently to the trainee of interest), automatable (minimal human input once fully implemented), scalable (across electronic health records [EHRs] and training environments), and real-time (amenable to formative educational feedback loops). Ideally, TRACERs optimize all five characteristics to the greatest degree possible. TRACERs are uniquely focused on measures of clinical performance that are captured in the EHR, whether routinely collected or generated using sophisticated analytics, and are intended to complement (not replace) other sources of assessment data. TRACERs have the potential to contribute to a national system of high-density, trainee-attributable, patient-centered outcome measures.
PMCID:10198229
PMID: 37215538
ISSN: 2212-277X
CID: 5503722
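
The five TRACER characteristics above lend themselves to a simple data representation. Below is a minimal, hypothetical Python sketch of how one might profile a candidate measure against those characteristics; the class, field names, and example measure are illustrative assumptions, not an implementation from the paper.

```python
# Hypothetical sketch only: the paper defines TRACERs conceptually.
# This class and the example measure are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TracerMeasure:
    """A candidate TRACER, scored 0-1 on each defining characteristic."""
    name: str
    meaningful: float    # meaningful for patient care and trainees
    attributable: float  # sufficiently attributable to the trainee of interest
    automatable: float   # minimal human input once fully implemented
    scalable: float      # portable across EHRs and training environments
    real_time: float     # amenable to formative feedback loops

    def weakest_characteristic(self) -> str:
        """Return the characteristic most in need of optimization."""
        scores = {
            "meaningful": self.meaningful,
            "attributable": self.attributable,
            "automatable": self.automatable,
            "scalable": self.scalable,
            "real_time": self.real_time,
        }
        return min(scores, key=scores.get)

# Example: a hypothetical EHR-derived measure of sepsis care
measure = TracerMeasure("time-to-antibiotics in sepsis", 0.9, 0.7, 0.8, 0.5, 0.9)
print(measure.weakest_characteristic())  # -> "scalable"
```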

Toward (More) Valid Comparison of Residency Applicants' Grades: Cluster Analysis of Clerkship Grade Distributions Across 135 U.S. MD-granting Medical Schools

Burk-Rafel, Jesse; Reinstein, Ilan; Park, Yoon Soo
PMID: 36287686
ISSN: 1938-808X
CID: 5358022

Medical Student Well-Being While Studying for the USMLE Step 1: The Impact of a Goal Score

Rashid, Hanin; Runyon, Christopher; Burk-Rafel, Jesse; Cuddy, Monica M; Dyrbye, Liselotte; Arnhart, Katie; Luciw-Dubas, Ulana; Mechaber, Hilit F; Lieberman, Steve; Paniagua, Miguel
PMID: 37460518
ISSN: 1938-808X
CID: 5535552

Development and Validation of a Machine Learning Model for Automated Assessment of Resident Clinical Reasoning Documentation

Schaye, Verity; Guzman, Benedict; Burk-Rafel, Jesse; Marin, Marina; Reinstein, Ilan; Kudlowitz, David; Miller, Louis; Chun, Jonathan; Aphinyanaphongs, Yindalon
BACKGROUND: Residents receive infrequent feedback on their clinical reasoning (CR) documentation. While machine learning (ML) and natural language processing (NLP) have been used to assess CR documentation in standardized cases, no studies have described similar use in the clinical environment. OBJECTIVE: Using Kane's framework, the authors developed and validated an ML model for automated assessment of CR documentation quality in residents' admission notes. DESIGN, PARTICIPANTS, MAIN MEASURES: Internal medicine residents' and subspecialty fellows' admission notes at one medical center from July 2014 to March 2020 were extracted from the electronic health record. Using a validated CR documentation rubric, the authors rated 414 notes for the ML development dataset. Notes were truncated to isolate the relevant portion; NLP software (cTAKES) extracted disease/disorder named entities, and human review generated CR terms. The final model had three input variables and classified notes as demonstrating low- or high-quality CR documentation. The ML model was applied to a retrospective dataset (9591 notes) for human validation and data analysis. Reliability between human and ML ratings was assessed on 205 of these notes with Cohen's kappa. CR documentation quality by post-graduate year (PGY) was evaluated by the Mantel-Haenszel test of trend. KEY RESULTS: The top-performing logistic regression model had an area under the receiver operating characteristic curve of 0.88, a positive predictive value of 0.68, and an accuracy of 0.79. Cohen's kappa was 0.67. Of the 9591 notes, 31.1% demonstrated high-quality CR documentation; quality increased from 27.0% (PGY1) to 31.0% (PGY2) to 39.0% (PGY3) (p < .001 for trend). Validity evidence was collected in each domain of Kane's framework (scoring, generalization, extrapolation, and implications). CONCLUSIONS: The authors developed and validated a high-performing ML model that classifies CR documentation quality in resident admission notes in the clinical environment, a novel application of ML and NLP with many potential use cases.
PMCID:9296753
PMID: 35710676
ISSN: 1525-1497
CID: 5277902
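
A rough illustration of the modeling step this abstract describes: a logistic regression over a small number of note-level features, evaluated with AUROC, PPV, and Cohen's kappa. The synthetic data and feature names below are assumptions for demonstration; the study's cTAKES/NLP feature extraction pipeline is not reproduced.

```python
# Hedged sketch: logistic regression over three hypothetical note-level
# features, evaluated with the metrics the abstract reports. Data are
# simulated; this is not the authors' pipeline or dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score, precision_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 414  # development-set size reported in the abstract

# Hypothetical inputs: counts of disease/disorder entities, CR terms,
# and truncated-note length (all assumed feature choices).
X = rng.poisson(lam=[4.0, 2.0, 30.0], size=(n, 3)).astype(float)
y = (X[:, 1] + rng.normal(0, 1, n) > 2.5).astype(int)  # 1 = high-quality CR

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)

prob = clf.predict_proba(X_te)[:, 1]
pred = clf.predict(X_te)
print("AUROC:", roc_auc_score(y_te, prob))
print("PPV:", precision_score(y_te, pred))
print("kappa vs. (simulated) human ratings:", cohen_kappa_score(y_te, pred))
```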

Development of a Clinical Reasoning Documentation Assessment Tool for Resident and Fellow Admission Notes: a Shared Mental Model for Feedback

Schaye, Verity; Miller, Louis; Kudlowitz, David; Chun, Jonathan; Burk-Rafel, Jesse; Cocks, Patrick; Guzman, Benedict; Aphinyanaphongs, Yindalon; Marin, Marina
BACKGROUND: Residents and fellows receive little feedback on their clinical reasoning documentation. Barriers include lack of a shared mental model and variability in the reliability and validity of existing assessment tools. Of the existing tools, the IDEA assessment tool includes a robust assessment of clinical reasoning documentation focusing on four elements (interpretive summary, differential diagnosis, and explanation of reasoning for lead and alternative diagnoses) but lacks descriptive anchors, threatening its reliability. OBJECTIVE: Our goal was to develop a valid and reliable assessment tool for clinical reasoning documentation, building off the IDEA assessment tool. DESIGN, PARTICIPANTS, AND MAIN MEASURES: The Revised-IDEA assessment tool was developed by four clinician educators through iterative review of admission notes written by medicine residents and fellows and was subsequently piloted with additional faculty to ensure response process validity. A random sample of 252 notes from July 2014 to June 2017, written by 30 trainees across several chief complaints, was rated. Three raters rated 20% of the notes to demonstrate internal structure validity. A quality cut-off score was determined using Hofstee standard setting. KEY RESULTS: The Revised-IDEA assessment tool includes the same four domains as the IDEA assessment tool, with more detailed descriptive prompts, new Likert scale anchors, and a score range of 0-10. Intraclass correlation was high for the notes rated by three raters: 0.84 (95% CI 0.74-0.90). Scores ≥6 were determined to demonstrate high-quality clinical reasoning documentation. Only 53% of notes (134/252) were high-quality. CONCLUSIONS: The Revised-IDEA assessment tool is reliable and easy to use for feedback on clinical reasoning documentation in resident and fellow admission notes, with descriptive anchors that facilitate a shared mental model for feedback.
PMID: 33945113
ISSN: 1525-1497
CID: 4866222
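
For readers unfamiliar with the reliability statistics reported here, the sketch below simulates three raters scoring notes on the 0-10 Revised-IDEA scale, computes an intraclass correlation (using the pingouin library as one option), and applies the ≥6 cut score. The simulated ratings are assumptions; this is not the study's data or code.

```python
# Hedged sketch of the reliability check and cut score described above.
# All ratings are simulated for illustration.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)
n_notes = 50  # roughly 20% of the 252 notes were triple-rated in the study

true_quality = rng.uniform(0, 10, n_notes)
rows = []
for rater in ["A", "B", "C"]:
    # Each rater sees the same notes with some rating noise, clipped to 0-10
    noisy = np.clip(np.round(true_quality + rng.normal(0, 1, n_notes)), 0, 10)
    rows += [{"note": i, "rater": rater, "score": s} for i, s in enumerate(noisy)]
df = pd.DataFrame(rows)

# Intraclass correlation across the three raters
icc = pg.intraclass_corr(data=df, targets="note", raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])

# Apply the Hofstee-derived quality cut score to one rater's scores
high_quality = (df[df.rater == "A"].score >= 6).mean()
print(f"High-quality notes (rater A): {high_quality:.0%}")
```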

Development and Validation of a Machine Learning-Based Decision Support Tool for Residency Applicant Screening and Review

Burk-Rafel, Jesse; Reinstein, Ilan; Feng, James; Kim, Moosun Brad; Miller, Louis H; Cocks, Patrick M; Marin, Marina; Aphinyanaphongs, Yindalon
PURPOSE: Residency programs face overwhelming numbers of residency applications, limiting holistic review. Artificial intelligence techniques have been proposed to address this challenge, but such tools have not yet been created. Here, a multidisciplinary team sought to develop and validate a machine learning (ML)-based decision support tool (DST) for residency applicant screening and review. METHOD: Categorical applicant data from the 2018, 2019, and 2020 residency application cycles (n = 8,243 applicants) at one large internal medicine residency program were downloaded from the Electronic Residency Application Service and linked to the outcome measure: interview invitation by human reviewers (n = 1,235 invites). An ML model using gradient boosting was trained on 80% of applicants, using over 60 applicant features (e.g., demographics, experiences, academic metrics). Model performance was validated on held-out data (20% of applicants). Sensitivity analysis was conducted without United States Medical Licensing Examination (USMLE) scores. An interactive DST incorporating the ML model was designed and deployed, providing applicant- and cohort-level visualizations. RESULTS: The ML model areas under the receiver operating characteristic and precision recall curves were 0.95 and 0.76, respectively; these changed to 0.94 and 0.72, respectively, with removal of USMLE scores. Applicants' medical school information was an important driver of predictions, which had face validity based on the local selection process, but numerous predictors contributed. Program directors used the DST in the 2021 application cycle to select 20 applicants for interview who had been initially screened out during human review. CONCLUSIONS: The authors developed and validated an ML algorithm for predicting residency interview offers from numerous application elements with high performance, even when USMLE scores were removed. Model deployment in a DST highlighted its potential for screening candidates and helped quantify and mitigate biases existing in the selection process. Further work will incorporate unstructured textual data through natural language processing methods.
PMID: 34348383
ISSN: 1938-808X
CID: 5050022
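
A hedged sketch of the general approach this abstract describes: a gradient boosting classifier trained on an 80% split of encoded application features, evaluated on held-out data with AUROC and PR AUC, plus a drop-a-feature sensitivity analysis analogous to removing USMLE scores. The synthetic features and effect sizes below are assumptions; the actual model used over 60 ERAS-derived features.

```python
# Hedged sketch: gradient boosting over synthetic application features,
# with a held-out evaluation and a drop-a-feature sensitivity analysis.
# Not the authors' model or data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 8243  # applicants across the 2018-2020 cycles

# Hypothetical encoded features (e.g., school info, experience counts, scores)
X = rng.normal(size=(n, 10))
logit = 1.5 * X[:, 0] + 0.8 * X[:, 1] - 3.0  # feature 0 as a strong driver
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)  # invite = 1

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
model = GradientBoostingClassifier().fit(X_tr, y_tr)

prob = model.predict_proba(X_te)[:, 1]
print("AUROC:", roc_auc_score(y_te, prob))
print("PR AUC:", average_precision_score(y_te, prob))

# Sensitivity analysis: retrain without one feature column
# (analogous to removing USMLE scores from the feature set)
model_drop = GradientBoostingClassifier().fit(X_tr[:, 1:], y_tr)
prob_drop = model_drop.predict_proba(X_te[:, 1:])[:, 1]
print("AUROC without feature 0:", roc_auc_score(y_te, prob_drop))
```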

The AMA Graduate Profile: Tracking Medical School Graduates Into Practice

Burk-Rafel, Jesse; Marin, Marina; Triola, Marc; Fancher, Tonya; Ko, Michelle; Mejicano, George; Skochelak, Susan; Santen, Sally A; Richardson, Judee
PMID: 34705676
ISSN: 1938-808X
CID: 5042522

Systems-Level Reforms to the US Resident Selection Process: A Scoping Review

Zastrow, Ryley K; Burk-Rafel, Jesse; London, Daniel A
Background: Calls to reform the US resident selection process are growing, given increasing competition and the inefficiencies of the current system. Though numerous reforms have been proposed, they have not been comprehensively cataloged. Objective: This scoping review was conducted to characterize and categorize literature proposing systems-level reforms to the resident selection process. Methods: Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) guidelines, searches of the Embase, MEDLINE, Scopus, and Web of Science databases were performed for references published from January 2005 to February 2020. Articles were included if they proposed reforms that were applicable or generalizable to all applicants, medical schools, or residency programs. An inductive approach to qualitative content analysis was used to generate codes and higher-order categories. Results: Of 10,407 unique references screened, 116 met our inclusion criteria. Qualitative analysis generated 34 codes that were grouped into 14 categories according to the broad stages of resident selection: application submission, application review, interviews, and the Match. The most commonly proposed reforms were implementation of an application cap (n = 28), creation of a standardized program database (n = 21), utilization of standardized letters of evaluation (n = 20), and pre-interview screening (n = 13). Conclusions: This scoping review collated and categorized proposed reforms to the resident selection process, developing a common language and framework to facilitate national conversations and change.
PMCID:8207920
PMID: 34178261
ISSN: 1949-8357
CID: 4964962