Searched for: person:burkrj01, in-biosketch:true
Total Results: 45


Precision Medical Education

Triola, Marc M; Burk-Rafel, Jesse
Medical schools and residency programs are increasingly incorporating personalization of content, pathways, and assessments to align with a competency-based model. Yet, such efforts face challenges involving large amounts of data, sometimes struggling to deliver insights in a timely fashion for trainees, coaches, and programs. In this article, the authors argue that the emerging paradigm of precision medical education (PME) may ameliorate some of these challenges. However, PME lacks a widely accepted definition and a shared model of guiding principles and capacities, limiting widespread adoption. The authors propose defining PME as a systematic approach that integrates longitudinal data and analytics to drive precise educational interventions that address each individual learner's needs and goals in a continuous, timely, and cyclical fashion, ultimately improving meaningful educational, clinical, or system outcomes. Borrowing from precision medicine, they offer an adapted shared framework. In the P4 medical education framework, PME should (1) take a proactive approach to acquiring and using trainee data; (2) generate timely personalized insights through precision analytics (including artificial intelligence and decision-support tools); (3) design precision educational interventions (learning, assessment, coaching, pathways) in a participatory fashion, with trainees at the center as co-producers; and (4) ensure interventions are predictive of meaningful educational, professional, or clinical outcomes. 
Implementing PME will require new foundational capacities: flexible educational pathways and programs responsive to PME-guided dynamic and competency-based progression; comprehensive longitudinal data on trainees linked to educational and clinical outcomes; shared development of requisite technologies and analytics to effect educational decision-making; and a culture that embraces a precision approach, with research to gather validity evidence for this approach and development efforts targeting new skills needed by learners, coaches, and educational leaders. Anticipating pitfalls in the use of this approach will be important, as will ensuring it deepens, rather than replaces, the interaction of trainees and their coaches.
PMID: 37027222
ISSN: 1938-808X
CID: 5537182

Development and Validation of a Machine Learning-Based Decision Support Tool for Residency Applicant Screening and Review

Burk-Rafel, Jesse; Reinstein, Ilan; Feng, James; Kim, Moosun Brad; Miller, Louis H; Cocks, Patrick M; Marin, Marina; Aphinyanaphongs, Yindalon
PURPOSE: Residency programs face overwhelming numbers of residency applications, limiting holistic review. Artificial intelligence techniques have been proposed to address this challenge, but such tools have not yet been built and validated. Here, a multidisciplinary team sought to develop and validate a machine learning (ML)-based decision support tool (DST) for residency applicant screening and review. METHOD: Categorical applicant data from the 2018, 2019, and 2020 residency application cycles (n = 8,243 applicants) at one large internal medicine residency program were downloaded from the Electronic Residency Application Service and linked to the outcome measure: interview invitation by human reviewers (n = 1,235 invites). An ML model using gradient boosting was designed using training data (80% of applicants) with over 60 applicant features (e.g., demographics, experiences, academic metrics). Model performance was validated on held-out data (20% of applicants). Sensitivity analysis was conducted without United States Medical Licensing Examination (USMLE) scores. An interactive DST incorporating the ML model was designed and deployed, providing applicant- and cohort-level visualizations. RESULTS: The ML model's areas under the receiver operating characteristic (AUROC) and precision-recall (AUPRC) curves were 0.95 and 0.76, respectively; these changed to 0.94 and 0.72, respectively, with removal of USMLE scores. Applicants' medical school information was an important driver of predictions, which had face validity based on the local selection process, but numerous predictors contributed. Program directors used the DST in the 2021 application cycle to select 20 applicants for interview who had been initially screened out during human review. CONCLUSIONS: The authors developed and validated an ML algorithm for predicting residency interview offers from numerous application elements with high performance, even when USMLE scores were removed. Model deployment in a DST highlighted its potential for screening candidates and helped quantify and mitigate biases in the selection process. Further work will incorporate unstructured textual data through natural language processing methods.
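The held-out validation metrics reported above (AUROC 0.95, AUPRC 0.76) are threshold-free rank metrics. As a minimal sketch, with invented toy labels and scores rather than the study's applicant data, the AUROC reduces to the probability that a randomly chosen invited applicant is scored above a randomly chosen non-invited one:

```python
def auroc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) identity:
    the probability that a random positive outscores a random negative,
    counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented toy data: 1 = invited to interview, scores = model probabilities
labels = [1, 0, 1, 0, 0, 1, 0, 0]
scores = [0.9, 0.2, 0.8, 0.4, 0.1, 0.6, 0.7, 0.3]
print(auroc(labels, scores))  # 14 of 15 positive/negative pairs correctly ranked
```

Library implementations (e.g., scikit-learn's `roc_auc_score`) compute the same quantity; the rank-sum form simply makes the interpretation explicit.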
PMID: 34348383
ISSN: 1938-808X
CID: 5050022

The AMA Graduate Profile: Tracking Medical School Graduates Into Practice

Burk-Rafel, Jesse; Marin, Marina; Triola, Marc; Fancher, Tonya; Ko, Michelle; Mejicano, George; Skochelak, Susan; Santen, Sally A; Richardson, Judee
PMID: 34705676
ISSN: 1938-808X
CID: 5042522

Macy Foundation Innovation Report Part II: From Hype to Reality: Innovators' Visions for Navigating AI Integration Challenges in Medical Education

Gin, Brian C; LaForge, Kate; Burk-Rafel, Jesse; Boscardin, Christy K
PURPOSE: Artificial intelligence (AI) promises to significantly impact medical education, yet its implementation raises important questions about educational effectiveness, ethical use, and equity. In the second part of a 2-part innovation report, which was commissioned by the Josiah Macy Jr. Foundation to inform discussions at a conference on AI in medical education, the authors explore the perspectives of innovators actively integrating AI into medical education, examining their perceptions regarding the impacts, opportunities, challenges, and strategies for successful AI adoption and risk mitigation. METHOD: Semi-structured interviews were conducted with 25 medical education AI innovators (including learners, educators, institutional leaders, and industry representatives) from June to August 2024. Interviews explored participants' perceptions of AI's influence on medical education, challenges to integration, and strategies for mitigating challenges. Transcripts were analyzed using thematic analysis to identify themes and synthesize participants' recommendations for AI integration. RESULTS: Innovators' responses were synthesized into 2 main thematic areas: (1) AI's impact on teaching, learning, and assessment, and (2) perceived threats and strategies for mitigating them. Participants identified AI's potential to enact precision education through virtual tutors and standardized patients, support active learning formats, enable centralized teaching, and facilitate cognitive offloading. AI-enhanced assessments could automate grading, predict learner trajectories, and integrate performance data from clinical interactions. Yet, innovators expressed concerns over threats to transparency and validity, potential propagation of biases, risks of over-reliance and deskilling, and institutional disparities. Proposed mitigation strategies emphasized validating AI outputs, establishing foundational competencies, fostering collaboration and open-source sharing, enhancing AI literacy, and maintaining robust ethical standards. CONCLUSIONS: AI innovators in medical education envision transformative opportunities for individualized learning and precision education, balanced against critical threats. Realizing these benefits requires proactive, collaborative efforts to establish rigorous validation frameworks; uphold foundational medical competencies; and prioritize ethical, equitable AI integration.
PMID: 40479503
ISSN: 1938-808X
CID: 5862832

How Data Analytics Can Be Leveraged to Enhance Graduate Clinical Skills Education

Garibaldi, Brian T; Hollon, McKenzie; Knopp, Michelle I; Winkel, Abigail Ford; Burk-Rafel, Jesse; Caretta-Weyer, Holly A
PMCID:12080502
PMID: 40386478
ISSN: 1949-8357
CID: 5852752

Large Language Model-Augmented Strategic Analysis of Innovation Projects in Graduate Medical Education

Winkel, Abigail Ford; Burk-Rafel, Jesse; Terhune, Kyla; Garibaldi, Brian T; DeWaters, Ami L; Co, John Patrick T; Andrews, John S
PMCID:12080501
PMID: 40386486
ISSN: 1949-8357
CID: 5852792

Artificial intelligence based assessment of clinical reasoning documentation: an observational study of the impact of the clinical learning environment on resident documentation quality

Schaye, Verity; DiTullio, David J; Sartori, Daniel J; Hauck, Kevin; Haller, Matthew; Reinstein, Ilan; Guzman, Benedict; Burk-Rafel, Jesse
BACKGROUND: Objective measures and large datasets are needed to determine which aspects of the Clinical Learning Environment (CLE) impact the essential skill of clinical reasoning documentation. Artificial Intelligence (AI) offers a solution. Here, the authors sought to determine what aspects of the CLE might be impacting resident clinical reasoning documentation quality as assessed by AI. METHODS: In this observational, retrospective cross-sectional analysis of hospital admission notes from the Electronic Health Record (EHR), all categorical internal medicine (IM) residents who wrote at least one admission note during the study period (July 1, 2018 to June 30, 2023) at two sites of NYU Grossman School of Medicine's IM residency program were included. Clinical reasoning documentation quality of admission notes was classified as low- or high-quality using a supervised machine learning model. From note-level data, the shift (day or night) and note index within shift (whether a note was the first, second, etc., within a shift) were calculated. These aspects of the CLE were included as potential markers of workload, which has been shown to have a strong relationship with resident performance. Patient data were also captured, including age, sex, Charlson Comorbidity Index, and primary diagnosis. The relationship between these variables and clinical reasoning documentation quality was analyzed using generalized estimating equations accounting for resident-level clustering. RESULTS: Across 37,750 notes authored by 474 residents, patients who were older, had more pre-existing comorbidities, and presented with certain primary diagnoses (e.g., infectious and pulmonary conditions) were associated with higher clinical reasoning documentation quality. When controlling for these and other patient factors, variables associated with clinical reasoning documentation quality included academic year (adjusted odds ratio, aOR, for high-quality: 1.10; 95% CI 1.06-1.15; P <.001), night shift (aOR 1.21; 95% CI 1.13-1.30; P <.001), and note index (aOR 0.93; 95% CI 0.90-0.95; P <.001). CONCLUSIONS: AI can be used to assess complex skills such as clinical reasoning in authentic clinical notes, which can help elucidate the potential impact of the CLE on resident clinical reasoning documentation quality. Future work should explore residency program and systems interventions to optimize the CLE.
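The adjusted odds ratios above come from a logit-link model, in which each aOR is the exponential of a fitted coefficient. A minimal sketch of that conversion, using an invented coefficient for illustration (not a value fitted in the study):

```python
import math

def odds_ratio(beta):
    """Convert a logistic-regression coefficient to an odds ratio."""
    return math.exp(beta)

def ci_to_or(lo, hi):
    """Convert a confidence interval on the coefficient scale to one
    on the odds-ratio scale (exponentiate both endpoints)."""
    return math.exp(lo), math.exp(hi)

# Hypothetical coefficient for "night shift" (NOT the study's fitted value)
beta_night = 0.19
print(round(odds_ratio(beta_night), 2))  # 1.21, the scale of the reported aOR
```

A coefficient of 0 corresponds to an aOR of 1.0, i.e., no association; an aOR below 1 (like the note-index aOR of 0.93) reflects a negative coefficient.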
PMCID:12016287
PMID: 40264096
ISSN: 1472-6920
CID: 5830212

Large Language Model-Based Assessment of Clinical Reasoning Documentation in the Electronic Health Record Across Two Institutions: Development and Validation Study

Schaye, Verity; DiTullio, David; Guzman, Benedict Vincent; Vennemeyer, Scott; Shih, Hanniel; Reinstein, Ilan; Weber, Danielle E; Goodman, Abbie; Wu, Danny T Y; Sartori, Daniel J; Santen, Sally A; Gruppen, Larry; Aphinyanaphongs, Yindalon; Burk-Rafel, Jesse
BACKGROUND: Clinical reasoning (CR) is an essential skill, yet physicians often receive limited feedback on it. Artificial intelligence holds promise to fill this gap. OBJECTIVE: We report the development of named entity recognition (NER), logic-based and large language model (LLM)-based assessments of CR documentation in the electronic health record across 2 institutions (New York University Grossman School of Medicine [NYU] and University of Cincinnati College of Medicine [UC]). METHODS: Performance was evaluated using F1-scores for the NER, logic-based model and area under the receiver operating characteristic curve (AUROC) and area under the precision-recall curve (AUPRC) for the LLMs. RESULTS: The NER, logic-based model achieved F1-scores of 0.80, 0.74, and 0.80 for D0, D1, and D2, respectively. The GatorTron LLM performed best for EA2 scores (AUROC/AUPRC 0.75/0.69). CONCLUSIONS: This is the first multi-institutional study to apply LLMs for assessing CR documentation in the electronic health record. Such tools can enhance feedback on CR. Lessons learned by implementing these models at distinct institutions support the generalizability of this approach.
PMID: 40117575
ISSN: 1438-8871
CID: 5813782

Community Racial and Ethnic Representation Among Physicians in US Internal Medicine Residency Programs

Kim, Jung G; Lett, Elle; Boscardin, Christy K; Hauer, Karen E; Chen, Isabel L; Henderson, Mark C; Hogan, Sean O; Yamazaki, Kenji; Burk-Rafel, Jesse; Fancher, Tonya; Nguyen, Mytien; Holmboe, Eric S; McDade, William; Boatright, Dowin H
IMPORTANCE: Increasing underrepresented in medicine (URIM) physicians among historically underserved communities helps reduce health disparities. The concordance of URIM physicians with their communities improves access to care, particularly for American Indian and Alaska Native, Black, and Hispanic or Latinx individuals. OBJECTIVES: To explore county-level racial and ethnic representation of US internal medicine (IM) residents, examine racial and ethnic concordance between residents and their communities, and assess whether representation varies by presence of academic institutions or underserved settings. DESIGN, SETTING, AND PARTICIPANTS: This retrospective cross-sectional study collected data from the Association of American Medical Colleges, Accreditation Council for Graduate Medical Education (ACGME), Area Health Resources Files, and US Department of Education data on ACGME-accredited US IM residency programs and their associated county populations. Self-reported racial and ethnic data from 2018 for 4848 residents in 393 IM programs in 205 counties were used. Data were analyzed between February 15 and September 20, 2024. EXPOSURE: County-level presence of academic health centers (AHCs), minority-serving institutions (MSIs), health professional shortage areas (HPSAs), and rurality. MAIN OUTCOMES AND MEASURES: Main outcomes were representation quotients (RQs), the ratio of the proportion of IM residents to the concordant county-level racial and ethnic population proportion. Quantile linear regression models on median representation were used to identify the association with URIM, Asian, and White residents by US Census division and county-level AHCs, MSIs, HPSAs, and rurality. RESULTS: Among 4848 residents, 4 (0.08%) self-identified as American Indian or Alaska Native, 1709 (35.3%) as Asian, 289 (6.0%) as Black, 211 (4.4%) as Hispanic or Latinx, 2 (0.04%) as Native Hawaiian or Other Pacific Islander, and 2633 (54.3%) as White. A total of 761 (15.7%) were classified as URIM. Among URIM groups, American Indian and Alaska Native (mean [SE] RQ, 0.00 [0.04]), Black (mean [SE] RQ, 0.09 [0.20]), Hispanic and Latinx (mean [SE] RQ, 0.00 [0.04]), and Native Hawaiian and Other Pacific Islander (mean [SE] RQ, 0.00 [0.26]) residents were grossly underrepresented compared with their training sites' county-level representation. Fifty-one of 205 counties (24.8%) with IM programs had no URIM residents. Black and Hispanic or Latinx residents had higher representation in counties with more MSIs (mean [SD] RQ, 0.19 [0.24]; P = .04; mean [SD] RQ, 0.15 [0.04]; P < .001, respectively), and Hispanic or Latinx residents were less represented in counties with more AHCs (mean [SD] RQ, 0.00 [0.06]; P < .001). Asian residents had lower RQs in counties with more MSIs (mean [SD] RQ, 6.00 [0.65]; P < .001), and White residents had higher representation in counties with greater presence of AHCs (mean [SD] RQ, 0.77 [0.04]; P = .007). CONCLUSIONS AND RELEVANCE: In this cross-sectional study, URIM IM residents remained underrepresented compared with their programs' county populations. These findings should inform racial and ethnic diversity policies to address the continuing underrepresentation among graduate medical education physicians, which adversely impacts the care of historically underserved communities.
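The representation quotient used above is a ratio of two proportions: a group's share among residents divided by its share of the county population, with RQ < 1 indicating underrepresentation. A minimal sketch with invented counts (not the study's data):

```python
def representation_quotient(group_residents, total_residents,
                            group_population, total_population):
    """RQ = (group's share of residents) / (group's share of the county
    population). RQ = 1 means parity; RQ < 1 means underrepresentation."""
    resident_share = group_residents / total_residents
    population_share = group_population / total_population
    return resident_share / population_share

# Hypothetical county: 3 of 60 residents vs 20,000 of 100,000 people
print(representation_quotient(3, 60, 20_000, 100_000))  # 0.25
```

In this invented example the group makes up 5% of residents but 20% of the county, giving an RQ of 0.25, i.e., one quarter of parity.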
PMCID:11783195
PMID: 39883461
ISSN: 2574-3805
CID: 5781162

Characterizing Residents' Clinical Experiences-A Step Toward Precision Education

Burk-Rafel, Jesse; Drake, Carolyn B; Sartori, Daniel J
PMID: 39693075
ISSN: 2574-3805
CID: 5764502