Searched for: in-biosketch:yes person:schayv01
Total Results: 38


Artificial intelligence based assessment of clinical reasoning documentation: an observational study of the impact of the clinical learning environment on resident documentation quality

Schaye, Verity; DiTullio, David J; Sartori, Daniel J; Hauck, Kevin; Haller, Matthew; Reinstein, Ilan; Guzman, Benedict; Burk-Rafel, Jesse
BACKGROUND: Objective measures and large datasets are needed to determine which aspects of the Clinical Learning Environment (CLE) impact the essential skill of clinical reasoning documentation. Artificial Intelligence (AI) offers a solution. Here, the authors sought to determine what aspects of the CLE might be impacting resident clinical reasoning documentation quality as assessed by AI. METHODS: In this observational, retrospective cross-sectional analysis of hospital admission notes from the Electronic Health Record (EHR), all categorical internal medicine (IM) residents who wrote at least one admission note during the study period (July 1, 2018 to June 30, 2023) at two sites of NYU Grossman School of Medicine's IM residency program were included. Clinical reasoning documentation quality of admission notes was classified as low- or high-quality using a supervised machine learning model. From note-level data, the shift (day or night) and note index within shift (whether a note was the first, second, etc., within a shift) were calculated. These aspects of the CLE were included as potential markers of workload, which has been shown to have a strong relationship with resident performance. Patient data were also captured, including age, sex, Charlson Comorbidity Index, and primary diagnosis. The relationship between these variables and clinical reasoning documentation quality was analyzed using generalized estimating equations accounting for resident-level clustering. RESULTS: Across 37,750 notes authored by 474 residents, patients who were older, had more pre-existing comorbidities, and presented with certain primary diagnoses (e.g., infectious and pulmonary conditions) were associated with higher clinical reasoning documentation quality. When controlling for these and other patient factors, variables associated with clinical reasoning documentation quality included academic year (adjusted odds ratio, aOR, for high quality: 1.10; 95% CI 1.06-1.15; P <.001), night shift (aOR 1.21; 95% CI 1.13-1.30; P <.001), and note index (aOR 0.93; 95% CI 0.90-0.95; P <.001). CONCLUSIONS: AI can be used to assess complex skills such as clinical reasoning in authentic clinical notes, helping to elucidate the potential impact of the CLE on resident clinical reasoning documentation quality. Future work should explore residency program and systems interventions to optimize the CLE.
PMCID:12016287
PMID: 40264096
ISSN: 1472-6920
CID: 5830212
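
The analysis described above pairs a supervised note classifier with generalized estimating equations (GEE) that account for resident-level clustering. As a minimal sketch of that modeling step, not the study's code, assuming note-level data in a pandas DataFrame with hypothetical column names (high_quality, night_shift, note_index, resident_id, and so on), the statsmodels GEE API could be used as follows; exponentiated coefficients give adjusted odds ratios of the kind reported in the abstract:

```python
# Minimal sketch, not the study's code: a binomial GEE with resident-level
# clustering, on synthetic data with hypothetical column names.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
notes = pd.DataFrame({
    "high_quality": rng.binomial(1, 0.3, n),   # 1 = high-quality CR documentation
    "academic_year": rng.integers(1, 6, n),    # academic year of the note
    "night_shift": rng.binomial(1, 0.4, n),    # 1 = written on a night shift
    "note_index": rng.integers(1, 6, n),       # position of note within shift
    "patient_age": rng.normal(65, 15, n),
    "resident_id": rng.integers(1, 50, n),     # clustering unit
})

# An exchangeable working correlation handles repeated notes per resident,
# mirroring the resident-level clustering described in the abstract.
fit = smf.gee(
    "high_quality ~ academic_year + night_shift + note_index + patient_age",
    groups="resident_id",
    data=notes,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
).fit()

# Exponentiated coefficients are adjusted odds ratios (aORs); exponentiated
# confidence limits give the 95% CIs.
print(np.exp(fit.params))
print(np.exp(fit.conf_int()))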

Large Language Model-Based Assessment of Clinical Reasoning Documentation in the Electronic Health Record Across Two Institutions: Development and Validation Study

Schaye, Verity; DiTullio, David; Guzman, Benedict Vincent; Vennemeyer, Scott; Shih, Hanniel; Reinstein, Ilan; Weber, Danielle E; Goodman, Abbie; Wu, Danny T Y; Sartori, Daniel J; Santen, Sally A; Gruppen, Larry; Aphinyanaphongs, Yindalon; Burk-Rafel, Jesse
BACKGROUND: Clinical reasoning (CR) is an essential skill, yet physicians often receive limited feedback. Artificial intelligence holds promise to fill this gap. OBJECTIVE: We report the development of named entity recognition (NER), logic-based, and large language model (LLM)-based assessments of CR documentation in the electronic health record across 2 institutions (New York University Grossman School of Medicine [NYU] and University of Cincinnati College of Medicine [UC]). METHODS: Model performance was evaluated using F1-scores for the NER, logic-based model and area under the receiver operating characteristic curve (AUROC) and area under the precision-recall curve (AUPRC) for the LLMs. RESULTS: The NER, logic-based model achieved F1-scores of 0.80, 0.74, and 0.80 for D0, D1, and D2, respectively. The GatorTron LLM performed best for EA2 scores (AUROC/AUPRC 0.75/0.69). CONCLUSIONS: This is the first multi-institutional study to apply LLMs to assessing CR documentation in the electronic health record. Such tools can enhance feedback on CR. Lessons learned by implementing these models at distinct institutions support the generalizability of this approach.
PMID: 40117575
ISSN: 1438-8871
CID: 5813782
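
The validation described above reports F1-scores for the NER, logic-based model and AUROC/AUPRC for the LLM-based classifiers. A minimal sketch of computing those metrics with scikit-learn, using hypothetical gold-standard labels and model scores rather than the study's data, might look like this:

```python
# Minimal sketch, not the study's code: the discrimination metrics named in
# the abstract (F1, AUROC, AUPRC), computed on synthetic labels and scores.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score, f1_score

rng = np.random.default_rng(0)
y_true = rng.binomial(1, 0.3, 1000)                             # gold-standard ratings
y_score = np.clip(0.4 * y_true + 0.6 * rng.random(1000), 0, 1)  # model probabilities

auroc = roc_auc_score(y_true, y_score)               # area under the ROC curve
auprc = average_precision_score(y_true, y_score)     # area under the PR curve
f1 = f1_score(y_true, (y_score >= 0.5).astype(int))  # F1 at a 0.5 cutoff

print(f"AUROC={auroc:.2f}  AUPRC={auprc:.2f}  F1={f1:.2f}")
```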

The generative artificial intelligence revolution: How hospitalists can lead the transformation of medical education

Schaye, Verity; Triola, Marc M
PMID: 38591332
ISSN: 1553-5606
CID: 5725712

Implementing an accelerated three-year MD curriculum at NYU Grossman School of Medicine

Cangiarella, Joan; Rosenfeld, Mel; Poles, Michael; Webster, Tyler; Schaye, Verity; Ruggles, Kelly; Dinsell, Victoria; Triola, Marc M; Gillespie, Colleen; Grossman, Robert I; Abramson, Steven B
Over the last decade there has been tremendous growth in accelerated MD pathways that allow medical students to graduate in three years. Developing an accelerated pathway program requires commitment from students and faculty, with intensive rethinking and restructuring of the curriculum to ensure adequate content for achieving competency on an accelerated timeline. A re-visioning of assessment and advising must follow, and AI and new technologies can be applied to support teaching and learning. We describe the curricular revision to an accelerated pathway at NYU Grossman School of Medicine, highlighting our thought process, conceptual framework, assessment methods, and outcomes over the last ten years.
PMID: 39480996
ISSN: 1466-187X
CID: 5747302

Lessons in clinical reasoning - pitfalls, myths, and pearls: shoulder pain as the first and only manifestation of lung cancer

Díaz-Abad, Julia; Aranaz-Murillo, Amalia; Mayayo-Sinues, Esteban; Canchumanya-Huatuco, Nila; Schaye, Verity
OBJECTIVES: Lung cancer is the leading cause of cancer-related death and poses significant challenges in diagnosis and management. Although muscle metastases are exceedingly rare and typically not the initial clinical manifestation of neoplastic processes, their recognition is crucial for optimal patient care. METHODS: We present a case report of a 60-year-old man with shoulder pain and a deltoid muscle mass, initially suggestive of an undifferentiated pleomorphic sarcoma. However, further investigations, including radiological findings and muscle biopsy, revealed an unexpected primary lung adenocarcinoma. We performed a systematic literature search to identify the incidence of skeletal muscle metastases (SMM) and to reflect on how to improve the diagnosis of entities as atypical as this one. RESULTS: This atypical presentation highlights the importance of recognizing and addressing cognitive biases in clinical decision-making, as acknowledging the possibility of uncommon presentations is vital. By embracing a comprehensive approach that combines imaging studies with histopathological confirmation, healthcare providers can ensure accurate prognoses and appropriate management strategies, ultimately improving patient outcomes. CONCLUSIONS: This case serves as a reminder of the need to remain vigilant, open-minded, and aware of cognitive biases when confronted with uncommon clinical presentations, emphasizing the significance of early recognition and prompt evaluation in achieving optimal patient care.
PMID: 38387019
ISSN: 2194-802X
CID: 5634472

Demystifying AI: Current State and Future Role in Medical Education Assessment

Turner, Laurah; Hashimoto, Daniel A; Vasisht, Shubha; Schaye, Verity
Medical education assessment faces multifaceted challenges, including data complexity, resource constraints, bias, feedback translation, and educational continuity. Traditional approaches often fail to adequately address these issues, creating stressful and inequitable learning environments. This article introduces the concept of precision education, a data-driven paradigm aimed at personalizing the educational experience for each learner. It explores how artificial intelligence (AI), including its subsets machine learning (ML) and deep learning (DL), can augment this model to tackle the inherent limitations of traditional assessment methods. AI can enable proactive data collection, offering consistent and objective assessments while reducing resource burdens. It has the potential to revolutionize not only competency assessment but also participatory interventions, such as personalized coaching and predictive analytics for at-risk trainees. The article also discusses key challenges and ethical considerations in integrating AI into medical education, such as algorithmic transparency, data privacy, and the potential for bias propagation. AI's capacity to process large datasets and identify patterns allows for a more nuanced, individualized approach to medical education. It offers promising avenues not only to improve the efficiency of educational assessments but also to make them more equitable. However, the ethical and technical challenges must be diligently addressed. The article concludes that embracing AI in medical education assessment is a strategic move toward creating a more personalized, effective, and fair educational landscape. This necessitates collaborative, multidisciplinary research and ethical vigilance to ensure that the technology serves educational goals while upholding social justice and ethical integrity.
PMID: 38166201
ISSN: 1938-808X
CID: 5736952

Point-counterpoint: Time to wash away the SOAP note-Or merely rinse it?

Rodman, Adam; Schaye, Verity; Hofmann, Heather; Airan-Javia, Subha L
PMID: 37530094
ISSN: 1553-5606
CID: 5618942

The future of diagnosis - where are we going? [Editorial]

Schaye, Verity; Parsons, Andrew S; Graber, Mark L; Olson, Andrew P J
PMID: 36720463
ISSN: 2194-802X
CID: 5426702

Pharmacists can improve diagnosis and help prevent diagnostic errors

Enomoto, Kiichi; Kosaka, Chintaro; Kimura, Toru; Watanuki, Satoshi; Kurihara, Masaru; Watari, Takashi; Schaye, Verity
We present two cases that highlight the role of pharmacists in the diagnostic process and illustrate how a culture of safety and teamwork between pharmacists and physicians can help prevent diagnostic errors.
PMID: 35089657
ISSN: 2194-802X
CID: 5154882

Development and Validation of a Machine Learning Model for Automated Assessment of Resident Clinical Reasoning Documentation

Schaye, Verity; Guzman, Benedict; Burk-Rafel, Jesse; Marin, Marina; Reinstein, Ilan; Kudlowitz, David; Miller, Louis; Chun, Jonathan; Aphinyanaphongs, Yindalon
BACKGROUND: Residents receive infrequent feedback on their clinical reasoning (CR) documentation. While machine learning (ML) and natural language processing (NLP) have been used to assess CR documentation in standardized cases, no studies have described similar use in the clinical environment. OBJECTIVE: The authors developed and validated, using Kane's framework, an ML model for automated assessment of CR documentation quality in residents' admission notes. DESIGN, PARTICIPANTS, MAIN MEASURES: Internal medicine residents' and subspecialty fellows' admission notes at one medical center from July 2014 to March 2020 were extracted from the electronic health record. Using a validated CR documentation rubric, the authors rated 414 notes for the ML development dataset. Notes were truncated to isolate the relevant portion; NLP software (cTAKES) extracted disease/disorder named entities, and human review generated CR terms. The final model had three input variables and classified notes as demonstrating low- or high-quality CR documentation. The ML model was applied to a retrospective dataset (9591 notes) for human validation and data analysis. Reliability between human and ML ratings was assessed on 205 of these notes with Cohen's kappa. CR documentation quality by post-graduate year (PGY) was evaluated by the Mantel-Haenszel test of trend. KEY RESULTS: The top-performing logistic regression model had an area under the receiver operating characteristic curve of 0.88, a positive predictive value of 0.68, and an accuracy of 0.79. Cohen's kappa was 0.67. Of the 9591 notes, 31.1% demonstrated high-quality CR documentation; quality increased from 27.0% (PGY1) to 31.0% (PGY2) to 39.0% (PGY3) (p < .001 for trend). Validity evidence was collected in each domain of Kane's framework (scoring, generalization, extrapolation, and implications). CONCLUSIONS: The authors developed and validated a high-performing ML model that classifies CR documentation quality in resident admission notes in the clinical environment, a novel application of ML and NLP with many potential use cases.
PMCID:9296753
PMID: 35710676
ISSN: 1525-1497
CID: 5277902
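
The pipeline described above feeds a small number of note-derived features into a logistic regression classifier and checks agreement with Cohen's kappa. A minimal sketch under stated assumptions follows; the synthetic features stand in for the cTAKES entity counts and CR terms, and none of this is the authors' code:

```python
# Minimal sketch, not the authors' code: a three-feature logistic regression
# note classifier with AUROC and Cohen's kappa, on synthetic stand-in data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 414  # size of the human-rated development set in the abstract

# Hypothetical input variables, standing in for counts derived from cTAKES
# disease/disorder entities and clinical reasoning terms.
X = rng.poisson(lam=(3.0, 5.0, 2.0), size=(n, 3)).astype(float)
y = (X.sum(axis=1) + rng.normal(0, 2, n) > 10).astype(int)  # synthetic labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
clf = LogisticRegression().fit(X_tr, y_tr)

auroc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
kappa = cohen_kappa_score(y_te, clf.predict(X_te))  # agreement vs. reference labels
print(f"AUROC={auroc:.2f}  kappa={kappa:.2f}")
```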