Evaluation of GPT-4 ability to identify and generate patient instructions for actionable incidental radiology findings
Woo, Kar-Mun C; Simon, Gregory W; Akindutire, Olumide; Aphinyanaphongs, Yindalon; Austrian, Jonathan S; Kim, Jung G; Genes, Nicholas; Goldenring, Jacob A; Major, Vincent J; Pariente, Chloé S; Pineda, Edwin G; Kang, Stella K
OBJECTIVES/OBJECTIVE:To evaluate the proficiency of a HIPAA-compliant version of GPT-4 in identifying actionable, incidental findings from unstructured radiology reports of Emergency Department patients. To assess appropriateness of artificial intelligence (AI)-generated, patient-facing summaries of these findings. MATERIALS AND METHODS/METHODS:Radiology reports extracted from the electronic health record of a large academic medical center were manually reviewed to identify non-emergent, incidental findings with a high likelihood of requiring follow-up, further sub-stratified as "definitely actionable" (DA) or "possibly actionable-clinical correlation" (PA-CC). Instruction prompts to GPT-4 were developed and iteratively optimized using a validation set of 50 reports. The optimized prompt was then applied to a test set of 430 unseen reports. GPT-4 performance was primarily graded on accuracy in identifying either DA or PA-CC findings, and secondarily on DA findings alone. Outputs were reviewed for hallucinations. AI-generated patient-facing summaries were assessed for appropriateness via Likert scale. RESULTS:For the primary outcome (DA or PA-CC), GPT-4 achieved 99.3% recall, 73.6% precision, and 84.5% F1. For the secondary outcome (DA only), GPT-4 demonstrated 95.2% recall, 77.3% precision, and 85.3% F1. No findings were "hallucinated" outright. However, 2.8% of cases included generated text about recommendations that were inferred without specific reference. The majority of true-positive AI-generated summaries required no or only minor revision. CONCLUSION/CONCLUSIONS:GPT-4 demonstrates proficiency in detecting actionable, incidental findings after refined instruction prompting. AI-generated patient instructions were most often appropriate and only rarely included inferred recommendations. While this technology shows promise to augment diagnostics, active clinician oversight via "human-in-the-loop" workflows remains critical for clinical implementation.
PMID: 38778578
ISSN: 1527-974X
CID: 5654832
Large Language Model-Based Responses to Patients' In-Basket Messages
Small, William R; Wiesenfeld, Batia; Brandfield-Harvey, Beatrix; Jonassen, Zoe; Mandal, Soumik; Stevens, Elizabeth R; Major, Vincent J; Lostraglio, Erin; Szerencsy, Adam; Jones, Simon; Aphinyanaphongs, Yindalon; Johnson, Stephen B; Nov, Oded; Mann, Devin
IMPORTANCE/UNASSIGNED:Virtual patient-physician communications have increased since 2020 and negatively impacted primary care physician (PCP) well-being. Generative artificial intelligence (GenAI) drafts of patient messages could potentially reduce health care professional (HCP) workload and improve communication quality, but only if the drafts are considered useful. OBJECTIVES/UNASSIGNED:To assess PCPs' perceptions of GenAI drafts and to examine linguistic characteristics associated with equity and perceived empathy. DESIGN, SETTING, AND PARTICIPANTS/UNASSIGNED:This cross-sectional quality improvement study tested the hypothesis that PCPs' ratings of GenAI drafts (created using the electronic health record [EHR] standard prompts) would be equivalent to HCP-generated responses on 3 dimensions. The study was conducted at NYU Langone Health using private patient-HCP communications at 3 internal medicine practices piloting GenAI. EXPOSURES/UNASSIGNED:Randomly assigned patient messages coupled with either an HCP message or the draft GenAI response. MAIN OUTCOMES AND MEASURES/UNASSIGNED:PCPs rated responses' information content quality (eg, relevance) and communication quality (eg, verbosity), each on a Likert scale, and indicated whether they would use the draft or start anew (usable vs unusable). Branching logic further probed for empathy, personalization, and professionalism of responses. Computational linguistics methods assessed content differences in HCP vs GenAI responses, focusing on equity and empathy. RESULTS/UNASSIGNED:A total of 16 PCPs (8 [50.0%] female) reviewed 344 messages (175 GenAI drafted; 169 HCP drafted). Both GenAI and HCP responses were rated favorably. GenAI responses were rated higher for communication style than HCP responses (mean [SD], 3.70 [1.15] vs 3.38 [1.20]; P = .01, U = 12 568.5) but were similar to HCPs on information content (mean [SD], 3.53 [1.26] vs 3.41 [1.27]; P = .37; U = 13 981.0) and usable draft proportion (mean [SD], 0.69 [0.48] vs 0.65 [0.47], P = .49, t = -0.6842). Usable GenAI responses were considered more empathetic than usable HCP responses (32 of 86 [37.2%] vs 13 of 79 [16.5%]; difference, 125.5%), possibly attributable to more subjective (mean [SD], 0.54 [0.16] vs 0.31 [0.23]; P < .001; difference, 74.2%) and positive (mean [SD] polarity, 0.21 [0.14] vs 0.13 [0.25]; P = .02; difference, 61.5%) language; they were also numerically longer (mean [SD] word count, 90.5 [32.0] vs 65.4 [62.6]; difference, 38.4%), although the difference was not statistically significant (P = .07), and were more linguistically complex (mean [SD] score, 125.2 [47.8] vs 95.4 [58.8]; P = .002; difference, 31.2%). CONCLUSIONS/UNASSIGNED:In this cross-sectional study of PCP perceptions of an EHR-integrated GenAI chatbot, GenAI was found to communicate information better and with more empathy than HCPs, highlighting its potential to enhance patient-HCP communication. However, GenAI drafts were less readable than HCPs', a significant concern for patients with low health or English literacy.
PMCID:11252893
PMID: 39012633
ISSN: 2574-3805
CID: 5686582
The First Generative AI Prompt-A-Thon in Healthcare: A Novel Approach to Workforce Engagement with a Private Instance of ChatGPT
Small, William R; Malhotra, Kiran; Major, Vincent J; Wiesenfeld, Batia; Lewis, Marisa; Grover, Himanshu; Tang, Huming; Banerjee, Arnab; Jabbour, Michael J; Aphinyanaphongs, Yindalon; Testa, Paul; Austrian, Jonathan S
BACKGROUND:Healthcare crowdsourcing events (e.g., hackathons) facilitate interdisciplinary collaboration and encourage innovation. Peer-reviewed research has not yet considered a healthcare crowdsourcing event focusing on generative artificial intelligence (GenAI), which generates text in response to detailed prompts and has vast potential for improving the efficiency of healthcare organizations. Our event, the New York University Langone Health (NYULH) Prompt-a-thon, primarily sought to inspire and build AI fluency within our diverse NYULH community and to foster collaboration and innovation. Secondarily, we sought to analyze how participants' experience was influenced by their prior GenAI exposure and whether they received sample prompts during the workshop. METHODS:Executing the event required the assembly of an expert planning committee, who recruited diverse participants, anticipated technological challenges, and prepared the event. The event was composed of didactics and workshop sessions, which educated participants and allowed them to experiment with using GenAI on real healthcare data. Participants were given novel "project cards" associated with each dataset that illuminated the tasks GenAI could perform and, for a random set of teams, sample prompts to help them achieve each task (the public repository of project cards can be found at https://github.com/smallw03/NYULH-Generative-AI-Prompt-a-thon-Project-Cards). Afterwards, participants were asked to fill out a survey with 7-point Likert-style questions. RESULTS:Our event was successful in educating and inspiring hundreds of enthusiastic in-person and virtual participants across our organization on the responsible use of GenAI in a low-cost and technologically feasible manner. All participants responded positively, on average, to each of the survey questions (e.g., confidence in their ability to use and trust GenAI). Critically, participants reported a self-perceived increase in their likelihood of using and promoting colleagues' use of GenAI for their daily work. No significant differences were seen in the surveys of those who received sample prompts with their project task descriptions. CONCLUSION/CONCLUSIONS:The first healthcare Prompt-a-thon was an overwhelming success, with minimal technological failures, positive responses from diverse participants and staff, and evidence of post-event engagement. These findings will be integral to planning future events at our institution, and to others looking to engage their workforce in utilizing GenAI.
PMCID:11265701
PMID: 39042600
ISSN: 2767-3170
CID: 5686592
Development and external validation of a dynamic risk score for early prediction of cardiogenic shock in cardiac intensive care units using machine learning
Hu, Yuxuan; Lui, Albert; Goldstein, Mark; Sudarshan, Mukund; Tinsay, Andrea; Tsui, Cindy; Maidman, Samuel D; Medamana, John; Jethani, Neil; Puli, Aahlad; Nguy, Vuthy; Aphinyanaphongs, Yindalon; Kiefer, Nicholas; Smilowitz, Nathaniel R; Horowitz, James; Ahuja, Tania; Fishman, Glenn I; Hochman, Judith; Katz, Stuart; Bernard, Samuel; Ranganath, Rajesh
BACKGROUND:Myocardial infarction and heart failure are major cardiovascular diseases that affect millions of people in the US, with morbidity and mortality being highest among patients who develop cardiogenic shock. Early recognition of cardiogenic shock allows prompt implementation of treatment measures. Our objective was to develop a new dynamic risk score, called CShock, to improve early detection of cardiogenic shock in the cardiac intensive care unit (ICU). METHODS:We developed and externally validated a deep learning-based risk stratification tool, called CShock, for patients admitted to the cardiac ICU with acute decompensated heart failure and/or myocardial infarction to predict onset of cardiogenic shock. We prepared a cardiac ICU dataset from the MIMIC-III database by annotating it with physician-adjudicated outcomes. This dataset, which consisted of 1500 patients (204 with cardiogenic/mixed shock), was then used to train CShock. The features used to train the model included patient demographics, cardiac ICU admission diagnoses, routinely measured laboratory values and vital signs, and relevant features manually extracted from echocardiogram and left heart catheterization reports. We externally validated the risk model on the New York University (NYU) Langone Health cardiac ICU database, which was also annotated with physician-adjudicated outcomes. The external validation cohort consisted of 131 patients, 25 of whom experienced cardiogenic/mixed shock. RESULTS:CShock achieved an area under the receiver operating characteristic curve (AUROC) of 0.821 (95% CI 0.792-0.850). CShock was externally validated in the more contemporary NYU cohort and achieved an AUROC of 0.800 (95% CI 0.717-0.884), demonstrating its generalizability to other cardiac ICUs. Based on Shapley values, an elevated heart rate was the most predictive feature of cardiogenic shock development. The remaining top 10 predictors were an admission diagnosis of myocardial infarction with ST-segment elevation, an admission diagnosis of acute decompensated heart failure, Braden Scale, Glasgow Coma Scale, blood urea nitrogen, systolic blood pressure, serum chloride, serum sodium, and arterial blood pH. CONCLUSIONS:The novel CShock score has the potential to provide automated detection and early warning for cardiogenic shock and improve outcomes for the millions of patients who suffer from myocardial infarction and heart failure.
PMID: 38518758
ISSN: 2048-8734
CID: 5640892
QTNet: Predicting Drug-Induced QT Prolongation With Artificial Intelligence-Enabled Electrocardiograms
Zhang, Hao; Tarabanis, Constantine; Jethani, Neil; Goldstein, Mark; Smith, Silas; Chinitz, Larry; Ranganath, Rajesh; Aphinyanaphongs, Yindalon; Jankelson, Lior
BACKGROUND:Prediction of drug-induced long QT syndrome (diLQTS) is of critical importance given its association with torsades de pointes. There is no reliable method for the outpatient prediction of diLQTS. OBJECTIVES/OBJECTIVE:This study sought to evaluate the use of a convolutional neural network (CNN) applied to electrocardiograms (ECGs) to predict diLQTS in an outpatient population. METHODS:We identified all adult outpatients newly prescribed a QT-prolonging medication between January 1, 2003, and March 31, 2022, who had a 12-lead sinus ECG in the preceding 6 months. Using risk factor data and the ECG signal as inputs, the CNN QTNet was implemented in TensorFlow to predict diLQTS. RESULTS:Models were evaluated in a held-out test dataset of 44,386 patients (57% female) with a median age of 62 years. Compared with 3 other models relying on risk factors or ECG signal or baseline QTc alone, QTNet achieved the best (P < 0.001) performance with a mean area under the curve of 0.802 (95% CI: 0.786-0.818). In a survival analysis, QTNet also had the highest inverse probability of censorship-weighted area under the receiver-operating characteristic curve at day 2 (0.875; 95% CI: 0.848-0.904) and up to 6 months. In a subgroup analysis, QTNet performed best among males and patients ≤50 years or with baseline QTc <450 ms. In an external validation cohort of solely suburban outpatient practices, QTNet similarly maintained the highest predictive performance. CONCLUSIONS:An ECG-based CNN can accurately predict diLQTS in the outpatient setting while maintaining its predictive performance over time. In the outpatient setting, our model could identify higher-risk individuals who would benefit from closer monitoring.
PMID: 38703162
ISSN: 2405-5018
CID: 5658252
Generative Artificial Intelligence to Transform Inpatient Discharge Summaries to Patient-Friendly Language and Format
Zaretsky, Jonah; Kim, Jeong Min; Baskharoun, Samuel; Zhao, Yunan; Austrian, Jonathan; Aphinyanaphongs, Yindalon; Gupta, Ravi; Blecker, Saul B; Feldman, Jonah
IMPORTANCE/UNASSIGNED:By law, patients have immediate access to discharge notes in their medical records. Technical language and abbreviations make notes difficult to read and understand for a typical patient. Large language models (LLMs [eg, GPT-4]) have the potential to transform these notes into patient-friendly language and format. OBJECTIVE/UNASSIGNED:To determine whether an LLM can transform discharge summaries into a format that is more readable and understandable. DESIGN, SETTING, AND PARTICIPANTS/UNASSIGNED:This cross-sectional study evaluated a sample of the discharge summaries of adult patients discharged from the General Internal Medicine service at NYU (New York University) Langone Health from June 1 to 30, 2023. Patients discharged as deceased were excluded. All discharge summaries were processed by the LLM between July 26 and August 5, 2023. INTERVENTIONS/UNASSIGNED:A secure Health Insurance Portability and Accountability Act-compliant platform, Microsoft Azure OpenAI, was used to transform these discharge summaries into a patient-friendly format between July 26 and August 5, 2023. MAIN OUTCOMES AND MEASURES/UNASSIGNED:Outcomes included readability as measured by Flesch-Kincaid Grade Level and understandability using Patient Education Materials Assessment Tool (PEMAT) scores. Readability and understandability of the original discharge summaries were compared with the transformed, patient-friendly discharge summaries created through the LLM. As balancing metrics, accuracy and completeness of the patient-friendly version were measured. RESULTS/UNASSIGNED:Discharge summaries of 50 patients (31 female [62.0%] and 19 male [38.0%]) were included. The median patient age was 65.5 (IQR, 59.0-77.5) years. Mean (SD) Flesch-Kincaid Grade Level was significantly lower in the patient-friendly discharge summaries (6.2 [0.5] vs 11.0 [1.5]; P < .001). PEMAT understandability scores were significantly higher for patient-friendly discharge summaries (81% vs 13%; P < .001). Two physicians reviewed each patient-friendly discharge summary for accuracy on a 6-point scale, with 54 of 100 reviews (54.0%) giving the best possible rating of 6. Summaries were rated entirely complete in 56 reviews (56.0%). Eighteen reviews noted safety concerns, mostly involving omissions, but also several inaccurate statements (termed hallucinations). CONCLUSIONS AND RELEVANCE/UNASSIGNED:The findings of this cross-sectional study of 50 discharge summaries suggest that LLMs can be used to translate discharge summaries into patient-friendly language and formats that are significantly more readable and understandable than discharge summaries as they appear in electronic health records. However, implementation will require improvements in accuracy, completeness, and safety. Given the safety concerns, initial implementation will require physician review.
PMID: 38466307
ISSN: 2574-3805
CID: 5678332
Evaluating Large Language Models in Extracting Cognitive Exam Dates and Scores
Zhang, Hao; Jethani, Neil; Jones, Simon; Genes, Nicholas; Major, Vincent J; Jaffe, Ian S; Cardillo, Anthony B; Heilenbach, Noah; Ali, Nadia Fazal; Bonanni, Luke J; Clayburn, Andrew J; Khera, Zain; Sadler, Erica C; Prasad, Jaideep; Schlacter, Jamie; Liu, Kevin; Silva, Benjamin; Montgomery, Sophie; Kim, Eric J; Lester, Jacob; Hill, Theodore M; Avoricani, Alba; Chervonski, Ethan; Davydov, James; Small, William; Chakravartty, Eesha; Grover, Himanshu; Dodson, John A; Brody, Abraham A; Aphinyanaphongs, Yindalon; Masurkar, Arjun; Razavian, Narges
IMPORTANCE/UNASSIGNED:Large language models (LLMs) are increasingly used for medical tasks. Ensuring their reliability is vital to avoid false results. Our study assesses two state-of-the-art LLMs (ChatGPT and LLaMA-2) for extracting clinical information, focusing on cognitive tests such as the MMSE and CDR. OBJECTIVE/UNASSIGNED:To evaluate ChatGPT and LLaMA-2 performance in extracting MMSE and CDR scores, including their associated dates. METHODS/UNASSIGNED:Our data consisted of 135,307 clinical notes (Jan 12th, 2010 to May 24th, 2023) mentioning MMSE, CDR, or MoCA. After applying inclusion criteria, 34,465 notes remained, of which 765 were processed by ChatGPT (GPT-4) and LLaMA-2, and 22 experts reviewed the responses. ChatGPT successfully extracted MMSE and CDR instances with dates from 742 notes. We used 20 notes for fine-tuning and training the reviewers. The remaining 722 were assigned to reviewers, of which 309 were each assigned to two reviewers simultaneously. Inter-rater agreement (Fleiss' kappa), precision, recall, true/false-negative rates, and accuracy were calculated. Our study follows TRIPOD reporting guidelines for model validation. RESULTS/UNASSIGNED:For MMSE information extraction, ChatGPT (vs. LLaMA-2) achieved accuracy of 83% (vs. 66.4%), sensitivity of 89.7% (vs. 69.9%), true-negative rates of 96% (vs. 60.0%), and precision of 82.7% (vs. 62.2%). For CDR information extraction, results were lower overall, with accuracy of 87.1% (vs. 74.5%), sensitivity of 84.3% (vs. 39.7%), true-negative rates of 99.8% (vs. 98.4%), and precision of 48.3% (vs. 16.1%). We qualitatively evaluated the MMSE errors of ChatGPT and LLaMA-2 on double-reviewed notes. LLaMA-2 errors included 27 cases of total hallucination, 19 cases of reporting other scores instead of MMSE, 25 missed scores, and 23 cases of reporting only the wrong date. In comparison, ChatGPT's errors included only 3 cases of total hallucination, 17 cases of wrong test reported instead of MMSE, and 19 cases of reporting a wrong date. CONCLUSIONS/UNASSIGNED:In this diagnostic/prognostic study of ChatGPT and LLaMA-2 for extracting cognitive exam dates and scores from clinical notes, ChatGPT exhibited high accuracy, with better performance compared to LLaMA-2. The use of LLMs could benefit dementia research and clinical care by identifying patients eligible for treatment initiation or clinical trial enrollment. Rigorous evaluation of LLMs is crucial to understanding their capabilities and limitations.
PMCID:10888985
PMID: 38405784
CID: 5722422
Seeing Beyond Borders: Evaluating LLMs in Multilingual Ophthalmological Question Answering
Chapter by: Restrepo, David; Nakayama, Luis Filipe; Dychiao, Robyn Gayle; Wu, Chenwei; McCoy, Liam G.; Artiaga, Jose Carlo; Cobanaj, Marisa; Matos, Joao; Gallifant, Jack; Bitterman, Danielle S.; Ferrer, Vincenz; Aphinyanaphongs, Yindalon; Celi, Leo Anthony
in: Proceedings - 2024 IEEE 12th International Conference on Healthcare Informatics, ICHI 2024
[S.l.] : Institute of Electrical and Electronics Engineers Inc., 2024
pp. 565-566
ISBN: 9798350383737
CID: 5716522
Predicting Robotic Hysterectomy Incision Time: Optimizing Surgical Scheduling with Machine Learning
Shah, Vaishali; Yung, Halley C; Yang, Jie; Zaslavsky, Justin; Algarroba, Gabriela N; Pullano, Alyssa; Karpel, Hannah C; Munoz, Nicole; Aphinyanaphongs, Yindalon; Saraceni, Mark; Shah, Paresh; Jones, Simon; Huang, Kathy
BACKGROUND AND OBJECTIVES/UNASSIGNED:Operating rooms (ORs) are critical for hospital revenue and cost management, with utilization efficiency directly affecting financial outcomes. Traditional surgical scheduling often results in suboptimal OR use. We aimed to build a machine learning (ML) model to predict incision times for robotic-assisted hysterectomies, enhancing scheduling accuracy and hospital finances. METHODS/UNASSIGNED:A retrospective study was conducted using data from robotic-assisted hysterectomy cases performed between January 2017 and April 2021 across 3 hospitals within a large academic health system. Cases were filtered for surgeries performed by high-volume surgeons and those with an incision time of under 3 hours (n = 2,702). Features influencing incision time were extracted from electronic medical records and used to train 5 ML models (linear ridge regression, random forest, XGBoost, CatBoost, and explainable boosting machine [EBM]). Model performance was evaluated using a dynamic monthly update process and novel metrics such as wait-time blocks and excess-time blocks. RESULTS/UNASSIGNED:The best-performing ML model significantly reduced excess-time blocks relative to traditional scheduling (P < .001, 95% CI [-329 to -89]), translating to approximately 52 hours over the 51-month study period. The model predicted more surgeries within a 15% range of the true incision time compared to traditional methods. Influential features included surgeon experience, number of additional procedures, body mass index (BMI), and uterine size. CONCLUSION/UNASSIGNED:The ML model enhanced the prediction of incision times for robotic-assisted hysterectomies, providing a potential solution to reduce OR underutilization and increase surgical throughput and hospital revenue.
PMCID:11741200
PMID: 39831273
ISSN: 1938-3797
CID: 5778432
Marketing and US Food and Drug Administration Clearance of Artificial Intelligence and Machine Learning Enabled Software in and as Medical Devices: A Systematic Review
Clark, Phoebe; Kim, Jayne; Aphinyanaphongs, Yindalon
IMPORTANCE:The marketing of health care devices enabled for use with artificial intelligence (AI) or machine learning (ML) is regulated in the US by the US Food and Drug Administration (FDA), which is responsible for approving and regulating medical devices. Currently, there are no uniform guidelines set by the FDA to regulate AI- or ML-enabled medical devices, and discrepancies between FDA-approved indications for use and device marketing require articulation. OBJECTIVE:To explore any discrepancy between marketing and 510(k) clearance of AI- or ML-enabled medical devices. EVIDENCE REVIEW:This systematic review was a manually conducted survey of 510(k) approval summaries and accompanying marketing materials of devices approved between November 2021 and March 2022, conducted between March and November 2022, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) reporting guideline. Analysis focused on the prevalence of discrepancies between marketing and certification material for AI/ML-enabled medical devices. FINDINGS:A total of 119 FDA 510(k) clearance summaries were analyzed in tandem with their respective marketing materials. The devices were taxonomized into 3 individual categories of adherent, contentious, and discrepant devices. A total of 15 devices (12.61%) were considered discrepant, 8 devices (6.72%) were considered contentious, and 96 devices (84.03%) were consistent between marketing and FDA 510(k) clearance summaries. Most devices were from the radiological approval committees (75 devices [82.35%]), with 62 of these devices (82.67%) adherent, 3 (4.00%) contentious, and 10 (13.33%) discrepant; followed by the cardiovascular device approval committee (23 devices [19.33%]), with 19 of these devices (82.61%) considered adherent, 2 (8.70%) contentious, and 2 (8.70%) discrepant. The difference between these 3 categories in cardiovascular and radiological devices was statistically significant (P < .001). CONCLUSIONS AND RELEVANCE:In this systematic review, low adherence rates within committees were observed most often in committees with few AI- or ML-enabled devices, and discrepancies between clearance documentation and marketing material were present in one-fifth of the devices surveyed.
PMID: 37405771
ISSN: 2574-3805
CID: 5536832