Searched for: person:aphiny01, in-biosketch:yes
Total Results: 92


Generative Artificial Intelligence to Transform Inpatient Discharge Summaries to Patient-Friendly Language and Format

Zaretsky, Jonah; Kim, Jeong Min; Baskharoun, Samuel; Zhao, Yunan; Austrian, Jonathan; Aphinyanaphongs, Yindalon; Gupta, Ravi; Blecker, Saul B; Feldman, Jonah
IMPORTANCE: By law, patients have immediate access to discharge notes in their medical records. Technical language and abbreviations make notes difficult to read and understand for a typical patient. Large language models (LLMs [eg, GPT-4]) have the potential to transform these notes into patient-friendly language and format.
OBJECTIVE: To determine whether an LLM can transform discharge summaries into a format that is more readable and understandable.
DESIGN, SETTING, AND PARTICIPANTS: This cross-sectional study evaluated a sample of the discharge summaries of adult patients discharged from the General Internal Medicine service at NYU (New York University) Langone Health from June 1 to 30, 2023. Patients discharged as deceased were excluded. All discharge summaries were processed by the LLM between July 26 and August 5, 2023.
INTERVENTIONS: A secure Health Insurance Portability and Accountability Act-compliant platform, Microsoft Azure OpenAI, was used to transform these discharge summaries into a patient-friendly format between July 26 and August 5, 2023.
MAIN OUTCOMES AND MEASURES: Outcomes included readability as measured by Flesch-Kincaid Grade Level and understandability using Patient Education Materials Assessment Tool (PEMAT) scores. Readability and understandability of the original discharge summaries were compared with the transformed, patient-friendly discharge summaries created through the LLM. As balancing metrics, accuracy and completeness of the patient-friendly version were measured.
RESULTS: Discharge summaries of 50 patients (31 female [62.0%] and 19 male [38.0%]) were included. The median patient age was 65.5 (IQR, 59.0-77.5) years. Mean (SD) Flesch-Kincaid Grade Level was significantly lower in the patient-friendly discharge summaries (6.2 [0.5] vs 11.0 [1.5]; P < .001). PEMAT understandability scores were significantly higher for patient-friendly discharge summaries (81% vs 13%; P < .001). Two physicians reviewed each patient-friendly discharge summary for accuracy on a 6-point scale, with 54 of 100 reviews (54.0%) giving the best possible rating of 6. Summaries were rated entirely complete in 56 reviews (56.0%). Eighteen reviews noted safety concerns, mostly involving omissions, but also several inaccurate statements (termed hallucinations).
CONCLUSIONS AND RELEVANCE: The findings of this cross-sectional study of 50 discharge summaries suggest that LLMs can be used to translate discharge summaries into patient-friendly language and formats that are significantly more readable and understandable than discharge summaries as they appear in electronic health records. However, implementation will require improvements in accuracy, completeness, and safety. Given the safety concerns, initial implementation will require physician review.
PMID: 38466307
ISSN: 2574-3805
CID: 5678332
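
The readability outcome above is the Flesch-Kincaid Grade Level, computed from word, sentence, and syllable counts. A minimal sketch in Python, using a rough vowel-group syllable heuristic (counts will differ slightly from validated readability tools, so treat this as illustrative only):

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count runs of consecutive vowels (at least one per word).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    # FKGL = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) \
        + 11.8 * (syllables / len(words)) - 15.59
```

Short, plain sentences drive the grade level down, which is exactly what the patient-friendly rewrites exploit.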

Evaluating Large Language Models in Extracting Cognitive Exam Dates and Scores

Zhang, Hao; Jethani, Neil; Jones, Simon; Genes, Nicholas; Major, Vincent J; Jaffe, Ian S; Cardillo, Anthony B; Heilenbach, Noah; Ali, Nadia Fazal; Bonanni, Luke J; Clayburn, Andrew J; Khera, Zain; Sadler, Erica C; Prasad, Jaideep; Schlacter, Jamie; Liu, Kevin; Silva, Benjamin; Montgomery, Sophie; Kim, Eric J; Lester, Jacob; Hill, Theodore M; Avoricani, Alba; Chervonski, Ethan; Davydov, James; Small, William; Chakravartty, Eesha; Grover, Himanshu; Dodson, John A; Brody, Abraham A; Aphinyanaphongs, Yindalon; Masurkar, Arjun; Razavian, Narges
IMPORTANCE: Large language models (LLMs) are crucial for medical tasks. Ensuring their reliability is vital to avoid false results. Our study assesses two state-of-the-art LLMs (ChatGPT and LlaMA-2) for extracting clinical information, focusing on cognitive tests like MMSE and CDR.
OBJECTIVE: Evaluate ChatGPT and LlaMA-2 performance in extracting MMSE and CDR scores, including their associated dates.
METHODS: Our data consisted of 135,307 clinical notes (Jan 12th, 2010 to May 24th, 2023) mentioning MMSE, CDR, or MoCA. After applying inclusion criteria, 34,465 notes remained, of which 765 were processed by ChatGPT (GPT-4) and LlaMA-2, and 22 experts reviewed the responses. ChatGPT successfully extracted MMSE and CDR instances with dates from 742 notes. We used 20 notes for fine-tuning and training the reviewers. The remaining 722 were assigned to reviewers, with 309 of these assigned to two reviewers simultaneously. Inter-rater agreement (Fleiss' kappa), precision, recall, true/false negative rates, and accuracy were calculated. Our study follows TRIPOD reporting guidelines for model validation.
RESULTS: For MMSE information extraction, ChatGPT (vs. LlaMA-2) achieved accuracy of 83% (vs. 66.4%), sensitivity of 89.7% (vs. 69.9%), a true-negative rate of 96% (vs. 60.0%), and precision of 82.7% (vs. 62.2%). For CDR, the results were lower overall, with accuracy of 87.1% (vs. 74.5%), sensitivity of 84.3% (vs. 39.7%), a true-negative rate of 99.8% (vs. 98.4%), and precision of 48.3% (vs. 16.1%). We qualitatively evaluated the MMSE errors of ChatGPT and LlaMA-2 on double-reviewed notes. LlaMA-2 errors included 27 cases of total hallucination, 19 cases of reporting other scores instead of MMSE, 25 missed scores, and 23 cases of reporting only the wrong date. In comparison, ChatGPT's errors included only 3 cases of total hallucination, 17 cases of wrong test reported instead of MMSE, and 19 cases of reporting a wrong date.
CONCLUSIONS: In this diagnostic/prognostic study of ChatGPT and LlaMA-2 for extracting cognitive exam dates and scores from clinical notes, ChatGPT exhibited high accuracy, with better performance compared to LlaMA-2. The use of LLMs could benefit dementia research and clinical care by identifying eligible patients for treatment initiation or clinical trial enrollment. Rigorous evaluation of LLMs is crucial to understanding their capabilities and limitations.
PMCID:10888985
PMID: 38405784
CID: 5722422
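
The evaluation above scores extractions with precision, recall (sensitivity), true-negative rate, and accuracy, all of which follow directly from a confusion matrix over the reviewer-adjudicated notes. A minimal sketch (the counts in the usage example are illustrative, not the study's):

```python
def extraction_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard confusion-matrix metrics used to score an extraction model."""
    return {
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),              # a.k.a. sensitivity
        "true_negative_rate": tn / (tn + fp),  # a.k.a. specificity
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }
```

For example, `extraction_metrics(8, 2, 2, 88)` yields precision and recall of 0.8 and accuracy of 0.96.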

Seeing Beyond Borders: Evaluating LLMs in Multilingual Ophthalmological Question Answering

Chapter by: Restrepo, David; Nakayama, Luis Filipe; Dychiao, Robyn Gayle; Wu, Chenwei; Mccoy, Liam G.; Artiaga, Jose Carlo; Cobanaj, Marisa; Matos, Joao; Gallifant, Jack; Bitterman, Danielle S.; Ferrer, Vincenz; Aphinyanaphongs, Yindalon; Anthony Celi, Leo
in: Proceedings - 2024 IEEE 12th International Conference on Healthcare Informatics, ICHI 2024 by
[S.l.] : Institute of Electrical and Electronics Engineers Inc., 2024
pp. 565-566
ISBN: 9798350383737
CID: 5716522

Marketing and US Food and Drug Administration Clearance of Artificial Intelligence and Machine Learning Enabled Software in and as Medical Devices: A Systematic Review

Clark, Phoebe; Kim, Jayne; Aphinyanaphongs, Yindalon
IMPORTANCE:The marketing of health care devices enabled for use with artificial intelligence (AI) or machine learning (ML) is regulated in the US by the US Food and Drug Administration (FDA), which is responsible for approving and regulating medical devices. Currently, there are no uniform guidelines set by the FDA to regulate AI- or ML-enabled medical devices, and discrepancies between FDA-approved indications for use and device marketing require articulation. OBJECTIVE:To explore any discrepancy between marketing and 510(k) clearance of AI- or ML-enabled medical devices. EVIDENCE REVIEW:This systematic review was a manually conducted survey of 510(k) approval summaries and accompanying marketing materials of devices approved between November 2021 and March 2022, conducted between March and November 2022, following the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) reporting guideline. Analysis focused on the prevalence of discrepancies between marketing and certification material for AI/ML enabled medical devices. FINDINGS:A total of 119 FDA 510(k) clearance summaries were analyzed in tandem with their respective marketing materials. The devices were taxonomized into 3 individual categories of adherent, contentious, and discrepant devices. A total of 15 devices (12.61%) were considered discrepant, 8 devices (6.72%) were considered contentious, and 96 devices (84.03%) were consistent between marketing and FDA 510(k) clearance summaries. Most devices were from the radiological approval committees (75 devices [82.35%]), with 62 of these devices (82.67%) adherent, 3 (4.00%) contentious, and 10 (13.33%) discrepant; followed by the cardiovascular device approval committee (23 devices [19.33%]), with 19 of these devices (82.61%) considered adherent, 2 contentious (8.70%) and 2 discrepant (8.70%). The difference between these 3 categories in cardiovascular and radiological devices was statistically significant (P < .001). 
CONCLUSIONS AND RELEVANCE:In this systematic review, low adherence rates within committees were observed most often in committees with few AI- or ML-enabled devices, and discrepancies between clearance documentation and marketing material were present in one-fifth of devices surveyed.
PMID: 37405771
ISSN: 2574-3805
CID: 5536832

Health system-scale language models are all-purpose prediction engines

Jiang, Lavender Yao; Liu, Xujin Chris; Nejatian, Nima Pour; Nasir-Moin, Mustafa; Wang, Duo; Abidin, Anas; Eaton, Kevin; Riina, Howard Antony; Laufer, Ilya; Punjabi, Paawan; Miceli, Madeline; Kim, Nora C; Orillac, Cordelia; Schnurman, Zane; Livia, Christopher; Weiss, Hannah; Kurland, David; Neifert, Sean; Dastagirzada, Yosef; Kondziolka, Douglas; Cheung, Alexander T M; Yang, Grace; Cao, Ming; Flores, Mona; Costa, Anthony B; Aphinyanaphongs, Yindalon; Cho, Kyunghyun; Oermann, Eric Karl
Physicians make critical time-constrained decisions every day. Clinical predictive models can help physicians and administrators make decisions by forecasting clinical and operational events. Existing structured data-based clinical predictive models have limited use in everyday practice owing to complexity in data processing, as well as model development and deployment [1-3]. Here we show that unstructured clinical notes from the electronic health record can enable the training of clinical language models, which can be used as all-purpose clinical predictive engines with low-resistance development and deployment. Our approach leverages recent advances in natural language processing [4,5] to train a large language model for medical language (NYUTron) and subsequently fine-tune it across a wide range of clinical and operational predictive tasks. We evaluated our approach within our health system for five such tasks: 30-day all-cause readmission prediction, in-hospital mortality prediction, comorbidity index prediction, length of stay prediction, and insurance denial prediction. We show that NYUTron has an area under the curve (AUC) of 78.7-94.9%, with an improvement of 5.36-14.7% in the AUC compared with traditional models. We additionally demonstrate the benefits of pretraining with clinical text, the potential for increasing generalizability to different sites through fine-tuning and the full deployment of our system in a prospective, single-arm trial. These results show the potential for using clinical language models in medicine to read alongside physicians and provide guidance at the point of care.
PMCID:10338337
PMID: 37286606
ISSN: 1476-4687
CID: 5536672
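
NYUTron's headline metric is the area under the ROC curve (AUC). A minimal sketch of the rank-based (Mann-Whitney) formulation, which equals the probability that a randomly chosen positive case scores above a randomly chosen negative one (ties count half):

```python
def roc_auc(labels, scores):
    """AUC as the fraction of positive/negative pairs ranked correctly."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This pairwise form is O(n^2) and meant only to make the metric concrete; production code would use a sorted-rank implementation or a library routine.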

Methods and Impact for Using Federated Learning to Collaborate on Clinical Research

Cheung, Alexander T M; Nasir-Moin, Mustafa; Fred Kwon, Young Joon; Guan, Jiahui; Liu, Chris; Jiang, Lavender; Raimondo, Christian; Chotai, Silky; Chambless, Lola; Ahmad, Hasan S; Chauhan, Daksh; Yoon, Jang W; Hollon, Todd; Buch, Vivek; Kondziolka, Douglas; Chen, Dinah; Al-Aswad, Lama A; Aphinyanaphongs, Yindalon; Oermann, Eric Karl
BACKGROUND:The development of accurate machine learning algorithms requires sufficient quantities of diverse data. This poses a challenge in health care because of the sensitive and siloed nature of biomedical information. Decentralized algorithms through federated learning (FL) avoid data aggregation by instead distributing algorithms to the data before centrally updating one global model. OBJECTIVE:To establish a multicenter collaboration and assess the feasibility of using FL to train machine learning models for intracranial hemorrhage (ICH) detection without sharing data between sites. METHODS:Five neurosurgery departments across the United States collaborated to establish a federated network and train a convolutional neural network to detect ICH on computed tomography scans. The global FL model was benchmarked against a standard, centrally trained model using a held-out data set and was compared against locally trained models using site data. RESULTS:A federated network of practicing neurosurgeon scientists was successfully initiated to train a model for predicting ICH. The FL model achieved an area under the ROC curve of 0.9487 (95% CI 0.9471-0.9503) when predicting all subtypes of ICH compared with a benchmark (non-FL) area under the ROC curve of 0.9753 (95% CI 0.9742-0.9764), although performance varied by subtype. The FL model consistently achieved top three performance when validated on any site's data, suggesting improved generalizability. A qualitative survey described the experience of participants in the federated network. CONCLUSIONS:This study demonstrates the feasibility of implementing a federated network for multi-institutional collaboration among clinicians and using FL to conduct machine learning research, thereby opening a new paradigm for neurosurgical collaboration.
PMID: 36399428
ISSN: 1524-4040
CID: 5385002
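
Federated learning of the kind described above typically aggregates site models by weighted parameter averaging (FedAvg-style): each site trains locally, then the coordinator averages parameters weighted by each site's sample count. A minimal sketch over flat parameter lists; the study trains a convolutional network, but the aggregation step is the same idea:

```python
def federated_average(site_params, site_sizes):
    """Average each parameter across sites, weighted by site sample counts."""
    total = sum(site_sizes)
    n_params = len(site_params[0])
    return [
        sum(params[i] * n for params, n in zip(site_params, site_sizes)) / total
        for i in range(n_params)
    ]
```

Because only parameters (not patient data) cross institutional boundaries, each site's raw imaging never leaves its firewall.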

Enabling AI-Augmented Clinical Workflows by Accessing Patient Data in Real-Time with FHIR

Chapter by: Major, Vincent J.; Wang, Walter; Aphinyanaphongs, Yindalon
in: Proceedings - 2023 IEEE 11th International Conference on Healthcare Informatics, ICHI 2023 by
[S.l.] : Institute of Electrical and Electronics Engineers Inc., 2023
pp. 531-533
ISBN: 9798350302639
CID: 5630942
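
The FHIR resources referenced above are JSON documents with a standardized shape, so pulling values into a real-time model pipeline reduces to walking that structure. A minimal sketch against FHIR R4's Bundle/Observation layout (the field names follow the spec; the sample data in the test is invented):

```python
def extract_observations(bundle: dict) -> list:
    """Pull (code, value, unit) triples from Observation resources in a FHIR Bundle."""
    out = []
    for entry in bundle.get("entry", []):
        resource = entry.get("resource", {})
        if resource.get("resourceType") != "Observation":
            continue  # skip Patient, Encounter, and other resource types
        code = resource.get("code", {}).get("text")
        quantity = resource.get("valueQuantity", {})
        out.append((code, quantity.get("value"), quantity.get("unit")))
    return out
```

In a live workflow the bundle would come from an authenticated request to the EHR's FHIR endpoint rather than a local dict.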

Ten Years of Health Informatics Education for Physicians

Chapter by: Major, Vincent J.; Plottel, Claudia S.; Aphinyanaphongs, Yindalon
in: Proceedings - 2023 IEEE 11th International Conference on Healthcare Informatics, ICHI 2023 by
[S.l.] : Institute of Electrical and Electronics Engineers Inc., 2023
pp. 637-644
ISBN: 9798350302639
CID: 5630952

AI model transferability in healthcare: a sociotechnical perspective

Wiesenfeld, Batia Mishan; Aphinyanaphongs, Yin; Nov, Oded
SCOPUS:85139986644
ISSN: 2522-5839
CID: 5350312

Predicting Post-Operative C. difficile Infection (CDI) With Automated Machine Learning (AutoML) Algorithms Using the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) Database [Meeting Abstract]

Thangirala, A; Li, T; Abaza, E; Aphinyanaphongs, Y; Axelrad, J; Chen, J; Kelleher, A; Oeding, J; Hu, E; Martin, J; Katz, G; Brejt, S; Castillo, G; Ostberg, N; Kan, K
Introduction: Clostridium difficile infection (CDI) is one of the most common hospital-acquired infections leading to prolonged hospitalization and significant morbidity. Only a few prior studies have developed predictive risk models for CDI and all but one have utilized logistic regression (LR) models to identify risk factors. Automated machine learning (AutoML) programs consistently outperform standard LR models in non-medical contexts. This study aims to investigate the utility of AutoML methods in developing a model for post-operative CDI prediction.
Method(s): We used an AutoML system developed by Amazon, Autogluon v0.3.1, to evaluate the prediction accuracy of post-surgical CDI using the 2016-2018 ACS NSQIP database. A total of 3,049,617 patients and 79 pre-operative features were included in the model. Post-operative CDI was defined as CDI within 30 days of surgery. Models were trained for 4 hours to optimize performance on the Brier score, with lower being better. Validation of all performance metrics was done using the 2019 NSQIP database.
Result(s): 0.36% of the patients (n = 11,001) developed post-operative CDI. Brier scores were calculated for each model, with the top-performing model being an ensembled neural net model with a Brier score of 0.0027 on the test set. The corresponding AUROC and AUC-PR were 0.840 and 0.015, respectively (Figure).
Conclusion(s): The models generated via AutoML to predict post-operative CDI had discriminatory characteristics greater than or equal to those of models reported in the literature. Future post-operative CDI models may benefit from automated machine learning techniques.
EMBASE:641287886
ISSN: 1572-0241
CID: 5514802
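
The models above were tuned to minimize the Brier score: the mean squared difference between the predicted probability and the 0/1 outcome, where lower is better and a perfect forecaster scores 0. A minimal sketch:

```python
def brier_score(probabilities, outcomes):
    """Mean squared error between predicted probabilities and binary outcomes."""
    return sum((p - y) ** 2 for p, y in zip(probabilities, outcomes)) / len(outcomes)
```

With a 0.36% event rate, even a model that predicts a tiny constant probability achieves a small Brier score, which is why the abstract also reports AUROC and AUC-PR.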