Searched for: person:aphiny01
Total Results: 80

Development and external validation of a dynamic risk score for early prediction of cardiogenic shock in cardiac intensive care units using machine learning

Hu, Yuxuan; Lui, Albert; Goldstein, Mark; Sudarshan, Mukund; Tinsay, Andrea; Tsui, Cindy; Maidman, Samuel D; Medamana, John; Jethani, Neil; Puli, Aahlad; Nguy, Vuthy; Aphinyanaphongs, Yindalon; Kiefer, Nicholas; Smilowitz, Nathaniel R; Horowitz, James; Ahuja, Tania; Fishman, Glenn I; Hochman, Judith; Katz, Stuart; Bernard, Samuel; Ranganath, Rajesh
BACKGROUND:Myocardial infarction and heart failure are major cardiovascular diseases that affect millions of people in the US, with morbidity and mortality highest among patients who develop cardiogenic shock. Early recognition of cardiogenic shock allows prompt implementation of treatment measures. Our objective was to develop a new dynamic risk score, called CShock, to improve early detection of cardiogenic shock in the cardiac intensive care unit (ICU). METHODS:We developed and externally validated a deep learning-based risk stratification tool, called CShock, for patients admitted to the cardiac ICU with acute decompensated heart failure and/or myocardial infarction, to predict the onset of cardiogenic shock. We prepared a cardiac ICU dataset from the MIMIC-III database, annotated with physician-adjudicated outcomes. This dataset, which consisted of 1500 patients (204 with cardiogenic/mixed shock), was then used to train CShock. The features used to train the model included patient demographics, cardiac ICU admission diagnoses, routinely measured laboratory values and vital signs, and relevant features manually extracted from echocardiogram and left heart catheterization reports. We externally validated the risk model on the New York University (NYU) Langone Health cardiac ICU database, which was also annotated with physician-adjudicated outcomes. The external validation cohort consisted of 131 patients, 25 of whom experienced cardiogenic/mixed shock. RESULTS:CShock achieved an area under the receiver operating characteristic curve (AUROC) of 0.821 (95% CI 0.792-0.850). CShock was externally validated in the more contemporary NYU cohort and achieved an AUROC of 0.800 (95% CI 0.717-0.884), demonstrating its generalizability to other cardiac ICUs. Based on Shapley values, an elevated heart rate was the feature most predictive of cardiogenic shock development.
The other top-ten predictors were an admission diagnosis of myocardial infarction with ST-segment elevation, an admission diagnosis of acute decompensated heart failure, Braden Scale, Glasgow Coma Scale, blood urea nitrogen, systolic blood pressure, serum chloride, serum sodium, and arterial blood pH. CONCLUSIONS:The novel CShock score has the potential to provide automated detection and early warning of cardiogenic shock and to improve outcomes for the millions of patients who suffer from myocardial infarction and heart failure.
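The AUROC confidence intervals reported above are the kind of estimate commonly obtained by bootstrap resampling of the test set. As a hedged illustration only (this is not the authors' code; the function names and the choice of a percentile bootstrap are assumptions), a minimal pure-Python sketch:

```python
import random

def auroc(labels, scores):
    """AUROC via the Mann-Whitney statistic: the probability that a
    randomly chosen positive case outranks a randomly chosen negative."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_ci(labels, scores, n_boot=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the AUROC."""
    rng = random.Random(seed)
    n = len(labels)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        ys = [labels[i] for i in idx]
        if len(set(ys)) < 2:  # a resample must contain both classes
            continue
        stats.append(auroc(ys, [scores[i] for i in idx]))
    stats.sort()
    lo = stats[int(alpha / 2 * len(stats))]
    hi = stats[int((1 - alpha / 2) * len(stats)) - 1]
    return lo, hi
```

In practice a library implementation (e.g. scikit-learn's `roc_auc_score`) would replace the hand-rolled statistic; the sketch just makes the ranking interpretation of the AUROC explicit.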
PMID: 38518758
ISSN: 2048-8734
CID: 5640892

Marketing and US Food and Drug Administration Clearance of Artificial Intelligence and Machine Learning Enabled Software in and as Medical Devices: A Systematic Review

Clark, Phoebe; Kim, Jayne; Aphinyanaphongs, Yindalon
IMPORTANCE:The marketing of health care devices enabled for use with artificial intelligence (AI) or machine learning (ML) is regulated in the US by the US Food and Drug Administration (FDA), which is responsible for approving and regulating medical devices. Currently, there are no uniform guidelines set by the FDA to regulate AI- or ML-enabled medical devices, and discrepancies between FDA-approved indications for use and device marketing require articulation. OBJECTIVE:To explore any discrepancy between marketing and 510(k) clearance of AI- or ML-enabled medical devices. EVIDENCE REVIEW:This systematic review was a manually conducted survey of 510(k) approval summaries and accompanying marketing materials of devices approved between November 2021 and March 2022, conducted between March and November 2022, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) reporting guideline. Analysis focused on the prevalence of discrepancies between marketing and certification material for AI/ML-enabled medical devices. FINDINGS:A total of 119 FDA 510(k) clearance summaries were analyzed in tandem with their respective marketing materials. The devices were taxonomized into 3 individual categories of adherent, contentious, and discrepant devices. A total of 15 devices (12.61%) were considered discrepant, 8 devices (6.72%) were considered contentious, and 96 devices (84.03%) were consistent between marketing and FDA 510(k) clearance summaries. Most devices were from the radiological approval committees (75 devices [82.35%]), with 62 of these devices (82.67%) adherent, 3 (4.00%) contentious, and 10 (13.33%) discrepant; followed by the cardiovascular device approval committee (23 devices [19.33%]), with 19 of these devices (82.61%) considered adherent, 2 contentious (8.70%), and 2 discrepant (8.70%). The difference between these 3 categories in cardiovascular and radiological devices was statistically significant (P < .001).
CONCLUSIONS AND RELEVANCE:In this systematic review, low adherence rates within committees were observed most often in committees with few AI- or ML-enabled devices, and discrepancies between clearance documentation and marketing material were present in one-fifth of the devices surveyed.
PMID: 37405771
ISSN: 2574-3805
CID: 5536832

Health system-scale language models are all-purpose prediction engines

Jiang, Lavender Yao; Liu, Xujin Chris; Nejatian, Nima Pour; Nasir-Moin, Mustafa; Wang, Duo; Abidin, Anas; Eaton, Kevin; Riina, Howard Antony; Laufer, Ilya; Punjabi, Paawan; Miceli, Madeline; Kim, Nora C; Orillac, Cordelia; Schnurman, Zane; Livia, Christopher; Weiss, Hannah; Kurland, David; Neifert, Sean; Dastagirzada, Yosef; Kondziolka, Douglas; Cheung, Alexander T M; Yang, Grace; Cao, Ming; Flores, Mona; Costa, Anthony B; Aphinyanaphongs, Yindalon; Cho, Kyunghyun; Oermann, Eric Karl
Physicians make critical time-constrained decisions every day. Clinical predictive models can help physicians and administrators make decisions by forecasting clinical and operational events. Existing structured data-based clinical predictive models have limited use in everyday practice owing to complexity in data processing, as well as model development and deployment [1-3]. Here we show that unstructured clinical notes from the electronic health record can enable the training of clinical language models, which can be used as all-purpose clinical predictive engines with low-resistance development and deployment. Our approach leverages recent advances in natural language processing [4,5] to train a large language model for medical language (NYUTron) and subsequently fine-tune it across a wide range of clinical and operational predictive tasks. We evaluated our approach within our health system for five such tasks: 30-day all-cause readmission prediction, in-hospital mortality prediction, comorbidity index prediction, length of stay prediction, and insurance denial prediction. We show that NYUTron has an area under the curve (AUC) of 78.7-94.9%, with an improvement of 5.36-14.7% in the AUC compared with traditional models. We additionally demonstrate the benefits of pretraining with clinical text, the potential for increasing generalizability to different sites through fine-tuning and the full deployment of our system in a prospective, single-arm trial. These results show the potential for using clinical language models in medicine to read alongside physicians and provide guidance at the point of care.
PMCID:10338337
PMID: 37286606
ISSN: 1476-4687
CID: 5536672

Methods and Impact for Using Federated Learning to Collaborate on Clinical Research

Cheung, Alexander T M; Nasir-Moin, Mustafa; Fred Kwon, Young Joon; Guan, Jiahui; Liu, Chris; Jiang, Lavender; Raimondo, Christian; Chotai, Silky; Chambless, Lola; Ahmad, Hasan S; Chauhan, Daksh; Yoon, Jang W; Hollon, Todd; Buch, Vivek; Kondziolka, Douglas; Chen, Dinah; Al-Aswad, Lama A; Aphinyanaphongs, Yindalon; Oermann, Eric Karl
BACKGROUND:The development of accurate machine learning algorithms requires sufficient quantities of diverse data. This poses a challenge in health care because of the sensitive and siloed nature of biomedical information. Decentralized algorithms through federated learning (FL) avoid data aggregation by instead distributing algorithms to the data before centrally updating one global model. OBJECTIVE:To establish a multicenter collaboration and assess the feasibility of using FL to train machine learning models for intracranial hemorrhage (ICH) detection without sharing data between sites. METHODS:Five neurosurgery departments across the United States collaborated to establish a federated network and train a convolutional neural network to detect ICH on computed tomography scans. The global FL model was benchmarked against a standard, centrally trained model using a held-out data set and was compared against locally trained models using site data. RESULTS:A federated network of practicing neurosurgeon scientists was successfully initiated to train a model for predicting ICH. The FL model achieved an area under the ROC curve of 0.9487 (95% CI 0.9471-0.9503) when predicting all subtypes of ICH compared with a benchmark (non-FL) area under the ROC curve of 0.9753 (95% CI 0.9742-0.9764), although performance varied by subtype. The FL model consistently achieved top-three performance when validated on any site's data, suggesting improved generalizability. A qualitative survey described the experience of participants in the federated network. CONCLUSIONS:This study demonstrates the feasibility of implementing a federated network for multi-institutional collaboration among clinicians and using FL to conduct machine learning research, thereby opening a new paradigm for neurosurgical collaboration.
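Federated learning of the kind described here typically aggregates locally trained parameters into one global model without moving patient data between sites. The abstract does not specify the aggregation rule used; as an illustrative sketch only (the function name is hypothetical), a FedAvg-style weighted average of per-site parameters:

```python
def federated_average(site_weights, site_sizes):
    """FedAvg-style aggregation: average each parameter across sites,
    weighting every site by the number of training examples it holds.

    site_weights: list of per-site parameter vectors (one flat list per site)
    site_sizes:   list of per-site training-set sizes
    """
    total = sum(site_sizes)
    n_params = len(site_weights[0])
    return [
        sum(w[j] * s for w, s in zip(site_weights, site_sizes)) / total
        for j in range(n_params)
    ]
```

In a real federated round, each site would train locally for a few epochs, send only these parameter vectors to the coordinator, receive the averaged global model back, and repeat; the raw imaging data never leaves the site.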
PMID: 36399428
ISSN: 1524-4040
CID: 5385002

Ten Years of Health Informatics Education for Physicians

Chapter by: Major, Vincent J.; Plottel, Claudia S.; Aphinyanaphongs, Yindalon
in: Proceedings - 2023 IEEE 11th International Conference on Healthcare Informatics, ICHI 2023 by
[S.l.] : Institute of Electrical and Electronics Engineers Inc., 2023
pp. 637-644
ISBN: 9798350302639
CID: 5630952

Enabling AI-Augmented Clinical Workflows by Accessing Patient Data in Real-Time with FHIR

Chapter by: Major, Vincent J.; Wang, Walter; Aphinyanaphongs, Yindalon
in: Proceedings - 2023 IEEE 11th International Conference on Healthcare Informatics, ICHI 2023 by
[S.l.] : Institute of Electrical and Electronics Engineers Inc., 2023
pp. 531-533
ISBN: 9798350302639
CID: 5630942

AI model transferability in healthcare: a sociotechnical perspective

Wiesenfeld, Batia Mishan; Aphinyanaphongs, Yin; Nov, Oded
SCOPUS:85139986644
ISSN: 2522-5839
CID: 5350312

Predicting Post-Operative C. difficile Infection (CDI) With Automated Machine Learning (AutoML) Algorithms Using the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) Database [Meeting Abstract]

Thangirala, A; Li, T; Abaza, E; Aphinyanaphongs, Y; Axelrad, J; Chen, J; Kelleher, A; Oeding, J; Hu, E; Martin, J; Katz, G; Brejt, S; Castillo, G; Ostberg, N; Kan, K
Introduction: Clostridium difficile infection (CDI) is one of the most common hospital-acquired infections, leading to prolonged hospitalization and significant morbidity. Only a few prior studies have developed predictive risk models for CDI, and all but one have utilized logistic regression (LR) models to identify risk factors. Automated machine learning (AutoML) programs consistently outperform standard LR models in non-medical contexts. This study aims to investigate the utility of AutoML methods in developing a model for post-operative CDI prediction.
Method(s): We used an AutoML system developed by Amazon, AutoGluon v0.3.1, to evaluate the prediction accuracy of post-surgical CDI using the 2016-2018 ACS NSQIP database. A total of 3,049,617 patients and 79 pre-operative features were included in the model. Post-operative CDI was defined as CDI within 30 days of surgery. Models were trained for 4 hours to optimize performance on the Brier score (lower is better). Validation of all performance metrics was done using the 2019 NSQIP database.
Result(s): Overall, 0.36% of patients (n = 11,001) developed post-operative CDI. Brier scores were calculated for each model; the top-performing model, an ensembled neural net, had a Brier score of 0.0027 on the test set. The corresponding AUROC and AUC-PR were 0.840 and 0.015, respectively (Figure).
Conclusion(s): The models generated via AutoML to predict post-operative CDI had discriminatory characteristics greater than or equal to those of models reported in the literature. Future post-operative CDI models may benefit from automated machine learning techniques.
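The Brier score used to rank models above is simply the mean squared difference between predicted probabilities and observed outcomes. A minimal sketch (not the AutoGluon implementation; the function name is illustrative):

```python
def brier_score(labels, probs):
    """Mean squared error of probability forecasts: lower is better,
    0.0 for a perfectly confident and correct forecaster."""
    return sum((p - y) ** 2 for y, p in zip(labels, probs)) / len(labels)
```

One caveat worth noting when reading the results: with a 0.36% event rate, a model that always predicts the base-rate probability already achieves a Brier score near 0.0036, which is why the reported 0.0027 is best interpreted alongside the AUROC and AUC-PR rather than on its own.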
EMBASE:641287886
ISSN: 1572-0241
CID: 5514802

Development and Validation of a Machine Learning Model for Automated Assessment of Resident Clinical Reasoning Documentation

Schaye, Verity; Guzman, Benedict; Burk-Rafel, Jesse; Marin, Marina; Reinstein, Ilan; Kudlowitz, David; Miller, Louis; Chun, Jonathan; Aphinyanaphongs, Yindalon
BACKGROUND:Residents receive infrequent feedback on their clinical reasoning (CR) documentation. While machine learning (ML) and natural language processing (NLP) have been used to assess CR documentation in standardized cases, no studies have described similar use in the clinical environment. OBJECTIVE:The authors developed and validated, using Kane's framework, an ML model for automated assessment of CR documentation quality in residents' admission notes. DESIGN, PARTICIPANTS, AND MAIN MEASURES:Internal medicine residents' and subspecialty fellows' admission notes at one medical center from July 2014 to March 2020 were extracted from the electronic health record. Using a validated CR documentation rubric, the authors rated 414 notes for the ML development dataset. Notes were truncated to isolate the relevant portion; NLP software (cTAKES) extracted disease/disorder named entities, and human review generated CR terms. The final model had three input variables and classified notes as demonstrating low- or high-quality CR documentation. The ML model was applied to a retrospective dataset (9591 notes) for human validation and data analysis. Reliability between human and ML ratings was assessed on 205 of these notes with Cohen's kappa. CR documentation quality by post-graduate year (PGY) was evaluated by the Mantel-Haenszel test of trend. KEY RESULTS:The top-performing logistic regression model had an area under the receiver operating characteristic curve of 0.88, a positive predictive value of 0.68, and an accuracy of 0.79. Cohen's kappa was 0.67. Of the 9591 notes, 31.1% demonstrated high-quality CR documentation; quality increased from 27.0% (PGY1) to 31.0% (PGY2) to 39.0% (PGY3) (p < .001 for trend). Validity evidence was collected in each domain of Kane's framework (scoring, generalization, extrapolation, and implications).
CONCLUSIONS:The authors developed and validated a high-performing ML model that classifies CR documentation quality in resident admission notes in the clinical environment, a novel application of ML and NLP with many potential use cases.
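Cohen's kappa, used above to compare human and ML ratings, corrects raw agreement for the agreement expected by chance given each rater's label frequencies. A small illustrative implementation (an assumption, not the study's code):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items:
    kappa = (observed - expected) / (1 - expected)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Expected agreement if both raters labeled independently at their
    # own marginal rates.
    expected = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)
```

A kappa of 0.67, as reported for the 205 double-rated notes, is conventionally read as substantial agreement, well above chance but short of perfect concordance.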
PMCID:9296753
PMID: 35710676
ISSN: 1525-1497
CID: 5277902

Evaluating the Effect of a COVID-19 Predictive Model to Facilitate Discharge: A Randomized Controlled Trial

Major, Vincent J; Jones, Simon A; Razavian, Narges; Bagheri, Ashley; Mendoza, Felicia; Stadelman, Jay; Horwitz, Leora I; Austrian, Jonathan; Aphinyanaphongs, Yindalon
BACKGROUND: We previously developed and validated a predictive model to help clinicians identify hospitalized adults with coronavirus disease 2019 (COVID-19) who may be ready for discharge given their low risk of adverse events. Whether this algorithm can prompt more timely discharge for stable patients in practice is unknown. OBJECTIVE: The aim of the study was to estimate the effect of displaying risk scores on length of stay (LOS). METHODS: We integrated model output into the electronic health record (EHR) at four hospitals in one health system by displaying a green/orange/red score indicating low/moderate/high risk in a patient-list column and a larger COVID-19 summary report visible for each patient. Display of the score was pseudo-randomized 1:1 into intervention and control arms using a patient identifier passed to the model execution code. The intervention effect was assessed by comparing LOS between the intervention and control groups. Adverse safety outcomes of death, hospice, and re-presentation were tested separately and as a composite indicator. We tracked adoption and sustained use through daily counts of score displays. RESULTS: Enrolling 1,010 patients from May 15, 2020 to December 7, 2020, the trial found no detectable difference in LOS. The intervention had no impact on the safety indicators of death, hospice, or re-presentation after discharge. The scores were displayed consistently throughout the study period, but the study lacks a causally linked process measure of provider actions based on the score. Secondary analysis revealed complex dynamics in LOS over time, by primary symptom, and by hospital location. CONCLUSIONS: An AI-based COVID-19 risk score displayed passively to clinicians during routine care of hospitalized adults with COVID-19 was safe but had no detectable impact on LOS.
Health technology challenges such as insufficient adoption, nonuniform use, and limited provider trust, compounded by temporal factors of the COVID-19 pandemic, may have contributed to the null result. TRIAL REGISTRATION: ClinicalTrials.gov identifier: NCT04570488.
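Pseudo-randomization on a patient identifier, as described in the methods, can be implemented deterministically so that the same patient always lands in the same arm on every model execution. The abstract does not give the exact mechanism, so this hash-based sketch is an assumption (the salt value and function name are hypothetical):

```python
import hashlib

def assign_arm(patient_id, salt="trial-salt"):
    """Deterministic 1:1 arm assignment from a hashed patient identifier.
    Hashing with a fixed salt makes the split stable across repeated
    model runs without storing an assignment table."""
    digest = hashlib.sha256((salt + str(patient_id)).encode()).hexdigest()
    return "intervention" if int(digest, 16) % 2 == 0 else "control"
```

Because assignment is a pure function of the identifier, the EHR display logic can recompute the arm at score time rather than maintaining trial state, which suits a passively displayed intervention like this one.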
PMCID:9329139
PMID: 35896506
ISSN: 1869-0327
CID: 5276672