Generalizability of Kidney Transplant Data in Electronic Health Records - The Epic Cosmos Database versus the Scientific Registry of Transplant Recipients
Mankowski, Michal A; Bae, Sunjae; Strauss, Alexandra T; Lonze, Bonnie E; Orandi, Babak J; Stewart, Darren; Massie, Allan B; McAdams-DeMarco, Mara A; Oermann, Eric K; Habal, Marlena; Iturrate, Eduardo; Gentry, Sommer E; Segev, Dorry L; Axelrod, David
Developing real-world evidence from electronic health records (EHR) is vital to advance kidney transplantation (KT). We assessed the feasibility of studying KT using the Epic Cosmos aggregated EHR dataset, which includes 274 million unique individuals cared for in 238 U.S. health systems, by comparing it with the Scientific Registry of Transplant Recipients (SRTR). We identified 69,418 KT recipients transplanted between January 2014 and December 2022 in Cosmos (39.4% of all U.S. kidney transplants during this period). Demographics and clinical characteristics of recipients captured in Cosmos were consistent with the overall SRTR cohort. Survival estimates were generally comparable, although there were some differences in long-term survival. At 7 years post-transplant, patient survival was 80.4% in Cosmos and 77.8% in SRTR. Multivariable Cox regression showed consistent associations between clinical factors and mortality in both cohorts, with minor discrepancies in the associations of age and race with death. In summary, Cosmos provides a reliable platform for KT research, allowing EHR-level clinical granularity not available with either the transplant registry or healthcare claims. Consequently, Cosmos will enable novel analyses to improve our understanding of KT management on a national scale.
PMID: 39550008
ISSN: 1600-6143
CID: 5754062
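A minimal sketch of the kind of survival comparison described in the abstract above, using hypothetical data: the lifelines package, the file name kt_recipients.csv, and the column names (years_followup, death, age, diabetes, dialysis_years) are illustrative assumptions, not the authors' actual data or pipeline.
```python
# Minimal sketch (not the authors' code): a Kaplan-Meier estimate and a
# multivariable Cox model on a hypothetical recipient-level table, of the kind
# one might fit separately in Cosmos and SRTR to compare the two cohorts.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical columns: follow-up time in years, death indicator (1 = died),
# and a few recipient-level covariates.
df = pd.read_csv("kt_recipients.csv")  # assumed file layout

kmf = KaplanMeierFitter()
kmf.fit(df["years_followup"], event_observed=df["death"])
print(kmf.survival_function_at_times([7.0]))  # e.g., 7-year patient survival

cph = CoxPHFitter()
cph.fit(
    df[["years_followup", "death", "age", "diabetes", "dialysis_years"]],
    duration_col="years_followup",
    event_col="death",
)
cph.print_summary()  # hazard ratios to compare across cohorts
```
Fitting the same model specification in each cohort and comparing the resulting hazard ratios mirrors the consistency check reported in the abstract.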
Trials and Tribulations: Responses of ChatGPT to Patient Questions About Kidney Transplantation
Xu, Jingzhi; Mankowski, Michal; Vanterpool, Karen B; Strauss, Alexandra T; Lonze, Bonnie E; Orandi, Babak J; Stewart, Darren; Bae, Sunjae; Ali, Nicole; Stern, Jeffrey; Mattoo, Aprajita; Robalino, Ryan; Soomro, Irfana; Weldon, Elaina; Oermann, Eric K; Aphinyanaphongs, Yin; Sidoti, Carolyn; McAdams-DeMarco, Mara; Massie, Allan B; Gentry, Sommer E; Segev, Dorry L; Levan, Macey L
PMID: 39477825
ISSN: 1534-6080
CID: 5747132
Is It Really "Artificial" Intelligence?
Kondziolka, Douglas; Oermann, Eric K
PMID: 39812480
ISSN: 1524-4040
CID: 5883422
Medical large language models are vulnerable to data-poisoning attacks
Alber, Daniel Alexander; Yang, Zihao; Alyakin, Anton; Yang, Eunice; Rai, Sumedha; Valliani, Aly A; Zhang, Jeff; Rosenbaum, Gabriel R; Amend-Thomas, Ashley K; Kurland, David B; Kremer, Caroline M; Eremiev, Alexander; Negash, Bruck; Wiggan, Daniel D; Nakatsuka, Michelle A; Sangwon, Karl L; Neifert, Sean N; Khan, Hammad A; Save, Akshay Vinod; Palla, Adhith; Grin, Eric A; Hedman, Monika; Nasir-Moin, Mustafa; Liu, Xujin Chris; Jiang, Lavender Yao; Mankowski, Michal A; Segev, Dorry L; Aphinyanaphongs, Yindalon; Riina, Howard A; Golfinos, John G; Orringer, Daniel A; Kondziolka, Douglas; Oermann, Eric Karl
The adoption of large language models (LLMs) in healthcare demands a careful analysis of their potential to spread false medical knowledge. Because LLMs ingest massive volumes of data from the open Internet during training, they are potentially exposed to unverified medical knowledge that may include deliberately planted misinformation. Here, we perform a threat assessment that simulates a data-poisoning attack against The Pile, a popular dataset used for LLM development. We find that replacement of just 0.001% of training tokens with medical misinformation results in harmful models more likely to propagate medical errors. Furthermore, we discover that corrupted models match the performance of their corruption-free counterparts on open-source benchmarks routinely used to evaluate medical LLMs. Using biomedical knowledge graphs to screen medical LLM outputs, we propose a harm mitigation strategy that captures 91.9% of harmful content (F1 = 85.7%). Our algorithm provides a unique method to validate stochastically generated LLM outputs against hard-coded relationships in knowledge graphs. In view of current calls for improved data provenance and transparent LLM development, we hope to raise awareness of emergent risks from LLMs trained indiscriminately on web-scraped data, particularly in healthcare where misinformation can potentially compromise patient safety.
PMID: 39779928
ISSN: 1546-170x
CID: 5782182
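The harm-mitigation strategy above screens model outputs against hard-coded relationships in biomedical knowledge graphs. A minimal sketch of that idea follows; the toy edge set, the pre-extracted claim triples, and the exact-match test are illustrative assumptions and do not reproduce the paper's algorithm.
```python
# Minimal sketch: flag (subject, relation, object) claims from an LLM that are
# not supported by a curated knowledge graph. Extracting triples from free text
# is assumed to happen upstream.
from typing import Iterable

# Toy stand-in for a biomedical knowledge graph (e.g., curated drug-indication edges).
KNOWN_EDGES = {
    ("metformin", "treats", "type 2 diabetes"),
    ("lisinopril", "treats", "hypertension"),
}

def screen_claims(claims: Iterable[tuple[str, str, str]]) -> list[tuple[str, str, str]]:
    """Return the claims that are not supported by the knowledge graph."""
    return [claim for claim in claims if claim not in KNOWN_EDGES]

llm_claims = [
    ("metformin", "treats", "type 2 diabetes"),  # supported
    ("metformin", "treats", "hypertension"),     # unsupported -> flagged for review
]
print(screen_claims(llm_claims))
```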
Self-improving generative foundation model for synthetic medical image generation and clinical applications
Wang, Jinzhuo; Wang, Kai; Yu, Yunfang; Lu, Yuxing; Xiao, Wenchao; Sun, Zhuo; Liu, Fei; Zou, Zixing; Gao, Yuanxu; Yang, Lei; Zhou, Hong-Yu; Miao, Hanpei; Zhao, Wenting; Huang, Lisha; Zeng, Lingchao; Guo, Rui; Chong, Ieng; Deng, Boyu; Cheng, Linling; Chen, Xiaoniao; Luo, Jing; Zhu, Meng-Hua; Baptista-Hon, Daniel; Monteiro, Olivia; Li, Ming; Ke, Yu; Li, Jiahui; Zeng, Simiao; Guan, Taihua; Zeng, Jin; Xue, Kanmin; Oermann, Eric; Luo, Huiyan; Yin, Yun; Zhang, Kang; Qu, Jia
In many clinical and research settings, the scarcity of high-quality medical imaging datasets has hampered the potential of artificial intelligence (AI) clinical applications. This issue is particularly pronounced in less common conditions, underrepresented populations and emerging imaging modalities, where the availability of diverse and comprehensive datasets is often inadequate. To address this challenge, we introduce a unified medical image-text generative model called MINIM that is capable of synthesizing medical images of various organs across various imaging modalities based on textual instructions. Clinician evaluations and rigorous objective measurements validate the high quality of MINIM's synthetic images. MINIM exhibits an enhanced generative capability when presented with previously unseen data domains, demonstrating its potential as a generalist medical AI (GMAI). Our findings show that MINIM's synthetic images effectively augment existing datasets, boosting performance across multiple medical applications such as diagnostics, report generation and self-supervised learning. On average, MINIM enhances performance by 12% for ophthalmic, 15% for chest, 13% for brain and 17% for breast-related tasks. Furthermore, we demonstrate MINIM's potential clinical utility in the accurate prediction of HER2-positive breast cancer from MRI images. Using a large retrospective simulation analysis, we demonstrate MINIM's clinical potential by accurately identifying targeted therapy-sensitive EGFR mutations using lung cancer computed tomography images, which could potentially lead to improved 5-year survival rates. Although these results are promising, further validation and refinement in more diverse and prospective settings would greatly enhance the model's generalizability and robustness.
PMID: 39663467
ISSN: 1546-170x
CID: 5762792
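One straightforward way to use synthetic images of the sort MINIM generates is to pool them with real training data for a downstream task. The sketch below shows that with PyTorch's ConcatDataset; the directory layout, transforms, and the assumption that synthetic images are saved as ordinary image folders are illustrative, not the MINIM pipeline.
```python
# Minimal sketch (not the MINIM pipeline): augment a real image dataset with
# synthetic images by pooling the two before training a downstream model.
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

real_ds = datasets.ImageFolder("data/real", transform=tfm)        # assumed layout
synth_ds = datasets.ImageFolder("data/synthetic", transform=tfm)  # generated images

train_loader = DataLoader(ConcatDataset([real_ds, synth_ds]), batch_size=32, shuffle=True)

for images, labels in train_loader:
    # ... downstream training step (classifier, report generator, etc.) ...
    break
```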
Augmenting Large Language Models With Automated, Bibliometrics-Powered Literature Search for Knowledge Distillation: A Pilot Study for Common Spinal Pathologies
Kurland, David B; Alber, Daniel A; Palla, Adhith; de Souza, Daniel N; Lau, Darryl; Laufer, Ilya; Frempong-Boadu, Anthony K; Kondziolka, Douglas; Oermann, Eric K
BACKGROUND AND OBJECTIVES: Scholarly output is accelerating in medical domains, making it challenging to keep up with the latest neurosurgical literature. The emergence of large language models (LLMs) has facilitated rapid, high-quality text summarization. However, LLMs cannot autonomously conduct literature reviews and are prone to hallucinating source material. We devised a novel strategy that combines Reference Publication Year Spectroscopy (RPYS), a bibliometric technique for identifying foundational articles within a corpus, with LLMs to automatically summarize and cite salient details from articles. We demonstrate our approach for four common spinal conditions in a proof of concept. METHODS: RPYS identified seminal articles from the corpora of literature for cervical myelopathy, lumbar radiculopathy, lumbar stenosis, and adjacent segment disease. The article text was split into 1024-token chunks. Queries from three knowledge domains (surgical management, pathophysiology, and natural history) were constructed. The most relevant article chunks for each query were retrieved from a vector database using chain-of-thought prompting. LLMs automatically summarized the literature into a comprehensive narrative with fully referenced facts and statistics. Information was verified through manual review, and spine surgery faculty were surveyed for qualitative feedback. RESULTS: Our tandem approach cost less than $1 per condition and ran within 5 minutes. Generative Pre-trained Transformer-4 was the best-performing model, with a near-perfect 97.5% citation accuracy. Surveys of spine faculty helped refine the prompting scheme to improve the cohesion and accessibility of the summaries. The final artificial intelligence-generated text provided high-fidelity summaries of each pathology's most clinically relevant information. CONCLUSION: We demonstrate the rapid, automated summarization of seminal articles for four common spinal pathologies, with a generalizable workflow implemented using consumer-grade hardware. Our tandem strategy fuses bibliometrics and artificial intelligence to bridge the gap toward fully automated knowledge distillation, obviating the need for manual literature review and article selection.
PMID: 40662770
ISSN: 1524-4040
CID: 5897082
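The retrieval step in the methods above, splitting article text into chunks and pulling the most relevant ones for each query before summarization, might look roughly like the sketch below. TF-IDF similarity stands in for the embedding model and vector database used in the study, the chunking is word-based rather than token-based, and the article texts and query are placeholders.
```python
# Minimal sketch: chunk article text, index the chunks, and retrieve the ones
# most relevant to a query; the retrieved chunks would then be passed to an LLM
# prompt that asks for a cited summary.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def chunk(text: str, size: int = 200) -> list[str]:
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

articles = ["full text of seminal article 1 ...", "full text of article 2 ..."]  # placeholders
chunks = [c for article in articles for c in chunk(article)]

vectorizer = TfidfVectorizer()
chunk_matrix = vectorizer.fit_transform(chunks)

query = "surgical management of cervical myelopathy"
scores = cosine_similarity(vectorizer.transform([query]), chunk_matrix).ravel()
top_chunks = [chunks[i] for i in scores.argsort()[::-1][:5]]
```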
Predicting STA-MCA Anastomosis Success: Insights from FLOW 800 Hemodynamics [Letter]
Sangwon, Karl L; Oermann, Eric K; Nossek, Erez
PMID: 39307270
ISSN: 1878-8769
CID: 5766452
Economics and Equity of Large Language Models: Health Care Perspective
Nagarajan, Radha; Kondo, Midori; Salas, Franz; Sezgin, Emre; Yao, Yuan; Klotzman, Vanessa; Godambe, Sandip A; Khan, Naqi; Limon, Alfonso; Stephenson, Graham; Taraman, Sharief; Walton, Nephi; Ehwerhemuepha, Louis; Pandit, Jay; Pandita, Deepti; Weiss, Michael; Golden, Charles; Gold, Adam; Henderson, John; Shippy, Angela; Celi, Leo Anthony; Hogan, William R; Oermann, Eric K; Sanger, Terence; Martel, Steven
Large language models (LLMs) continue to exhibit noteworthy capabilities across a spectrum of areas, including emerging proficiencies across the health care continuum. Successful LLM implementation and adoption depend on digital readiness, modern infrastructure, a trained workforce, privacy, and an ethical regulatory landscape. These factors can vary significantly across health care ecosystems, dictating the choice of a particular LLM implementation pathway. This perspective discusses 3 LLM implementation pathways, the training-from-scratch pathway (TSP), the fine-tuned pathway (FTP), and the out-of-the-box pathway (OBP), as potential onboarding points for health systems while facilitating equitable adoption. The choice of a particular pathway is governed by needs as well as affordability. Therefore, the risks, benefits, and economics of these pathways across 4 major cloud service providers (Amazon, Microsoft, Google, and Oracle) are presented. Cost comparisons, such as on-demand and spot pricing across the cloud service providers for the 3 pathways, are presented for completeness, and the usefulness of managed services and cloud enterprise tools is elucidated. Managed services can complement the traditional workforce and expertise, while enterprise tools, such as federated learning, can overcome sample size challenges when implementing LLMs using health care data. Of the 3 pathways, TSP is expected to be the most resource-intensive regarding infrastructure and workforce while providing maximum customization, enhanced transparency, and performance. Because TSP trains the LLM using enterprise health care data, it is expected to harness the digital signatures of the population served by the health care system with the potential to impact outcomes. FTP's reliance on pretrained models is a limitation that may affect its performance, because the pretraining data may carry hidden bias and may not necessarily be health care-related. However, FTP provides a balance between customization, cost, and performance. While OBP can be rapidly deployed, it provides minimal customization and transparency without guaranteeing long-term availability. OBP may also present challenges in interfacing seamlessly with downstream applications in health care settings with variations in pricing and use over time. Lack of customization in OBP can significantly limit its ability to impact outcomes. Finally, potential applications of LLMs in health care, including conversational artificial intelligence, chatbots, summarization, and machine translation, are highlighted. While the 3 implementation pathways discussed in this perspective have the potential to facilitate equitable adoption and democratization of LLMs, transitions between them may be necessary as the needs of health systems evolve. Understanding the economics and trade-offs of these onboarding pathways can guide their strategic adoption and demonstrate value while impacting health care outcomes favorably.
PMID: 39541580
ISSN: 1438-8871
CID: 5753562
Hospitalization and Hospitalized Delirium Are Associated With Decreased Access to Kidney Transplantation and Increased Risk of Waitlist Mortality
Long, Jane J; Hong, Jingyao; Liu, Yi; Nalatwad, Akanksha; Li, Yiting; Ghildayal, Nidhi; Johnston, Emily A; Schwartzberg, Jordan; Ali, Nicole; Oermann, Eric; Mankowski, Michal; Gelb, Bruce E; Chanan, Emily L; Chodosh, Joshua L; Mathur, Aarti; Segev, Dorry L; McAdams-DeMarco, Mara A
BACKGROUND: Kidney transplant (KT) candidates often experience hospitalizations, increasing their delirium risk. Hospitalizations and delirium are associated with worse post-KT outcomes, yet their relationship with pre-KT outcomes is less clear. Pre-KT delirium may worsen access to KT due to its negative impact on cognition and ability to maintain overall health. METHODS: Using a prospective cohort of 2374 KT candidates evaluated at a single center (2009-2020), we abstracted hospitalizations and associated delirium records after listing via chart review. We evaluated associations between waitlist mortality and likelihood of KT with hospitalizations and hospitalized delirium using competing risk models and tested whether associations differed by gerontologic factors. RESULTS: < 0.001), with those aged ≥65 having a 61% lower likelihood of KT. CONCLUSION: Hospitalization and delirium are associated with worse pre-KT outcomes and have serious implications for candidates' access to KT. Providers should work to reduce preventable instances of delirium.
PMID: 39498973
ISSN: 1399-0012
CID: 5766752
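For readers unfamiliar with the competing risk framework used in this study, the sketch below estimates the cumulative incidence of transplant when waitlist death is a competing event, using the Aalen-Johansen estimator from lifelines. The file name, column names, and event coding are assumptions for illustration; the study's actual regression models are not reproduced here.
```python
# Minimal sketch (not the study's code): cumulative incidence of KT when death
# on the waitlist is a competing event, via the Aalen-Johansen estimator.
import pandas as pd
from lifelines import AalenJohansenFitter

# Hypothetical columns: years on the waitlist and an event code
# (0 = censored, 1 = received KT, 2 = died on the waitlist).
df = pd.read_csv("waitlist_candidates.csv")  # assumed file layout

ajf = AalenJohansenFitter()
ajf.fit(df["years_on_list"], df["event"], event_of_interest=1)
print(ajf.cumulative_density_)  # cumulative incidence of KT over time
```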
ChatGPT Solving Complex Kidney Transplant Cases: A Comparative Study With Human Respondents
Mankowski, Michal A; Jaffe, Ian S; Xu, Jingzhi; Bae, Sunjae; Oermann, Eric K; Aphinyanaphongs, Yindalon; McAdams-DeMarco, Mara A; Lonze, Bonnie E; Orandi, Babak J; Stewart, Darren; Levan, Macey; Massie, Allan; Gentry, Sommer; Segev, Dorry L
INTRODUCTION: ChatGPT has shown the ability to answer clinical questions in general medicine but may be constrained by the specialized nature of kidney transplantation. Thus, it is important to explore how ChatGPT can be used in kidney transplantation and how its knowledge compares to human respondents. METHODS: We prompted ChatGPT versions 3.5, 4, and 4 Visual (4 V) with 12 multiple-choice questions related to six kidney transplant cases from the 2013-2015 American Society of Nephrology (ASN) fellowship program quizzes. We compared the performance of ChatGPT with US nephrology fellowship program directors, nephrology fellows, and the audience of the ASN's annual Kidney Week meeting. RESULTS: Overall, ChatGPT 4 V correctly answered 10 out of 12 questions, showing a performance level comparable to nephrology fellows (the group majority correctly answered 9 of 12 questions) and training program directors (11 of 12). This surpassed ChatGPT 4 (7 of 12 correct) and 3.5 (5 of 12). All three ChatGPT versions failed to correctly answer questions where the consensus among human respondents was low. CONCLUSION: Each iterative version of ChatGPT performed better than the prior version, with version 4 V achieving performance on par with nephrology fellows and training program directors. While it shows promise in understanding and answering kidney transplantation questions, ChatGPT should be seen as a complementary tool to human expertise rather than a replacement.
PMCID: 11441623
PMID: 39329220
ISSN: 1399-0012
CID: 5714092
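As a rough illustration of how such a comparison can be scripted, the sketch below poses a single multiple-choice question to a chat model through the OpenAI API and prints its answer. The model name, system prompt, and placeholder question text are assumptions; the study's own prompting setup is not described here.
```python
# Minimal sketch: ask a chat model one board-style multiple-choice question and
# record its letter choice. Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

question = (
    "A kidney transplant recipient develops ...\n"
    "A) ...\nB) ...\nC) ...\nD) ...\n"
    "Answer with a single letter."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are answering nephrology board-style questions."},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```
Repeating this over the 12 questions and comparing the letter choices with the human respondents' majority answers would reproduce the structure, though not the exact conditions, of the comparison reported above.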