Searched for: in-biosketch:true person:oermae01

Total Results: 149

Automating the Referral of Bone Metastases Patients With and Without the Use of Large Language Models

Sangwon, Karl L; Han, Xu; Becker, Anton; Zhang, Yuchong; Ni, Richard; Zhang, Jeff; Alber, Daniel Alexander; Alyakin, Anton; Nakatsuka, Michelle; Fabbri, Nicola; Aphinyanaphongs, Yindalon; Yang, Jonathan T; Chachoua, Abraham; Kondziolka, Douglas; Laufer, Ilya; Oermann, Eric Karl
BACKGROUND AND OBJECTIVES/OBJECTIVE:Bone metastases affect more than 4.8% of patients with cancer annually, and spinal metastases in particular require urgent intervention to prevent neurological complications. However, the current process of manually reviewing radiological reports can delay specialist referrals. We hypothesized that natural language processing (NLP) review of routine radiology reports could automate the referral process for timely multidisciplinary care of spinal metastases. METHODS:We assessed 3 NLP models-a rule-based regular expression (RegEx) model, GPT-4, and a specialized Bidirectional Encoder Representations from Transformers (BERT) model (NYUTron)-for automated detection and referral of bone metastases. Study inclusion criteria targeted patients with active cancer diagnoses who underwent advanced imaging (computed tomography, MRI, or positron emission tomography) without previous specialist referral. We defined 2 separate tasks: identifying clinically significant bone-metastasis terms (lexical detection) and identifying cases needing specialist follow-up (clinical referral). Models were developed using 3754 hand-labeled advanced imaging studies in 2 phases: phase 1 focused on spine metastases, and phase 2 generalized to bone metastases. Standard performance metrics were evaluated and compared across all stages and tasks. RESULTS:In the lexical detection task, a simple RegEx model achieved the highest performance (sensitivity 98.4%, specificity 97.6%, F1 = 0.965), followed by NYUTron (sensitivity 96.8%, specificity 89.9%, F1 = 0.787). For the clinical referral task, RegEx also demonstrated superior performance (sensitivity 92.3%, specificity 87.5%, F1 = 0.936), followed by a fine-tuned NYUTron model (sensitivity 90.0%, specificity 66.7%, F1 = 0.750).
CONCLUSION/CONCLUSIONS:An NLP-based automated referral system can accurately identify patients with bone metastases requiring specialist evaluation. Compared with advanced NLP models, a simple RegEx model built on expert-informed rules excels at syntax-based identification and efficient referral recommendation. This system could significantly reduce missed follow-ups and enhance timely intervention for patients with bone metastases.
PMID: 40823772
ISSN: 1524-4040
CID: 5908782
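A rule-based lexical detector of the kind this abstract describes can be sketched in a few lines of Python. The term list below is a hypothetical illustration only; the study's actual expert-informed rule set is not reproduced here.

```python
import re

# Hypothetical bone-metastasis terms for illustration; not the study's actual rules.
PATTERN = re.compile(
    r"\b(osseous|bone|spinal|vertebral)\s+(metastas[ei]s|mets)\b"
    r"|\bmetastatic\s+(bone|spine)\s+disease\b",
    re.IGNORECASE,
)

def flag_report(report_text: str) -> bool:
    """Return True if a radiology report mentions a bone-metastasis term."""
    return PATTERN.search(report_text) is not None
```

Note that pure lexical matching like this flags negated mentions ("no bone mets") as well, which is one reason the study evaluates the clinical referral task separately from lexical detection.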

Introduction. Artificial intelligence in neurosurgery: transforming a data-intensive specialty

Hopkins, Benjamin S; Sutherland, Garnette R; Browd, Samuel R; Donoho, Daniel A; Oermann, Eric K; Schirmer, Clemens M; Pennicooke, Brenton; Asaad, Wael F
PMID: 40591964
ISSN: 1092-0684
CID: 5887762

Outcomes of concurrent versus non-concurrent immune checkpoint inhibition with stereotactic radiosurgery for melanoma brain metastases

Fu, Allen Ye; Bernstein, Kenneth; Zhang, Jeff; Silverman, Joshua; Mehnert, Janice; Sulman, Erik P; Oermann, Eric Karl; Kondziolka, Douglas
PURPOSE/OBJECTIVE:Immune checkpoint inhibition (ICI) has revolutionized the treatment of melanoma. Stereotactic radiosurgery combined with ICI has shown promise for improving clinical outcomes in prior studies of patients with metastatic melanoma and brain metastases. However, others have suggested that concurrent ICI with stereotactic radiosurgery can increase the risk of complications. METHODS:We present a retrospective, single-institution analysis of 98 patients with a median follow-up of 17.1 months managed with immune checkpoint inhibition and stereotactic radiosurgery, concurrently or non-concurrently. A total of 55 patients were included in the concurrent group and 43 patients in the non-concurrent group. Cox proportional hazards models were used to assess the relation between concurrent or non-concurrent treatment and overall survival or local progression-free survival, with significance assessed by the Wald test. Differences between groups in adverse events, including adverse radiation effects, perilesional edema, and neurological deficits, were tested with the Chi-square or Fisher's exact test. RESULTS:Patients receiving concurrent versus non-concurrent ICI showed a significant increase in overall survival (median 37.1 months, 95% CI: 18.9 months - NA, versus median 11.4 months, 95% CI: 6.4-33.2 months, p = 0.0056) but not in local progression-free survival. There were no significant differences between groups with regard to adverse radiation effects (2% versus 3%), perilesional edema (20% versus 9%), or neurological deficits (3% versus 20%). CONCLUSION/CONCLUSIONS:These results suggest that concurrent ICI delivered within 4 weeks of SRS does not increase the risk of neurological complications.
PMID: 40183901
ISSN: 1573-7373
CID: 5819412
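The survival comparison in this study rests on standard estimators. As a minimal sketch of the underlying machinery, here is a Kaplan-Meier estimator in plain Python (toy data, not the study's cohort; ties are handled one observation at a time, which is adequate for illustration):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.
    times: follow-up in months; events: 1 = death observed, 0 = censored.
    Returns a list of (time, survival_probability) steps at each event time."""
    at_risk = len(times)
    surv = 1.0
    curve = []
    for t, e in sorted(zip(times, events)):
        if e:  # death observed at time t
            surv *= (at_risk - 1) / at_risk
            curve.append((t, surv))
        at_risk -= 1  # deaths and censored patients both leave the risk set
    return curve
```

The median overall survival reported in the abstract corresponds to the first time at which this curve drops to 0.5 or below.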

Large-Scale Multi-omic Biosequence Transformers for Modeling Protein-Nucleic Acid Interactions

Chen, Sully F; Steele, Robert J; Hocky, Glen M; Lemeneh, Beakal; Lad, Shivanand P; Oermann, Eric K
The transformer architecture has revolutionized bioinformatics and driven progress in the understanding and prediction of the properties of biomolecules. To date, most biosequence transformers have been trained on a single omic (either proteins or nucleic acids) and have seen incredible success in downstream tasks in each domain, with particularly noteworthy breakthroughs in protein structural modeling. However, single-omic pre-training limits the ability of these models to capture cross-modal interactions. Here we present OmniBioTE, the largest open-source multi-omic model, trained on over 250 billion tokens of mixed protein and nucleic acid data. We show that despite only being trained on unlabelled sequence data, OmniBioTE learns joint representations consistent with the central dogma of molecular biology. We further demonstrate that OmniBioTE achieves state-of-the-art results in predicting the change in Gibbs free energy (∆G) of the binding interaction between a given nucleic acid and protein. Remarkably, we show that multi-omic biosequence transformers emergently learn useful structural information without any a priori structural training, allowing us to predict which protein residues are most involved in the protein-nucleic acid binding interaction. Lastly, compared to single-omic controls trained with identical compute, OmniBioTE demonstrates superior performance-per-FLOP and absolute accuracy across both multi-omic and single-omic benchmarks, highlighting the power of a unified modeling approach for biological sequences.
PMCID:11998858
PMID: 40236839
ISSN: 2331-8422
CID: 5883432

CNS-CLIP: Transforming a Neurosurgical Journal Into a Multimodal Medical Model

Alyakin, Anton; Kurland, David; Alber, Daniel Alexander; Sangwon, Karl L; Li, Danxun; Tsirigos, Aristotelis; Leuthardt, Eric; Kondziolka, Douglas; Oermann, Eric Karl
BACKGROUND AND OBJECTIVES/OBJECTIVE:Classical biomedical data science models are trained on a single modality and aimed at one specific task. However, the exponential increase in the size and capabilities of foundation models inside and outside medicine shows a shift toward task-agnostic models using large-scale, often internet-based, data. Recent research into smaller foundation models trained on specific literature, such as programming textbooks, demonstrated that they can display capabilities similar or superior to those of large generalist models, suggesting a potential middle ground between small task-specific and large foundation models. This study introduces a domain-specific multimodal model, Congress of Neurological Surgeons (CNS)-Contrastive Language-Image Pretraining (CLIP), developed for neurosurgical applications and leveraging data exclusively from Neurosurgery Publications. METHODS:We constructed a multimodal data set of articles from Neurosurgery Publications through PDF data collection and figure-caption extraction, using an artificial intelligence pipeline for quality control. Our final data set included 24 021 figure-caption pairs. We then developed a fine-tuning protocol for the OpenAI CLIP model. The model was evaluated on tasks including neurosurgical information retrieval, computed tomography imaging classification, and zero-shot ImageNet classification. RESULTS:CNS-CLIP demonstrated superior performance in neurosurgical information retrieval with a Top-1 accuracy of 24.56%, compared with 8.61% for the baseline. The average area under the receiver operating characteristic curve across 6 neuroradiology tasks achieved by CNS-CLIP was 0.95, slightly superior to OpenAI's CLIP at 0.94 and significantly outperforming a vanilla vision transformer at 0.62. In generalist classification, CNS-CLIP reached a Top-1 accuracy of 47.55%, a decrease from the baseline of 52.37%, demonstrating a catastrophic forgetting phenomenon.
CONCLUSION/CONCLUSIONS:This study presents a pioneering effort in building a domain-specific multimodal model using data from a medical society publication. The results indicate that domain-specific models, while less globally versatile, can offer advantages in specialized contexts. This emphasizes the importance of using tailored data and domain-focused development in training foundation models in neurosurgery and general medicine.
PMID: 39636129
ISSN: 1524-4040
CID: 5780182
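The Top-1 retrieval metric reported for CNS-CLIP can be illustrated with a toy cosine-similarity retriever over paired image and caption embeddings. This is a generic sketch of CLIP-style retrieval scoring, not the study's evaluation code; in practice the vectors would come from the trained encoders.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top1_accuracy(image_embs, text_embs):
    """Fraction of images whose highest-similarity caption is the paired
    caption at the same index (Top-1 retrieval accuracy)."""
    hits = 0
    for i, img in enumerate(image_embs):
        best = max(range(len(text_embs)), key=lambda j: cosine(img, text_embs[j]))
        hits += best == i
    return hits / len(image_embs)
```

With a perfectly aligned embedding space every image retrieves its own caption (accuracy 1.0); the 24.56% reported above reflects how hard this matching is over thousands of figure-caption pairs.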

MetaGP: A generative foundation model integrating electronic health records and multimodal imaging for addressing unmet clinical needs

Liu, Fei; Zhou, Hongyu; Wang, Kai; Yu, Yunfang; Gao, Yuanxu; Sun, Zhuo; Liu, Sian; Sun, Shanshan; Zou, Zixing; Li, Zhuomin; Li, Bingzhou; Miao, Hanpei; Liu, Yang; Hou, Taiwa; Fok, Manson; Patil, Nivritti Gajanan; Xue, Kanmin; Li, Ting; Oermann, Eric; Yin, Yun; Duan, Lian; Qu, Jia; Huang, Xiaoying; Jin, Shengwei; Zhang, Kang
Artificial intelligence makes strides in specialized diagnostics but faces challenges in complex clinical scenarios, such as rare disease diagnosis and emergency condition identification. To address these limitations, we develop Meta General Practitioner (MetaGP), a 32-billion-parameter generative foundation model trained on extensive datasets, including over 8 million electronic health records, biomedical literature, and medical textbooks. MetaGP demonstrates robust diagnostic capabilities, achieving accuracy comparable to experienced clinicians. In rare disease cases, it achieves an average diagnostic score of 1.57, surpassing GPT-4's 0.93. For emergency conditions, it improves diagnostic accuracy for junior and mid-level clinicians by 53% and 46%, respectively. MetaGP also excels in generating medical imaging reports, producing high-quality outputs for chest X-rays and computed tomography, often rated comparable to or superior to physician-authored reports. These findings highlight MetaGP's potential to transform clinical decision-making across diverse medical contexts.
PMID: 40187356
ISSN: 2666-3791
CID: 5819502

Generalizability of Kidney Transplant Data in Electronic Health Records - The Epic Cosmos Database versus the Scientific Registry of Transplant Recipients

Mankowski, Michal A; Bae, Sunjae; Strauss, Alexandra T; Lonze, Bonnie E; Orandi, Babak J; Stewart, Darren; Massie, Allan B; McAdams-DeMarco, Mara A; Oermann, Eric K; Habal, Marlena; Iturrate, Eduardo; Gentry, Sommer E; Segev, Dorry L; Axelrod, David
Developing real-world evidence from electronic health records (EHR) is vital to advance kidney transplantation (KT). We assessed the feasibility of studying KT using the Epic Cosmos aggregated EHR dataset, which includes 274 million unique individuals cared for in 238 U.S. health systems, by comparing it with the Scientific Registry of Transplant Recipients (SRTR). We identified 69,418 KT recipients transplanted between January 2014 and December 2022 in Cosmos (39.4% of all US KT transplants during this period). Demographics and clinical characteristics of recipients captured in Cosmos were consistent with the overall SRTR cohort. Survival estimates were generally comparable, although there were some differences in long-term survival. At 7 years post-transplant, patient survival was 80.4% in Cosmos and 77.8% in SRTR. Multivariable Cox regression showed consistent associations between clinical factors and mortality in both cohorts, with minor discrepancies in the associations between death and both age and race. In summary, Cosmos provides a reliable platform for KT research, allowing EHR-level clinical granularity not available with either the transplant registry or healthcare claims. Consequently, Cosmos will enable novel analyses to improve our understanding of KT management on a national scale.
PMID: 39550008
ISSN: 1600-6143
CID: 5754062

Trials and Tribulations: Responses of ChatGPT to Patient Questions About Kidney Transplantation

Xu, Jingzhi; Mankowski, Michal; Vanterpool, Karen B; Strauss, Alexandra T; Lonze, Bonnie E; Orandi, Babak J; Stewart, Darren; Bae, Sunjae; Ali, Nicole; Stern, Jeffrey; Mattoo, Aprajita; Robalino, Ryan; Soomro, Irfana; Weldon, Elaina; Oermann, Eric K; Aphinyanaphongs, Yin; Sidoti, Carolyn; McAdams-DeMarco, Mara; Massie, Allan B; Gentry, Sommer E; Segev, Dorry L; Levan, Macey L
PMID: 39477825
ISSN: 1534-6080
CID: 5747132

Is It Really "Artificial" Intelligence?

Kondziolka, Douglas; Oermann, Eric K
PMID: 39812480
ISSN: 1524-4040
CID: 5883422

Self-improving generative foundation model for synthetic medical image generation and clinical applications

Wang, Jinzhuo; Wang, Kai; Yu, Yunfang; Lu, Yuxing; Xiao, Wenchao; Sun, Zhuo; Liu, Fei; Zou, Zixing; Gao, Yuanxu; Yang, Lei; Zhou, Hong-Yu; Miao, Hanpei; Zhao, Wenting; Huang, Lisha; Zeng, Lingchao; Guo, Rui; Chong, Ieng; Deng, Boyu; Cheng, Linling; Chen, Xiaoniao; Luo, Jing; Zhu, Meng-Hua; Baptista-Hon, Daniel; Monteiro, Olivia; Li, Ming; Ke, Yu; Li, Jiahui; Zeng, Simiao; Guan, Taihua; Zeng, Jin; Xue, Kanmin; Oermann, Eric; Luo, Huiyan; Yin, Yun; Zhang, Kang; Qu, Jia
In many clinical and research settings, the scarcity of high-quality medical imaging datasets has hampered the potential of artificial intelligence (AI) clinical applications. This issue is particularly pronounced in less common conditions, underrepresented populations and emerging imaging modalities, where the availability of diverse and comprehensive datasets is often inadequate. To address this challenge, we introduce a unified medical image-text generative model called MINIM that is capable of synthesizing medical images of various organs across various imaging modalities based on textual instructions. Clinician evaluations and rigorous objective measurements validate the high quality of MINIM's synthetic images. MINIM exhibits an enhanced generative capability when presented with previously unseen data domains, demonstrating its potential as a generalist medical AI (GMAI). Our findings show that MINIM's synthetic images effectively augment existing datasets, boosting performance across multiple medical applications such as diagnostics, report generation and self-supervised learning. On average, MINIM enhances performance by 12% for ophthalmic, 15% for chest, 13% for brain and 17% for breast-related tasks. Furthermore, we demonstrate MINIM's potential clinical utility in the accurate prediction of HER2-positive breast cancer from MRI images. Using a large retrospective simulation analysis, we demonstrate MINIM's clinical potential by accurately identifying targeted therapy-sensitive EGFR mutations using lung cancer computed tomography images, which could potentially lead to improved 5-year survival rates. Although these results are promising, further validation and refinement in more diverse and prospective settings would greatly enhance the model's generalizability and robustness.
PMID: 39663467
ISSN: 1546-170x
CID: 5762792