Searched for: in-biosketch:true person:oermae01
Total Results: 119

Big Data Defined: A Practical Review for Neurosurgeons

Bydon, Mohamad; Schirmer, Clemens M; Oermann, Eric K; Kitagawa, Ryan S; Pouratian, Nader; Davies, Jason; Sharan, Ashwini; Chambless, Lola B
BACKGROUND: Modern science and healthcare generate vast amounts of data, and, coupled with increasingly inexpensive and accessible computing, these data present a tremendous opportunity to improve care. A better understanding of data science and its relationship to neurosurgical practice will be increasingly important as we transition into this modern "big data" era. METHODS: A review of the literature was performed for key articles referencing big data for neurosurgical care or related topics. RESULTS: In the present report, we first defined the nature and scope of data science from a technical perspective. We then discussed its relationship to modern neurosurgical practice, highlighting key references that might form a useful introductory reading list. CONCLUSIONS: Numerous challenges exist going forward; however, organized neurosurgery has an important role in fostering and facilitating these efforts to merge data science with neurosurgical practice.
PMID: 31562965
ISSN: 1878-8769
CID: 4491542

Deep Learning and Neurology: A Systematic Review

Valliani, Aly Al-Amyn; Ranti, Daniel; Oermann, Eric Karl
Deciphering the massive volume of complex electronic data that hospital systems have compiled over the past decades has the potential to revolutionize modern medicine, but it also presents significant challenges. Deep learning is uniquely suited to address these challenges, and recent advances in techniques and hardware have poised the field of medical machine learning for transformational growth. The clinical neurosciences are particularly well positioned to benefit from these advances given the subtle presentation of symptoms typical of neurologic disease. Here we review the domains in which deep learning algorithms have already provided impetus for change: medical image analysis for the improved diagnosis of Alzheimer's disease and the early detection of acute neurologic events; medical image segmentation for quantitative evaluation of neuroanatomy and vasculature; connectome mapping for the diagnosis of Alzheimer's disease, autism spectrum disorder, and attention deficit hyperactivity disorder; and mining of microscopic electroencephalogram signals and granular genetic signatures. We additionally note important challenges to integrating deep learning tools into the clinical setting and discuss the barriers that must be overcome.
PMCID:6858915
PMID: 31435868
ISSN: 2193-8253
CID: 4491522

Overlapping Surgeries and Surgical Prudence

Oermann, Eric Karl; Gologorsky, Yakov
PMID: 31132482
ISSN: 1878-8769
CID: 4491472

Time on Therapy for at Least Three Months Correlates with Overall Survival in Metastatic Renal Cell Carcinoma

Chen, Viola J; Hernandez-Meza, Gabriela; Agrawal, Prashasti; Zhang, Chiyuan A; Xie, Lijia; Gong, Cynthia L; Hoerner, Christian R; Srinivas, Sandy; Oermann, Eric K; Fan, Alice C
With 15 drugs currently approved for the treatment of metastatic renal cell carcinoma (mRCC) and even more combination regimens with immunotherapy on the horizon, there remains a distinct lack of molecular biomarkers for therapeutic efficacy. Our study reports on real-world clinical outcomes of mRCC patients from a tertiary academic medical center treated with empirically selected standard-of-care therapy. We utilized the Stanford Renal Cell Carcinoma Database (RCCD) to report on various outcome measures, including overall survival (OS) and the median number of lines of targeted therapy received from the time of metastatic diagnosis. We found that most metastatic patients did not survive long enough to attempt even half of the available targeted therapies. A line of therapy was considered to provide "clinical benefit" if the patient remained on drug treatment for three months or longer. We noted that patients who failed to receive clinical benefit within the first two lines of therapy could still experience clinical benefit in later lines. Moreover, patients with clinical benefit in at least one line of therapy experienced significantly longer OS than those without clinical benefit in any line. Developing biomarkers that identify patients who will receive clinical benefit in individual lines of therapy is one potential strategy for achieving rational drug sequencing in mRCC.
PMCID:6678132
PMID: 31319594
ISSN: 2072-6694
CID: 4491502
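
The "clinical benefit" definition and survival comparison described in the entry above can be sketched roughly as follows. This is an illustrative sketch only: the toy data, column names, and the use of a log-rank test via the lifelines package are assumptions, not the authors' actual analysis code.

```python
# Hypothetical sketch: flag any line of therapy lasting >= 3 months as
# "clinical benefit", then compare overall survival between patients with and
# without benefit in at least one line. Data and column names are invented.
import pandas as pd
from lifelines.statistics import logrank_test

lines = pd.DataFrame({
    "patient_id": [1, 1, 2, 3],
    "months_on_therapy": [5.0, 1.5, 2.0, 8.0],
})
survival = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "os_months": [30.0, 7.0, 44.0],
    "death_observed": [1, 1, 0],
})

# Clinical benefit = at least one line of therapy lasting three months or longer.
benefit = (
    lines.assign(clinical_benefit=lines["months_on_therapy"] >= 3.0)
         .groupby("patient_id")["clinical_benefit"]
         .any()
         .rename("any_benefit")
         .reset_index()
)
df = survival.merge(benefit, on="patient_id")

with_benefit = df[df["any_benefit"]]
without_benefit = df[~df["any_benefit"]]
result = logrank_test(
    with_benefit["os_months"], without_benefit["os_months"],
    event_observed_A=with_benefit["death_observed"],
    event_observed_B=without_benefit["death_observed"],
)
print("log-rank p-value:", result.p_value)
```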

Machine learning for semi-automated classification of glioblastoma, brain metastasis and central nervous system lymphoma using magnetic resonance advanced imaging

Swinburne, Nathaniel C; Schefflein, Javin; Sakai, Yu; Oermann, Eric Karl; Titano, Joseph J; Chen, Iris; Tadayon, Sayedhedayatollah; Aggarwal, Amit; Doshi, Amish; Nael, Kambiz
BACKGROUND: Differentiating glioblastoma, brain metastasis, and central nervous system lymphoma (CNSL) on conventional magnetic resonance imaging (MRI) can present a diagnostic dilemma due to the potential for overlapping imaging features. We investigated whether machine learning evaluation of multimodal MRI can reliably differentiate these entities. METHODS: Preoperative brain MRI including diffusion weighted imaging (DWI), dynamic contrast enhanced (DCE), and dynamic susceptibility contrast (DSC) perfusion in patients with glioblastoma, lymphoma, or metastasis was retrospectively reviewed. Perfusion maps (rCBV, rCBF), permeability maps (K-trans, Kep, Vp, Ve), ADC, T1C+ and T2/FLAIR images were coregistered, and two separate volumes of interest (VOIs) were obtained from the enhancing tumor and non-enhancing T2 hyperintense (NET2) regions. The tumor volumes obtained from these VOIs were used for supervised training of support vector classifier (SVC) and multilayer perceptron (MLP) models. Validation of the trained models was performed on unlabeled cases using the leave-one-subject-out method. Head-to-head and multiclass models were created. Accuracies of the multiclass models were compared against two human interpreters reviewing conventional and diffusion-weighted MR images. RESULTS: Twenty-six patients with histopathologically proven glioblastoma (n=9), metastasis (n=9), and CNS lymphoma (n=8) were included. The trained multiclass ML models discriminated the three pathologic classes with a maximum accuracy of 69.2% (18 of 26; kappa 0.540, P=0.01), achieved by an MLP trained with the VpNET2 tumor volumes. The two human readers achieved accuracies of 65.4% (17 of 26) and 80.8% (21 of 26), respectively. Using the MLP VpNET2 model as a computer-aided diagnosis (CADx) tool for cases in which the human reviewers disagreed on the diagnosis yielded correct diagnoses in 5 (19.2%) additional cases. CONCLUSIONS: Our trained multiclass MLP using VpNET2 can differentiate glioblastoma, brain metastasis, and CNS lymphoma with modest diagnostic accuracy and provides an approximately 19% increase in diagnostic yield when added to routine human interpretation.
PMCID:6603356
PMID: 31317002
ISSN: 2305-5839
CID: 4491482
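
The classification setup described in the entry above (SVC and MLP classifiers validated with leave-one-subject-out cross-validation) can be sketched as below. The imaging-derived perfusion and permeability features are replaced with random stand-ins, and the scikit-learn model settings are assumptions rather than the authors' configuration.

```python
# Minimal sketch of the supervised classification step: support vector and
# multilayer-perceptron classifiers evaluated with leave-one-subject-out
# cross-validation. Feature extraction from the VOIs is not reproduced;
# random numbers stand in for the per-tumor feature vectors.
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(26, 10))               # 26 subjects, 10 stand-in imaging features
y = np.array([0] * 9 + [1] * 9 + [2] * 8)   # glioblastoma, metastasis, CNS lymphoma

for name, model in [("SVC", SVC(kernel="linear")),
                    ("MLP", MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000))]:
    scores = cross_val_score(model, X, y, cv=LeaveOneOut())
    print(name, "leave-one-subject-out accuracy:", scores.mean())
```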

Detecting insertion, substitution, and deletion errors in radiology reports using neural sequence-to-sequence models

Zech, John; Forde, Jessica; Titano, Joseph J; Kaji, Deepak; Costa, Anthony; Oermann, Eric Karl
BACKGROUND: Errors in grammar, spelling, and usage in radiology reports are common. To automatically detect inappropriate insertions, deletions, and substitutions of words in radiology reports, we proposed using a neural sequence-to-sequence (seq2seq) model. METHODS: Head CT and chest radiograph reports from Mount Sinai Hospital (MSH) (n=61,722 and 818,978, respectively), Mount Sinai Queens (MSQ) (n=30,145 and 194,309, respectively), and MIMIC-III (n=32,259 and 54,685, respectively) were converted into sentences. Insertions, substitutions, and deletions of words were randomly introduced. Seq2seq models were trained using corrupted sentences as input to predict the original uncorrupted sentences. Three models were trained: one using head CTs from MSH, one using chest radiographs from MSH, and one using head CTs from all three collections. Model performance was assessed across different sites and modalities. A sample of original, uncorrupted sentences was manually reviewed for any error in syntax, usage, or spelling to estimate the real-world proofreading performance of the algorithm. RESULTS: Seq2seq detected 90.3% and 88.2% of corrupted sentences with 97.7% and 98.8% specificity in same-site, same-modality test sets for head CTs and chest radiographs, respectively. Manual review of original, uncorrupted same-site, same-modality head CT sentences demonstrated a seq2seq positive predictive value (PPV) of 0.393 (157/400; 95% CI, 0.346-0.441) and negative predictive value (NPV) of 0.986 (789/800; 95% CI, 0.976-0.992) for detecting sentences containing real-world errors, with an estimated sensitivity of 0.389 (95% CI, 0.267-0.542) and specificity of 0.986 (95% CI, 0.985-0.987) over n=86,211 uncorrupted training examples. CONCLUSIONS: Seq2seq models can be highly effective at detecting erroneous insertions, deletions, and substitutions of words in radiology reports. To achieve high performance, these models require site- and modality-specific training examples. Incorporating additional targeted training data could further improve performance in detecting real-world errors in reports.
PMCID:6603352
PMID: 31317003
ISSN: 2305-5839
CID: 4491492
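
The corruption step described in the entry above (randomly inserting, substituting, and deleting words so a seq2seq model can learn to recover the original sentence) can be sketched compactly as follows. The helper name, vocabulary, and corruption rate are illustrative assumptions, and the seq2seq model itself is not reproduced.

```python
# Illustrative sketch of synthetic corruption for training data: each word may
# be deleted, substituted, or followed by an inserted word with equal chance,
# controlled by an overall corruption rate p.
import random

def corrupt(sentence, vocab, p=0.1, seed=None):
    rng = random.Random(seed)
    out = []
    for word in sentence.split():
        r = rng.random()
        if r < p / 3:
            continue                          # deletion
        elif r < 2 * p / 3:
            out.append(rng.choice(vocab))     # substitution
        elif r < p:
            out.append(word)
            out.append(rng.choice(vocab))     # insertion
        else:
            out.append(word)                  # leave unchanged
    return " ".join(out)

vocab = ["no", "acute", "intracranial", "hemorrhage", "is", "seen"]
original = "no acute intracranial hemorrhage is seen"
print(corrupt(original, vocab, p=0.3, seed=42))
```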

Artificial Intelligence in Clinical Neurosciences

Oermann, Eric Karl; Gologorsky, Yakov
PMID: 31546319
ISSN: 1878-8769
CID: 4491532

CANDI: an R package and Shiny app for annotating radiographs and evaluating computer-aided diagnosis

Badgeley, Marcus A; Liu, Manway; Glicksberg, Benjamin S; Shervey, Mark; Zech, John; Shameer, Khader; Lehar, Joseph; Oermann, Eric K; McConnell, Michael V; Snyder, Thomas M; Dudley, Joel T
MOTIVATION: Radiologists have used algorithms for Computer-Aided Diagnosis (CAD) for decades. These algorithms use machine learning with engineered features, and there have been mixed findings on whether they improve radiologists' interpretations. Deep learning offers superior performance but requires more training data and has not been evaluated in joint algorithm-radiologist decision systems. RESULTS: We developed the Computer-Aided Note and Diagnosis Interface (CANDI) for collaboratively annotating radiographs and evaluating how algorithms alter human interpretation. The annotation app collects classification, segmentation, and image captioning training data, and the evaluation app randomizes the availability of CAD tools to facilitate clinical trials on radiologist enhancement. AVAILABILITY AND IMPLEMENTATION: Demonstrations and source code are hosted at https://candi.nextgenhealthcare.org and https://github.com/mbadge/candi, respectively, under the GPL-3 license. SUPPLEMENTARY INFORMATION: Supplementary material is available at Bioinformatics online.
PMCID:6499410
PMID: 30304439
ISSN: 1367-4811
CID: 4491422

Revised Cardiac Risk Index as a Predictor for Myocardial Infarction and Cardiac Arrest Following Posterior Lumbar Decompression

Bronheim, Rachel S; Oermann, Eric K; Bronheim, David S; Caridi, John M
STUDY DESIGN: A retrospective analysis of prospectively collected data. OBJECTIVE: The aim of this study was to determine the ability of the Revised Cardiac Risk Index (RCRI) to predict adverse cardiac events following posterior lumbar decompression (PLD). SUMMARY OF BACKGROUND DATA: PLD is an increasingly common procedure used to treat a variety of degenerative spinal conditions. The RCRI is used to predict risk for cardiac events following noncardiac surgery. There is a paucity of literature that directly addresses the relationship between RCRI and outcomes following PLD, specifically the discriminative ability of the RCRI to predict adverse postoperative cardiac events. METHODS: ACS-NSQIP was used to identify patients undergoing PLD from 2006 to 2014; 52,066 patients met inclusion criteria. Multivariate and ROC analyses were used to identify associations between RCRI and postoperative complications. RESULTS: Membership in the RCRI = 1 cohort was a predictor for myocardial infarction (MI) [odds ratio (OR) = 3.3, P = 0.002] and cardiac arrest requiring cardiopulmonary resuscitation (CPR) (OR = 3.4, P = 0.013). Membership in the RCRI = 2 cohort was a predictor for MI (OR = 5.9, P = 0.001) and cardiac arrest requiring CPR (OR = 12.5). Membership in the RCRI = 3 cohort was a predictor for MI (OR = 24.9) and cardiac arrest requiring CPR (OR = 26.9, P = 0.006). RCRI had good discriminative ability for predicting both MI [area under the curve (AUC) = 0.876] and cardiac arrest requiring CPR (AUC = 0.855). The RCRI had better discriminative ability to predict these outcomes than did ASA status, whose discriminative ability was "fair" (AUC = 0.799) and "poor" (AUC = 0.674), respectively. P < 0.001 unless otherwise specified. CONCLUSIONS: RCRI was predictive of cardiac events following PLD, and RCRI had better discriminative ability to predict MI and cardiac arrest requiring CPR than did ASA status. Consideration of the RCRI as a component of preoperative surgical risk stratification can minimize patient morbidity and mortality. Studies such as this can allow for implementation of guidelines that better estimate the preoperative risk profile of surgical patients. LEVEL OF EVIDENCE: 3.
PMID: 30005044
ISSN: 1528-1159
CID: 4491392
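
The discrimination analysis in the entry above (summarizing how well the ordinal RCRI score separates patients with and without a cardiac event via the area under the ROC curve) can be illustrated with a short sketch. The data below are synthetic, not ACS-NSQIP records, and the event rates are invented for illustration.

```python
# Sketch of ROC-based discrimination: treat the ordinal RCRI class as the
# predictor and compute AUC for a binary cardiac outcome (here, MI).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
rcri = rng.integers(0, 4, size=1000)                 # RCRI class 0-3 per patient
# Simulate rarer events at low RCRI and more frequent events at high RCRI.
event_prob = np.array([0.002, 0.006, 0.02, 0.05])[rcri]
mi = rng.random(1000) < event_prob

print("AUC for MI:", roc_auc_score(mi, rcri))
```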

An attention based deep learning model of clinical events in the intensive care unit

Kaji, Deepak A; Zech, John R; Kim, Jun S; Cho, Samuel K; Dangayach, Neha S; Costa, Anthony B; Oermann, Eric K
This study trained long short-term memory (LSTM) recurrent neural networks (RNNs) incorporating an attention mechanism to predict daily sepsis, myocardial infarction (MI), and vancomycin antibiotic administration over two-week patient ICU courses in the MIMIC-III dataset. These models achieved next-day predictive AUCs of 0.876 for sepsis, 0.823 for MI, and 0.833 for vancomycin administration. Attention maps built from these models highlighted the times when input variables most influenced predictions and could provide a degree of interpretability to clinicians. These models appeared to attend to variables that were proxies for clinician decision-making, demonstrating a challenge of using flexible deep learning approaches trained with EHR data to build clinical decision support. While continued development and refinement are needed, we believe that such models could one day prove useful in reducing information overload for ICU physicians by providing needed clinical decision support for a variety of clinically important tasks.
PMCID:6373907
PMID: 30759094
ISSN: 1932-6203
CID: 4491462
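
A rough sketch, not the authors' code, of an LSTM with a simple additive attention layer over the ICU time axis of the kind described in the entry above: it produces a next-day binary prediction together with per-timestep attention weights that could be visualized as attention maps. The architecture, layer sizes, and use of PyTorch are assumptions.

```python
# Minimal sketch of an attention-over-time LSTM for next-day binary prediction.
import torch
import torch.nn as nn

class AttentionLSTM(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)   # one attention score per timestep
        self.out = nn.Linear(hidden, 1)    # binary outcome head

    def forward(self, x):
        h, _ = self.lstm(x)                            # (batch, time, hidden)
        weights = torch.softmax(self.attn(h), dim=1)   # normalize over time
        context = (weights * h).sum(dim=1)             # attention-weighted summary
        return torch.sigmoid(self.out(context)), weights.squeeze(-1)

# 14-day window of 32 charted variables for a batch of 8 patient-days.
x = torch.randn(8, 14, 32)
model = AttentionLSTM(n_features=32)
prob, attention = model(x)
print(prob.shape, attention.shape)   # torch.Size([8, 1]) torch.Size([8, 14])
```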