Searched for: person:oermae01 in-biosketch:true
Total Results: 101


Intraoperative brain tumour identification with deep learning [Comment]

Martini, Michael L; Oermann, Eric K
PMID: 32099093
ISSN: 1759-4782
CID: 4491552

Sensor Modalities for Brain-Computer Interface Technology: A Comprehensive Literature Review

Martini, Michael L; Oermann, Eric Karl; Opie, Nicholas L; Panov, Fedor; Oxley, Thomas; Yaeger, Kurt
Brain-computer interface (BCI) technology is rapidly developing and changing the paradigm of neurorestoration by linking cortical activity with control of an external effector to provide patients with tangible improvements in their ability to interact with the environment. The sensor component of a BCI circuit dictates the resolution of brain pattern recognition and therefore plays an integral role in the technology. Several sensor modalities are currently in use for BCI applications and are broadly either electrode-based or functional neuroimaging-based. Sensors vary in their inherent spatial and temporal resolutions, as well as in practical aspects such as invasiveness, portability, and maintenance. Hybrid BCI systems with multimodal sensory inputs represent a promising development in the field, allowing for complementary function. Artificial intelligence and deep learning algorithms have been applied to BCI systems to achieve faster and more accurate classification of sensory input and improve user performance in various tasks. Neurofeedback is an important advancement in the field that has been implemented in several types of BCI systems by showing users a real-time display of their recorded brain activity during a task to facilitate their control over their own cortical activity. In this way, neurofeedback has improved BCI classification and enhanced user control over BCI output. Taken together, BCI systems have progressed significantly in recent years in terms of accuracy, speed, and communication. Understanding the sensory components of a BCI is essential for neurosurgeons and clinicians as they help advance this technology in the clinical setting.
PMID: 31361011
ISSN: 1524-4040
CID: 4491512

Big Data Defined: A Practical Review for Neurosurgeons

Bydon, Mohamad; Schirmer, Clemens M; Oermann, Eric K; Kitagawa, Ryan S; Pouratian, Nader; Davies, Jason; Sharan, Ashwini; Chambless, Lola B
BACKGROUND: Modern science and healthcare generate vast amounts of data, and, coupled with increasingly inexpensive and accessible computing, a tremendous opportunity exists to use these data to improve care. A better understanding of data science and its relationship to neurosurgical practice will be increasingly important as we transition into this modern "big data" era. METHODS: A review of the literature was performed for key articles referencing big data for neurosurgical care or related topics. RESULTS: In the present report, we first defined the nature and scope of data science from a technical perspective. We then discussed its relationship to modern neurosurgical practice, highlighting key references, which might form a useful introductory reading list. CONCLUSIONS: Numerous challenges exist going forward; however, organized neurosurgery has an important role in fostering and facilitating these efforts to merge data science with neurosurgical practice.
PMID: 31562965
ISSN: 1878-8769
CID: 4491542

Deep Learning and Neurology: A Systematic Review

Valliani, Aly Al-Amyn; Ranti, Daniel; Oermann, Eric Karl
Deciphering the massive volume of complex electronic data compiled by hospital systems over the past decades has the potential to revolutionize modern medicine, but it also presents significant challenges. Deep learning is uniquely suited to address these challenges, and recent advances in techniques and hardware have poised the field of medical machine learning for transformational growth. The clinical neurosciences are particularly well positioned to benefit from these advances given the subtle presentation of symptoms typical of neurologic disease. Here we review the various domains in which deep learning algorithms have already provided impetus for change: medical image analysis for the improved diagnosis of Alzheimer's disease and the early detection of acute neurologic events; medical image segmentation for quantitative evaluation of neuroanatomy and vasculature; connectome mapping for the diagnosis of Alzheimer's, autism spectrum disorder, and attention deficit hyperactivity disorder; and mining of microscopic electroencephalogram signals and granular genetic signatures. We additionally note important challenges in the integration of deep learning tools in the clinical setting and discuss the barriers to tackling the challenges that currently exist.
PMCID:6858915
PMID: 31435868
ISSN: 2193-8253
CID: 4491522

Overlapping Surgeries and Surgical Prudence

Oermann, Eric Karl; Gologorsky, Yakov
PMID: 31132482
ISSN: 1878-8769
CID: 4491472

Time on Therapy for at Least Three Months Correlates with Overall Survival in Metastatic Renal Cell Carcinoma

Chen, Viola J; Hernandez-Meza, Gabriela; Agrawal, Prashasti; Zhang, Chiyuan A; Xie, Lijia; Gong, Cynthia L; Hoerner, Christian R; Srinivas, Sandy; Oermann, Eric K; Fan, Alice C
With 15 drugs currently approved for the treatment of metastatic renal cell carcinoma (mRCC) and even more combination regimens with immunotherapy on the horizon, there remains a distinct lack of molecular biomarkers for therapeutic efficacy. Our study reports on real-world clinical outcomes of mRCC patients from a tertiary academic medical center treated with empirically selected standard-of-care therapy. We utilized the Stanford Renal Cell Carcinoma Database (RCCD) to report on various outcome measures, including overall survival (OS) and the median number of lines of targeted therapies received from the time of metastatic diagnosis. We found that most metastatic patients did not survive long enough to attempt even half of the available targeted therapies. We also noted that patients who failed to receive a clinical benefit, defined as remaining on a given line of therapy for three months or longer, within the first two lines of therapy could still go on to experience clinical benefit in later lines of therapy. Moreover, patients with clinical benefit in at least one line of therapy experienced significantly longer OS compared to those who did not. Developing biomarkers that identify patients who will receive clinical benefit in individual lines of therapy is one potential strategy for achieving rational drug sequencing in mRCC.
PMCID:6678132
PMID: 31319594
ISSN: 2072-6694
CID: 4491502

Machine learning for semi-automated classification of glioblastoma, brain metastasis and central nervous system lymphoma using magnetic resonance advanced imaging

Swinburne, Nathaniel C; Schefflein, Javin; Sakai, Yu; Oermann, Eric Karl; Titano, Joseph J; Chen, Iris; Tadayon, Sayedhedayatollah; Aggarwal, Amit; Doshi, Amish; Nael, Kambiz
Background: Differentiating glioblastoma, brain metastasis, and central nervous system lymphoma (CNSL) on conventional magnetic resonance imaging (MRI) can present a diagnostic dilemma due to the potential for overlapping imaging features. We investigate whether machine learning evaluation of multimodal MRI can reliably differentiate these entities. Methods: Preoperative brain MRI including diffusion weighted imaging (DWI), dynamic contrast enhanced (DCE), and dynamic susceptibility contrast (DSC) perfusion in patients with glioblastoma, lymphoma, or metastasis were retrospectively reviewed. Perfusion maps (rCBV, rCBF), permeability maps (K-trans, Kep, Vp, Ve), ADC, T1C+ and T2/FLAIR images were coregistered, and two separate volumes of interest (VOIs) were obtained from the enhancing tumor and non-enhancing T2 hyperintense (NET2) regions. The tumor volumes obtained from these VOIs were utilized for supervised training of support vector classifier (SVC) and multilayer perceptron (MLP) models. Validation of the trained models was performed on unlabeled cases using the leave-one-subject-out method. Head-to-head and multiclass models were created. Accuracies of the multiclass models were compared against two human interpreters reviewing conventional and diffusion-weighted MR images. Results: Twenty-six patients with histopathologically proven glioblastoma (n=9), metastasis (n=9), and CNS lymphoma (n=8) were included. The trained multiclass ML models discriminated the three pathologic classes with a maximum accuracy of 69.2% (18 of 26; kappa 0.540, P=0.01) using an MLP trained with the VpNET2 tumor volumes. Human readers achieved 65.4% (17 of 26) and 80.8% (21 of 26) accuracies, respectively. Using the MLP VpNET2 model as a computer-aided diagnosis (CADx) tool for cases in which the human reviewers disagreed with each other on the diagnosis resulted in correct diagnoses in 5 (19.2%) additional cases.
Conclusions: Our trained multiclass MLP using VpNET2 can differentiate glioblastoma, brain metastasis, and CNS lymphoma with modest diagnostic accuracy and provides an approximately 19% increase in diagnostic yield when added to routine human interpretation.
PMCID:6603356
PMID: 31317002
ISSN: 2305-5839
CID: 4491482
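The leave-one-subject-out validation described in this abstract can be illustrated with a toy sketch: each case is held out once, a model is fit on the remaining cases, and the held-out case is scored. The classifier below is a simple nearest-centroid rule standing in for the paper's SVC/MLP models, and all function names and data are hypothetical.

```python
import statistics

def predict(train, x):
    # Nearest-centroid stand-in classifier: average the feature
    # vectors of each class, then assign x to the closest centroid.
    by_class = {}
    for feats, label in train:
        by_class.setdefault(label, []).append(feats)
    centroids = {label: [statistics.mean(col) for col in zip(*vecs)]
                 for label, vecs in by_class.items()}
    def sq_dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda label: sq_dist(centroids[label], x))

def leave_one_subject_out_accuracy(samples):
    # Hold each (features, label) pair out once, train on the rest,
    # and report the fraction of held-out cases classified correctly.
    hits = sum(predict(samples[:i] + samples[i + 1:], x) == y
               for i, (x, y) in enumerate(samples))
    return hits / len(samples)
```

With small cohorts like the 26 patients here, this scheme uses every case for both training and evaluation while never scoring a case the model was fit on.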

Detecting insertion, substitution, and deletion errors in radiology reports using neural sequence-to-sequence models

Zech, John; Forde, Jessica; Titano, Joseph J; Kaji, Deepak; Costa, Anthony; Oermann, Eric Karl
Background: Errors in grammar, spelling, and usage in radiology reports are common. To automatically detect inappropriate insertions, deletions, and substitutions of words in radiology reports, we proposed using a neural sequence-to-sequence (seq2seq) model. Methods: Head CT and chest radiograph reports from Mount Sinai Hospital (MSH) (n=61,722 and 818,978, respectively), Mount Sinai Queens (MSQ) (n=30,145 and 194,309, respectively) and MIMIC-III (n=32,259 and 54,685) were converted into sentences. Insertions, substitutions, and deletions of words were randomly introduced. Seq2seq models were trained using corrupted sentences as input to predict the original uncorrupted sentences. Three models were trained using head CTs from MSH, chest radiographs from MSH, and head CTs from all three collections. Model performance was assessed across different sites and modalities. A sample of original, uncorrupted sentences was manually reviewed for any error in syntax, usage, or spelling to estimate real-world proofreading performance of the algorithm. Results: Seq2seq detected 90.3% and 88.2% of corrupted sentences with 97.7% and 98.8% specificity in same-site, same-modality test sets for head CTs and chest radiographs, respectively. Manual review of original, uncorrupted same-site same-modality head CT sentences demonstrated seq2seq positive predictive value (PPV) 0.393 (157/400; 95% CI, 0.346-0.441) and negative predictive value (NPV) 0.986 (789/800; 95% CI, 0.976-0.992) for detecting sentences containing real-world errors, with estimated sensitivity of 0.389 (95% CI, 0.267-0.542) and specificity 0.986 (95% CI, 0.985-0.987) over n=86,211 uncorrupted training examples. Conclusions: Seq2seq models can be highly effective at detecting erroneous insertions, deletions, and substitutions of words in radiology reports. To achieve high performance, these models require site- and modality-specific training examples. Incorporating additional targeted training data could further improve performance in detecting real-world errors in reports.
PMCID:6603352
PMID: 31317003
ISSN: 2305-5839
CID: 4491492
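The corruption step in this abstract's Methods (randomly introducing insertions, substitutions, and deletions to build training pairs) can be sketched as follows; the function name, vocabulary, and corruption probability are illustrative assumptions, not taken from the paper.

```python
import random

def corrupt(tokens, vocab, rng, p=0.1):
    # Build a noisy copy of a sentence: with total probability p per
    # token, delete it or substitute a random vocabulary word, and
    # independently insert a random word after it. The (corrupted,
    # original) pair then serves as seq2seq input and target.
    out = []
    for tok in tokens:
        r = rng.random()
        if r < p / 3:
            continue                       # deletion
        elif r < 2 * p / 3:
            out.append(rng.choice(vocab))  # substitution
        else:
            out.append(tok)                # keep token unchanged
        if rng.random() < p / 3:
            out.append(rng.choice(vocab))  # insertion
    return out
```

Training on such synthetic pairs lets the model learn to map a corrupted sentence back to its clean form, which is how detection of real-world report errors is then estimated.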

Artificial Intelligence in Clinical Neurosciences

Oermann, Eric Karl; Gologorsky, Yakov
PMID: 31546319
ISSN: 1878-8769
CID: 4491532

CANDI: an R package and Shiny app for annotating radiographs and evaluating computer-aided diagnosis

Badgeley, Marcus A; Liu, Manway; Glicksberg, Benjamin S; Shervey, Mark; Zech, John; Shameer, Khader; Lehar, Joseph; Oermann, Eric K; McConnell, Michael V; Snyder, Thomas M; Dudley, Joel T
MOTIVATION: Radiologists have used algorithms for Computer-Aided Diagnosis (CAD) for decades. These algorithms use machine learning with engineered features, and there have been mixed findings on whether they improve radiologists' interpretations. Deep learning offers superior performance but requires more training data and has not been evaluated in joint algorithm-radiologist decision systems. RESULTS: We developed the Computer-Aided Note and Diagnosis Interface (CANDI) for collaboratively annotating radiographs and evaluating how algorithms alter human interpretation. The annotation app collects classification, segmentation, and image captioning training data, and the evaluation app randomizes the availability of CAD tools to facilitate clinical trials on radiologist enhancement. AVAILABILITY AND IMPLEMENTATION: Demonstrations and source code are hosted at https://candi.nextgenhealthcare.org and https://github.com/mbadge/candi, respectively, under the GPL-3 license. SUPPLEMENTARY INFORMATION: Supplementary material is available at Bioinformatics online.
PMCID:6499410
PMID: 30304439
ISSN: 1367-4811
CID: 4491422