Searched for: in-biosketch:yes
person:wollsc01
Identifying OCT Parameters to Predict Glaucoma Visual Field Progression [Meeting Abstract]
Cobbs, Lucy; Ramos-Cadena, Maria de los Angeles; Wu, Mengfei; Liu, Mengling; Ishikawa, Hiroshi; Wollstein, Gadi; Schuman, Joel S.
ISI:000554495704047
ISSN: 0146-0404
CID: 5524302
Using deep learning methods to develop a novel predictive glaucoma progression model [Meeting Abstract]
Lin, A; Fenyo, D; Schuman, J S; Wollstein, G; Ishikawa, H
Purpose : To develop a novel glaucoma progression model with deep learning methods incorporating four major glaucoma biomarkers: VFI, MD, cRNFL and GCIPL. Methods : 1023 eyes from 596 glaucoma/glaucoma-suspect patients were included from the clinic. Two types of deep learning (DL) models were developed using Keras: an artificial neural network (ANN) and a recurrent neural network (RNN). The ANN contained five fully-connected (FC) layers, with a leaky rectified linear unit activation function and a dropout layer with a rate of 0.2. The RNN contained two long short-term memory layers, followed by a FC layer and a dropout layer with a rate of 0.2. Both models were trained to predict four major clinical biomarkers for glaucoma: visual field index (VFI), mean deviation (MD), circumpapillary retinal nerve fiber layer (cRNFL) thickness, and ganglion cell inner plexiform layer (GCIPL) thickness. The models were trained using the first three visits to predict the fourth one year later. Train/validation/test splits were 65%/15%/20%. A linear regression (LR) model was trained and evaluated on the same data for baseline comparison. Agreement between the actual and predicted values was measured by mean absolute error (MAE). Statistical testing of each biomarker between the DL models and the LR model was performed by paired Wilcoxon rank sum test. Results : The mean patient age was 62.4 ± 12.9 years. Baseline means were cRNFL 76.9 ± 13.4 μm, GCIPL 70.3 ± 9.9 μm, VFI 90.3 ± 17.8%, and MD -3.76 ± 6.13 dB. The table shows the MAE between the actual and predicted values of each of the four biomarkers across all three models. The ANN and RNN models showed statistically significantly smaller MAE compared to the LR model. In particular, the ANN model had the lowest MAE and was able to predict all four biomarkers significantly better than the LR model. 
Conclusions : By harnessing the power of deep learning, we were able to accurately predict future values of both structural and functional measures of glaucomatous change one year later. This is possible as neural networks are able to recognize the intricate interplay between structural and functional changes in glaucoma that otherwise cannot be well captured in a conventional linear regression model
EMBASE:632698568
ISSN: 1552-5783
CID: 4584792
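The abstract above specifies the ANN only by depth (five FC layers), activation (leaky ReLU), and dropout rate (0.2). A minimal numpy sketch of such a forward pass is below; the hidden-layer widths and weight initialization are illustrative assumptions, not values from the abstract, and the input is the three past visits of the four biomarkers flattened to 12 features.

```python
import numpy as np

rng = np.random.default_rng(0)

def leaky_relu(x, alpha=0.01):
    """Leaky rectified linear unit, as named in the abstract."""
    return np.where(x > 0, x, alpha * x)

def dropout(x, rate=0.2, train=False):
    """Dropout with the abstract's rate of 0.2 (identity at inference)."""
    if not train:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

# Input: 3 visits x 4 biomarkers (VFI, MD, cRNFL, GCIPL) flattened to 12
# features. Hidden widths are assumptions; the abstract gives only 5 FC layers.
sizes = [12, 64, 64, 32, 16, 4]
weights = [rng.normal(0, 0.1, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]

def forward(x, train=False):
    for i, (W, b) in enumerate(zip(weights, biases)):
        x = x @ W + b
        if i < len(weights) - 1:          # no activation on the output layer
            x = dropout(leaky_relu(x), train=train)
    return x

visits = rng.normal(0, 1, (1, 12))        # one eye: 3 visits of 4 biomarkers
pred = forward(visits)                    # predicted 4th-visit biomarkers
print(pred.shape)                         # (1, 4)
```

In the study itself the model was built in Keras; the same architecture would be a `Sequential` stack of `Dense` layers with `LeakyReLU` activations and a `Dropout(0.2)` layer.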
Predicting macular progression map using deep learning [Meeting Abstract]
Chen, Z; Wang, Y; Ramos-Cadena, M de los Angeles; Wollstein, G; Schuman, J S; Ishikawa, H
Purpose : Optical coherence tomography (OCT) two dimensional (2D) ganglion cell inner plexiform layer (GCIPL) thickness maps often reveal subtle abnormalities that might be washed out with summarized parameters (global or sectoral measurements). Also, the spatial pattern of GCIPL provides useful information for understanding the extent and magnitude of localized damage. The purpose of this study was to predict the next-visit 2D GCIPL thickness map based on the current and past GCIPL thickness maps. Methods : 346 glaucomatous eyes (191 subjects) with at least 5 visits with OCT tests were included in the study. GCIPL thickness maps were obtained using a clinical OCT (Cirrus HD-OCT, Zeiss, Dublin, CA; software version 9.5.1.13585; 200x200 macular cube scan). Since 83.2% of subjects were stable (average GCIPL change < 2 μm per year), we simulated progressing cases for a diffuse damage pattern and a hemifield damage pattern (superior vs. inferior hemifield damage was 50:50) (Figure 1 (c) and (d)). A deep learning based method, time-aware convolutional long short-term memory (TC-LSTM), was developed to handle irregular time intervals of longitudinal GCIPL thickness maps and predict the 5th GCIPL thickness map from the past 4 tests. The TC-LSTM model was compared with a conventional linear regression (LR) analysis. Mean square error (MSE, normalized to pixel intensity) and peak signal to noise ratio (PSNR) between predicted maps and ground truth maps were used to quantify the prediction quality (lower MSE and higher PSNR indicate better results). The Wilcoxon signed-rank test was used to compare TC-LSTM results and LR results. Results : TC-LSTM achieved lower MSE and higher PSNR compared to the LR model (MSE 0.00049 vs. 0.00061, p<0.001, and PSNR 34.45 vs. 32.52 dB, p=0.035). Subjective evaluation by 3 expert ophthalmologists showed that the TC-LSTM model had closer representations of the ground truth maps than the LR model (Table 1, Figure 1). 
Conclusions : The next-visit GCIPL thickness maps were successfully generated using TC-LSTM with higher accuracy compared to the LR model both quantitatively and subjectively
EMBASE:632694547
ISSN: 1552-5783
CID: 4586172
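The abstract above scores predicted thickness maps with MSE on normalized pixel intensities and PSNR in dB. A minimal numpy sketch of those two metrics, on synthetic stand-in maps rather than real GCIPL data:

```python
import numpy as np

def mse(pred, truth):
    """Mean squared error, with maps normalized to [0, 1] pixel intensity."""
    return float(np.mean((pred - truth) ** 2))

def psnr(pred, truth, peak=1.0):
    """Peak signal-to-noise ratio in dB for intensity range [0, peak]."""
    return float(10 * np.log10(peak ** 2 / mse(pred, truth)))

rng = np.random.default_rng(1)
truth = rng.random((64, 64))                        # stand-in thickness map
pred = np.clip(truth + rng.normal(0, 0.02, truth.shape), 0, 1)

print(round(mse(pred, truth), 5))
print(round(psnr(pred, truth), 2))
```

An MSE near the abstract's 0.00049 corresponds to a PSNR in the low-to-mid 30s of dB, which matches the reported 34.45 dB.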
Deep learning network for Glaucoma detection at 40 million voxels [Meeting Abstract]
Antony, B J; Ishikawa, H; Wollstein, G; Schuman, J S; Garnavi, R
Purpose : Current GPU memory limitations do not support the analysis of OCT scans at their original resolution, and previous techniques have downsampled the inputs considerably, which resulted in a loss of detail. Here, we utilise a new memory management support framework that allows for the training of large deep learning networks and apply it to the detection of glaucoma in OCT scans at their original resolution. Methods : A total of 1110 SDOCT volumes (Cirrus, Zeiss, CA) were acquired from both eyes of 624 subjects (139 healthy and 485 glaucomatous patients (POAG)). A convolutional neural network (CNN) consisting of 8 3D-convolutional layers with a total of 600K parameters was trained using a cross-entropy loss to differentiate between the healthy and glaucomatous scans. To avoid GPU memory constraints, the network was trained using a large model support library that automatically adds swap-in and swap-out nodes for transferring tensors from GPUs to the host and vice versa. This allowed for the OCT scans to be analysed at the original resolution of 200x200x1024. The performance of the network was gauged by computing the area under the receiver operating characteristic (AUC) curve. The performance of this network was also compared to a previously proposed network that ingested downsampled OCT scans (50x50x128), consisted of 5 3D-convolutional layers and had a total of 222K parameters; and a machine-learning technique (random forests) that relied on segmented features (peripapillary nerve fibre thicknesses). Class activation maps (CAM) were also generated for each of these networks to provide a qualitative view of the regions that the network deemed as important and relevant to the task. Results : The AUCs computed on the test set for the networks that analysed the volumes at the original and downsampled resolutions were found to be 0.92 and 0.91, respectively. 
The CAMs obtained using the high resolution images showed more detail in comparison to those from the downsampled volumes. The random forest technique showed an AUC of 0.85. Conclusions : The performance of the two networks was comparable for glaucoma detection but showed a vast improvement over the random forest that relied on segmented features. The ability to retain detail (as shown in the CAM) will likely allow for improvements in other tasks, such as spatial correspondences between visual field test locations and retinal structure
EMBASE:632694500
ISSN: 1552-5783
CID: 4586182
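The title's "40 million voxels" and the need for tensor swapping in the abstract above follow directly from the scan geometry. A quick worked calculation of the input sizes involved (a single float32 copy of the volume; training activations multiply this many times over, which is what exhausts GPU memory):

```python
# Voxel counts for one OCT volume at each resolution used in the abstract.
full = 200 * 200 * 1024          # original resolution: ~41M voxels
small = 50 * 50 * 128            # downsampled input of the earlier network

bytes_full = full * 4            # float32 storage for one volume
print(full, small)               # 40960000 320000
print(bytes_full / 2**20)        # 156.25 MiB per full-resolution volume
print(full // small)             # 128x more voxels at full resolution
```

The 128-fold difference in input size is why the earlier 5-layer network downsampled, and why the large model support library (swapping tensors between GPU and host memory) was needed to train on the original resolution.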
Early changes in basal cerebral blood flow and GABAergic activity in the visual cortex of glaucoma patients [Meeting Abstract]
Chen, A M; Bang, J W; Parra, C; Wollstein, G; Schuman, J S; Chan, K C
Purpose : Recent studies have indicated reduced blood flow in not only the eye but also the brain in patients with late glaucoma (LG). In contrast, patients with early glaucoma (EG) appear to show increased ocular blood flow, but little is known about their corresponding brain changes and their specific pathology. This study utilized non-invasive functional and molecular imaging biomarkers to determine cerebral blood flow (CBF) and neurochemical changes in the visual cortex of EG and LG patients. Methods : Four EG (age=67.00 ± 5.26 years; 2F), 6 LG (age=65.33 ± 2.75 years; 1F), and 5 healthy controls (age=63.00 ± 3.11 years; 1F) underwent pseudo-continuous arterial spin labeling (pCASL) functional MRI and MEGA-PRESS magnetic resonance spectroscopy (MRS) at rest using a 3-Tesla MRI scanner. Basal CBF was measured from pCASL in the visual and motor cortices (Figure 1a). For MRS, the level of gamma-aminobutyric acid (GABA) in the visual cortex was quantified through the LCModel software (Figure 2a), and normalized over the N-acetyl aspartate and N-acetyl aspartyl glutamic acid complex (NAA+NAAG) to account for systematic fluctuations following LCModel guidelines. Results : Basal CBF in the white matter (WM) of the visual cortex was significantly higher for EG compared to LG (p=0.021) and controls (p=0.045), whereas basal CBF in the gray matter (GM) of the visual cortex was significantly higher for EG compared to LG (p=0.042) (Figure 1b). No apparent CBF difference was found within the motor cortex across groups (p>0.05). For MRS, normalized GABA levels appeared lower in EG than in controls (p=0.021), while LG had a trending decrease compared to controls (p=0.092) (Figure 2b). Within the glaucoma groups, we also found a negative association between basal CBF and normalized GABA levels in both WM (p=0.038) and GM (p=0.039) (Figures 2c-d). 
Conclusions : The elevated basal CBF and lower baseline GABA levels in the visual cortex of EG suggest that vascular autoregulation dysfunction and/or neurochemical adaptation may be occurring in the brain's visual system apart from the eye during the initial phases of glaucoma pathogenesis. Within glaucoma groups, the inverse correlations demonstrated between basal CBF and baseline GABA levels may also offer a quantitative framework for interrogating inhibitory GABAergic activity and hemodynamic reactivity relationships in the glaucomatous brain during disease progression
EMBASE:632697937
ISSN: 1552-5783
CID: 4584822
Estimating visual field progression rates of glaucoma patients using estimates derived from OCT scans [Meeting Abstract]
Yu, H -H; Antony, B J; Ishikawa, H; Wollstein, G; Schuman, J S; Garnavi, R
Purpose : To develop a method for monitoring the functional deterioration of glaucoma patients using structural surrogates, we used machine learning algorithms to estimate visual field index (VFI) from OCT scans, and evaluated the accuracy of the progression rates calculated from the estimated VFI. Methods : Macular and ONH SDOCT scans (Cirrus HD-OCT, Zeiss, Dublin, CA; 200x200x1024 samplings over 6x6x2mm, downsampled to 64x64x128 voxels) were acquired from both eyes of 1,678 healthy participants, glaucoma suspects, and glaucoma patients over multiple visits (range: 1-14, median=3), forming a dataset of 10,172 pairs of macular+ONH scans. Automated perimetry (Humphrey visual field, SITA 24-2) tests were administered at each visit. Two models were trained to estimate the measured VFI from a pair of macular and ONH scans: the first ("classic model") was a non-linear regression model (multi-layer perceptron) based on 47 thickness measures of retinal layers, while the other ("CNN") was a 5-layer convolutional neural network, trained to learn 3D features in the OCT scans. For both models, MSE was minimized in 5-fold cross-validation, using 80%:10%:10% of the dataset as training, validation and test sets. Data from the same participant were not split across the three sets. For data in the test sets, VFIs for eyes with at least N = 3, 4, or 5 visits were estimated for individual visits, and the slopes were calculated using linear regression across N consecutive visits. Median absolute error (MAE) was used to quantify estimation accuracy. Results : For estimating VFI at single visits, the CNN achieved significantly lower MAE (2.6 ± 0.28; mean and s.d.) than the classic model (2.9 ± 0.45). For estimating slopes across 5 visits, the MAE of the CNN (0.73 ± 0.12/year) was also lower than the classic model (0.82 ± 0.23/year). The errors depended on the measured VFI of the first visit, and on the true slope (Fig. 1). 
Increasing the number of visits decreased the errors (N = 3-6; MAE = 1.38/yr, 0.99/yr, 0.73/yr, and 0.63/yr). Conclusions : The feature-agnostic CNN was better at estimating VFI and visual field progression rates than the regression method based on thickness measures. Structure-to-function estimation using neural networks is a promising method for monitoring the visual functions of glaucoma patients
EMBASE:632697926
ISSN: 1552-5783
CID: 4586052
Can clock hour OCT retinal nerve fiber layer (RNFL) thickness measurements outperform global mean RNFL for glaucoma diagnosis? [Meeting Abstract]
Wu, M; Liu, M; Schuman, J S; Ishikawa, H; Wollstein, G
Purpose : To compare the discrimination accuracy for glaucoma diagnosis of OCT RNFL clock hour measurements with that of average RNFL. Methods : In a large, ongoing, longitudinal cohort of healthy subjects and subjects with glaucoma, all subjects underwent visual field (VF) and OCT testing. Principal component (PC) analysis was used to reduce the dimensionality of the clock hour measurements while retaining maximal variance for diagnostic performance. The first four PCs with linear regression were used as predictors of VF mean deviation (MD) and to classify glaucoma diagnosis. The prediction accuracy and discrimination power using cross validation were compared to the models using only average RNFL as a predictor. All models were adjusted for age, signal strength, and intra-subject correlation. Results : 1317 healthy and glaucomatous eyes (717 subjects) were included in the study. The PC analysis was built on 9 clock hours after excluding non-informative sectors (clock hours 3, 4, and 9). The first PC explained 51% of the total variance, and the first four PCs explained 82% of the total variance and thus were used for subsequent regression models. A PC regression for glaucoma discrimination showed that clock hours 1, 5, 6, 7, 10, 11, 12 were significantly associated with diagnosis. The PC model showed better glaucoma diagnosis performance compared to average RNFL, with 10-fold cross-validation AUCs of 0.898 and 0.877, respectively (p<0.001). The PC regression for MD improved the model fit, measured by R2, by 9% compared to a regression using average RNFL. PC regression showed that clock hours 2, 5, 6, 7, 10, 11, 12 were significantly associated with MD. Conclusions : Using PCs with RNFL clock hours improved classification performance for glaucoma diagnosis and model fit for MD, compared to using average RNFL. This method improves discrimination performance by both considering all sectoral RNFL information and removing locations with low diagnostic yield
EMBASE:632694154
ISSN: 1552-5783
CID: 4584932
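The dimensionality-reduction step in the abstract above (PCA on the 9 informative clock hour thicknesses, keeping the first four components) can be sketched with a plain eigendecomposition of the covariance matrix. The data below are synthetic stand-ins, not the study cohort:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in: 9 informative clock-hour RNFL thicknesses for 200 eyes
# (clock hours 3, 4 and 9 already excluded, as in the abstract).
latent = rng.normal(0, 10, (200, 2))               # shared damage patterns
loadings = rng.normal(0, 1, (2, 9))
X = 80 + latent @ loadings + rng.normal(0, 3, (200, 9))

# PCA via eigendecomposition of the covariance matrix.
Xc = X - X.mean(axis=0)
evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(evals)[::-1]                    # sort descending
evals, evecs = evals[order], evecs[:, order]

explained = evals / evals.sum()
pcs = Xc @ evecs[:, :4]                            # first four PC scores
print(pcs.shape)                                   # (200, 4)
print(round(float(explained[:4].sum()), 2))        # variance explained by 4 PCs
```

The PC scores would then feed a regression (adjusted for age, signal strength, and intra-subject correlation, per the abstract) in place of the raw clock hour values.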
Understanding deep learning decision for glaucoma detection using 3D volumes [Meeting Abstract]
George, Y M; Antony, B J; Ishikawa, H; Wollstein, G; Schuman, J S; Garnavi, R
Purpose : Gradient class activation maps (grad-CAM) generated by convolutional neural networks (CNN) have qualitatively indicated that these networks are able to identify important regions in OCT scans. Here, we quantitatively analyse these regions to improve our understanding of the CNN decision making process when detecting glaucoma in OCT volumes. Methods : A total of 1110 OCT (Cirrus HD-OCT, Zeiss, Dublin, CA) scans from both eyes of 624 subjects (139 healthy and 485 glaucomatous patients (POAG)) were included. An end-to-end 3D-CNN network was trained directly on 3D-volumes for glaucoma detection. Grad-CAM was implemented to highlight structures in the volumes that the network relied on. Grad-CAM heatmaps were generated for 3 different convolutional layers and quantitatively validated by occluding the regions with the highest grad-CAM weights (12.5% of the original input volumes) and then evaluating the performance drop. Further, an 8-layer retinal segmentation method was used to compute the average heatmap weights for each segmented layer separately, and used to identify the layers that were deemed as important for the task. Results : The model achieved an AUC of 0.97 for the test set (110 scans). Occlusion resulted in a 40% drop in performance (Fig.1). The RNFL and photoreceptors showed the highest median weights for grad-CAM heatmaps (0.1 and 0.2, respectively). The retinal pigment epithelium (RPE) and photoreceptors showed higher weights in the glaucomatous scans (Fig.2-a). The RNFL had a wider range of weights in healthy cases than in POAG ones. Analysis of the B-scans showed that the central B-scans around the optic disc (#85-135) had the highest contribution to the network decision, and the heatmap weights were much higher in glaucoma cases than healthy ones across all B-scans (Fig.2-b). Conclusions : The occlusion experiment indicates that the regions identified by the grad-CAMs are in fact pertinent to the glaucoma detection task. 
The increased emphasis on the photoreceptors in the glaucoma cases may be attributed to the atrophy in the superficial layers, which in turn increased the brightness of this structure. This technique can be used to identify new biomarkers learned for other ocular diseases
EMBASE:632694999
ISSN: 1552-5783
CID: 4586162
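The validation step in the abstract above zeroes out the 12.5% of voxels carrying the highest grad-CAM weights and re-evaluates the network. A minimal numpy sketch of that occlusion step, on a random stand-in volume and heatmap rather than real OCT data and real grad-CAM output:

```python
import numpy as np

def occlude_top(volume, heatmap, fraction=0.125):
    """Zero out the voxels with the highest heatmap weights
    (12.5% of the input, as in the abstract's occlusion test)."""
    k = int(fraction * volume.size)
    thresh = np.partition(heatmap.ravel(), -k)[-k]   # k-th largest weight
    out = volume.copy()
    out[heatmap >= thresh] = 0.0
    return out

rng = np.random.default_rng(3)
vol = rng.random((20, 20, 20))            # stand-in OCT volume
heat = rng.random((20, 20, 20))           # stand-in grad-CAM weight map

occluded = occlude_top(vol, heat)
frac = float(np.mean(occluded == 0.0))
print(round(frac, 3))                     # ~0.125 of voxels zeroed
```

Feeding the occluded volume back through the classifier and comparing AUCs (0.97 before vs. the 40% drop reported) is what turns the qualitative heatmap into a quantitative check.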
Sensory integration abilities for balance in glaucoma [Meeting Abstract]
Cham, R; Redfern, M S; O'Connell, C; Conner, I P; Wollstein, G; Chan, K C
Purpose : Falls risk increases with glaucoma. The inability to see obstacles such as steps or stairs is one mechanism of falls. Another potential mechanism is reduced postural control. The impact of glaucoma on the ability to centrally integrate sensory information relevant for balance has not been systematically investigated. The goal of this study is to assess the influence of glaucoma severity on sensory integration abilities for balance. Methods : Eleven adults diagnosed with glaucoma were recruited. Glaucoma severity was determined using two measures: (1) a functional measure, specifically visual field mean deviation (MD) assessed by automated Humphrey perimetry and (2) a structural measure, specifically retinal nerve fiber layer (RNFL) thickness as measured by OCT. Standing balance was assessed using an adapted version of the Sensory Organization Test (SOT) that probes the ability to integrate visual, somatosensory and vestibular information for balance control (Nashner, 1997). The six SOT postural conditions were used, each lasting 3 min. Underfoot center of pressure was used to compute sway speed. Statistical analyses consisted of mixed linear models performed within each postural condition, with glaucoma severity as a fixed effect and subject as the random effect. The dependent measure was sway speed. Statistical significance was set at 0.05. Results : A worse visual field deficit, as reflected by MD, in the better eye was associated with increased sway speed in the first four SOT conditions (p<0.05), i.e. conditions involving altered or absent visual OR somatosensory information. This effect was not found in conditions when the postural control system relies solely on the vestibular system to maintain balance (SOT Conditions 5-6, p>0.2). Visual field deficits in the worse eye and structural damage in either eye, as reflected by RNFL thickness, were not associated with sway speed under any of the postural conditions. 
Conclusions : Balance is impacted by glaucoma under conditions where sensory integration is challenged. Interestingly, visual field severity and sway speed were associated even during the eyes closed condition. This may suggest a central sensory integration mechanism. Further research is warranted. Reference. Nashner, L. M. (1997). Computerized Dynamic Posturography. In G. P. Jacobson, et al. (Eds.), Handbook of balance function testing. San Diego, CA: Singular Publishing Group, Inc
EMBASE:632698500
ISSN: 1552-5783
CID: 4586032
Measurement reproducibility using vivid vision perimetry: A virtual reality-based mobile platform [Meeting Abstract]
Greenfield, J A; Deiner, M; Nguyen, A; Wollstein, G; Damato, B; Backus, B T; Wu, M; Schuman, J S; Ou, Y
Purpose : Vivid Vision Perimetry (VVP) is a novel method for performing in-office and home-based visual field assessment using a virtual reality platform and oculokinetic perimetry. The purpose of this study was to examine the test-retest reproducibility of the VVP platform. Methods : Subjects with open-angle glaucoma and glaucoma suspects were prospectively enrolled and underwent visual field analysis across 54 test locations in a 24-2 pattern using the VVP device (Vivid Vision, San Francisco, CA). Each subject was examined in 2 sessions, and the mean sensitivity (dB) was the primary outcome measure obtained for each eye in both sessions. The repeatability of mean sensitivity was assessed through analysis of bias from the differences between the two VVP sessions. A Bland-Altman plot using a mixed effects model (adjusting for average sensitivity and eye correlation) was created to illustrate the level of agreement between repeated measurements. Results : Fourteen eyes of 7 open-angle glaucoma patients and 10 eyes of 5 glaucoma suspects were enrolled (mean age 62.3 ± 9.3 years, 33% female). Based on the data from 24 eyes, the average difference of VVP mean sensitivity between the two sessions was found to be 0.48 dB. Three eyes (12.5%) fell outside the upper and lower limits of agreement (95% CI: -1.15 to 2.11). The level of agreement between repeated VVP measurements showed a general trend of increasing precision as mean sensitivity values increased (Figure 1). Conclusions : The VVP platform provides reproducible visual field sensitivity measurements for glaucoma patients and glaucoma suspects and represents a novel approach for glaucoma monitoring. These data suggest that VVP measurement repeatability is consistent with standard automated perimetry
EMBASE:632694334
ISSN: 1552-5783
CID: 4586202
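The test-retest analysis in the abstract above rests on the Bland-Altman bias and limits of agreement between the two sessions. A minimal numpy sketch of that calculation (the simple unadjusted form, without the mixed-effects adjustment the study used, and on hypothetical sensitivity values):

```python
import numpy as np

def bland_altman(session1, session2):
    """Bias and 95% limits of agreement between repeated measurements."""
    diff = np.asarray(session1) - np.asarray(session2)
    bias = float(diff.mean())
    loa = 1.96 * float(diff.std(ddof=1))   # half-width of the agreement band
    return bias, bias - loa, bias + loa

# Hypothetical mean-sensitivity (dB) readings from two VVP sessions.
s1 = np.array([28.1, 26.4, 24.9, 27.2, 25.5, 29.0])
s2 = np.array([27.6, 26.9, 24.1, 26.8, 25.9, 28.4])

bias, lo, hi = bland_altman(s1, s2)
print(round(bias, 2))                      # session-to-session bias in dB
print(round(lo, 2), round(hi, 2))          # 95% limits of agreement
```

In the study's Bland-Altman plot, eyes whose between-session difference falls outside (lo, hi) are the ones flagged, which is how the reported 3 of 24 eyes (12.5%) were identified.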