Searched for: person:kjg5 in-biosketch:yes
Total Results: 34

Artificial intelligence system reduces false-positive findings in the interpretation of breast ultrasound exams

Shen, Yiqiu; Shamout, Farah E; Oliver, Jamie R; Witowski, Jan; Kannan, Kawshik; Park, Jungkyu; Wu, Nan; Huddleston, Connor; Wolfson, Stacey; Millet, Alexandra; Ehrenpreis, Robin; Awal, Divya; Tyma, Cathy; Samreen, Naziya; Gao, Yiming; Chhor, Chloe; Gandhi, Stacey; Lee, Cindy; Kumari-Subaiya, Sheila; Leonard, Cindy; Mohammed, Reyhan; Moczulski, Christopher; Altabet, Jaime; Babb, James; Lewin, Alana; Reig, Beatriu; Moy, Linda; Heacock, Laura; Geras, Krzysztof J
Though consistently shown to detect mammographically occult cancers, breast ultrasound has been noted to have high false-positive rates. In this work, we present an AI system that achieves radiologist-level accuracy in identifying breast cancer in ultrasound images. Developed on 288,767 exams, consisting of 5,442,907 B-mode and Color Doppler images, the AI achieves an area under the receiver operating characteristic curve (AUROC) of 0.976 on a test set consisting of 44,755 exams. In a retrospective reader study, the AI achieves a higher AUROC than the average of ten board-certified breast radiologists (AUROC: 0.962 AI, 0.924 ± 0.02 radiologists). With the help of the AI, radiologists decrease their false positive rates by 37.3% and reduce requested biopsies by 27.8%, while maintaining the same level of sensitivity. This highlights the potential of AI in improving the accuracy, consistency, and efficiency of breast ultrasound diagnosis.
PMCID:8463596
PMID: 34561440
ISSN: 2041-1723
CID: 5039442
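
The evaluation this abstract describes reduces to two measurements: exam-level AUROC, and the false-positive rate at an operating point matched to radiologist sensitivity. Below is a minimal sketch of that protocol on synthetic stand-in data; the labels, scores, and the 90% target sensitivity are illustrative assumptions, not the study's values.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
# Synthetic stand-ins for exam-level labels and predicted malignancy probabilities.
y_true = rng.integers(0, 2, size=1000)
p_ai = np.clip(y_true * 0.6 + rng.normal(0.3, 0.2, size=1000), 0, 1)

print("AUROC:", roc_auc_score(y_true, p_ai))

# Pick the operating point that matches a target sensitivity (e.g. the average
# radiologist's), then read off the false-positive rate there.
fpr, tpr, _ = roc_curve(y_true, p_ai)
target_sensitivity = 0.90
idx = np.searchsorted(tpr, target_sensitivity)
print(f"FPR at {target_sensitivity:.0%} sensitivity:", fpr[idx])
```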

Lessons from the first DBTex Challenge

Park, Jungkyu; Shoshan, Yoel; Marti, Robert; Gómez del Campo, Pablo; Ratner, Vadim; Khapun, Daniel; Zlotnick, Aviad; Barkan, Ella; Gilboa-Solomon, Flora; Chłędowski, Jakub; Witowski, Jan; Millet, Alexandra; Kim, Eric; Lewin, Alana; Pysarenko, Kristine; Chen, Sardius; Goldberg, Julia; Patel, Shalin; Plaunova, Anastasia; Wegener, Melanie; Wolfson, Stacey; Lee, Jiyon; Hava, Sana; Murthy, Sindhoora; Du, Linda; Gaddam, Sushma; Parikh, Ujas; Heacock, Laura; Moy, Linda; Reig, Beatriu; Rosen-Zvi, Michal; Geras, Krzysztof J
SCOPUS:85111105102
ISSN: 2522-5839
CID: 5000532

Weakly-supervised High-resolution Segmentation of Mammography Images for Breast Cancer Diagnosis

Liu, Kangning; Shen, Yiqiu; Wu, Nan; Chłędowski, Jakub; Fernandez-Granda, Carlos; Geras, Krzysztof J
In the last few years, deep learning classifiers have shown promising results in image-based medical diagnosis. However, interpreting the outputs of these models remains a challenge. In cancer diagnosis, interpretability can be achieved by localizing the region of the input image responsible for the output, i.e. the location of a lesion. Alternatively, segmentation or detection models can be trained with pixel-wise annotations indicating the locations of malignant lesions. Unfortunately, acquiring such labels is labor-intensive and requires medical expertise. To overcome this difficulty, weakly-supervised localization can be utilized. These methods allow neural network classifiers to output saliency maps highlighting the regions of the input most relevant to the classification task (e.g. malignant lesions in mammograms) using only image-level labels (e.g. whether the patient has cancer or not) during training. When applied to high-resolution images, existing methods produce low-resolution saliency maps. This is problematic in applications in which suspicious lesions are small in relation to the image size. In this work, we introduce a novel neural network architecture to perform weakly-supervised segmentation of high-resolution images. The proposed model selects regions of interest via coarse-level localization, and then performs fine-grained segmentation of those regions. We apply this model to breast cancer diagnosis with screening mammography, and validate it on a large clinically-realistic dataset. Measured by Dice similarity score, our approach outperforms existing methods by a large margin in terms of localization performance of benign and malignant lesions, relatively improving the performance by 39.6% and 20.0%, respectively. Code and the weights of some of the models are available at https://github.com/nyukat/GLAM.
PMCID:8791642
PMID: 35088055
ISSN: 2640-3498
CID: 5154792
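
The coarse-to-fine design this abstract describes can be sketched in a few lines of PyTorch: a low-capacity global network proposes salient regions, and a higher-capacity local network segments only the selected patches. The module sizes, patch size, and number of regions below are illustrative assumptions; the released implementation at https://github.com/nyukat/GLAM differs in detail.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoarseToFineSegmenter(nn.Module):
    def __init__(self, patch=64, k=4):
        super().__init__()
        self.patch, self.k = patch, k
        # Low-capacity global network: coarse saliency over the whole image.
        self.global_net = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=4, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 1),
        )
        # Higher-capacity local network: fine segmentation of chosen patches.
        self.local_net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):  # x: (1, 1, H, W); batch of one for clarity
        coarse = F.interpolate(self.global_net(x), size=x.shape[-2:])
        # Rank patch-sized cells by mean saliency and keep the top k.
        pooled = F.avg_pool2d(coarse, self.patch, stride=self.patch)
        topk = pooled.flatten(1).topk(self.k, dim=1).indices[0]
        fine = []
        for idx in topk:
            gy, gx = divmod(idx.item(), pooled.shape[-1])
            y0, x0 = gy * self.patch, gx * self.patch
            crop = x[..., y0:y0 + self.patch, x0:x0 + self.patch]
            fine.append((y0, x0, torch.sigmoid(self.local_net(crop))))
        return coarse, fine  # coarse saliency + fine per-patch masks

model = CoarseToFineSegmenter()
coarse_map, patch_masks = model(torch.randn(1, 1, 256, 256))
```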

A convolutional neural network for common coordinate registration of high-resolution histology images

Daly, Aidan C; Geras, Krzysztof J; Bonneau, Richard A
MOTIVATION: Registration of histology images from multiple sources is a pressing problem in large-scale studies of spatial -omics data. Researchers often perform "common coordinate registration," akin to segmentation, in which samples are partitioned based on tissue type to allow for quantitative comparison of similar regions across samples. Accuracy in such registration requires both high image resolution and global awareness, which mark a difficult balancing act for contemporary deep learning architectures. RESULTS: We present a novel convolutional neural network (CNN) architecture that combines (1) a local classification CNN that extracts features from image patches sampled sparsely across the tissue surface, and (2) a global segmentation CNN that operates on these extracted features. This hybrid network can be trained in an end-to-end manner, and we demonstrate its relative merits over competing approaches on a reference histology dataset as well as two published spatial transcriptomics datasets. We believe that this paradigm will greatly enhance our ability to process spatial -omics data, and has general purpose applications for the processing of high-resolution histology images on commercially available GPUs. AVAILABILITY: All code is publicly available at https://github.com/flatironinstitute/st_gridnet. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
PMID: 34128955
ISSN: 1367-4811
CID: 4911582
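
The hybrid network described above pairs two differentiable stages: a local CNN that embeds sparsely sampled tissue patches, and a global CNN that segments the resulting feature grid into tissue classes, so gradients flow end-to-end. The sketch below illustrates that structure under assumed dimensions (patch size, feature width, grid size, class count); it is not the code released at https://github.com/flatironinstitute/st_gridnet.

```python
import torch
import torch.nn as nn

class PatchEncoder(nn.Module):
    """Local CNN: one feature vector per histology patch."""
    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
    def forward(self, patches):          # (N, 3, 64, 64)
        return self.net(patches)         # (N, dim)

class GridSegmenter(nn.Module):
    """Global CNN: labels each grid cell from neighboring patch features."""
    def __init__(self, dim=32, n_classes=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, n_classes, 1),
        )
    def forward(self, grid):             # (1, dim, Gh, Gw)
        return self.net(grid)            # (1, n_classes, Gh, Gw)

encoder, segmenter = PatchEncoder(), GridSegmenter()
Gh = Gw = 8
patches = torch.randn(Gh * Gw, 3, 64, 64)          # one patch per grid cell
feats = encoder(patches).T.reshape(1, 32, Gh, Gw)  # lay features out as a grid
logits = segmenter(feats)                          # per-cell tissue-class scores
```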

An artificial intelligence system for predicting the deterioration of COVID-19 patients in the emergency department

Shamout, Farah E; Shen, Yiqiu; Wu, Nan; Kaku, Aakash; Park, Jungkyu; Makino, Taro; Jastrzębski, Stanisław; Witowski, Jan; Wang, Duo; Zhang, Ben; Dogra, Siddhant; Cao, Meng; Razavian, Narges; Kudlowitz, David; Azour, Lea; Moore, William; Lui, Yvonne W; Aphinyanaphongs, Yindalon; Fernandez-Granda, Carlos; Geras, Krzysztof J
During the coronavirus disease 2019 (COVID-19) pandemic, rapid and accurate triage of patients at the emergency department is critical to inform decision-making. We propose a data-driven approach for automatic prediction of deterioration risk using a deep neural network that learns from chest X-ray images and a gradient boosting model that learns from routine clinical variables. Our AI prognosis system, trained using data from 3661 patients, achieves an area under the receiver operating characteristic curve (AUC) of 0.786 (95% CI: 0.745-0.830) when predicting deterioration within 96 hours. The deep neural network extracts informative areas of chest X-ray images to assist clinicians in interpreting the predictions and performs comparably to two radiologists in a reader study. In order to verify performance in a real clinical setting, we silently deployed a preliminary version of the deep neural network at New York University Langone Health during the first wave of the pandemic, which produced accurate predictions in real-time. In summary, our findings demonstrate the potential of the proposed system for assisting front-line physicians in the triage of COVID-19 patients.
PMID: 33980980
ISSN: 2398-6352
CID: 4867572
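
The system combines two learners: a deep network over chest X-ray images and a gradient-boosting model over routine clinical variables. The sketch below illustrates that two-branch design on synthetic data, stubbing the X-ray branch as a precomputed risk score and averaging the two estimates; the paper's actual networks and fusion rule differ.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
clinical = rng.normal(size=(n, 8))                         # stand-in vitals/labs
y = (clinical[:, 0] + rng.normal(size=n) > 0).astype(int)  # deterioration label
# Stand-in for the deep network's chest X-ray risk score, assumed precomputed.
cxr_score = np.clip(0.5 + 0.3 * (y - 0.5) + rng.normal(0, 0.2, n), 0, 1)

X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
    clinical, y, cxr_score, test_size=0.3, random_state=0)

gbm = GradientBoostingClassifier().fit(X_tr, y_tr)
p_clinical = gbm.predict_proba(X_te)[:, 1]

# Simple late fusion: average the two risk estimates.
p_combined = (p_clinical + s_te) / 2
print("AUC (clinical only):", roc_auc_score(y_te, p_clinical))
print("AUC (combined):     ", roc_auc_score(y_te, p_combined))
```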

COVID-19 Deterioration Prediction via Self-Supervised Representation Learning and Multi-Image Prediction [Preprint]

Sriram, Anuroop; Muckley, Matthew; Sinha, Koustuv; Shamout, Farah; Pineau, Joelle; Geras, Krzysztof J; Azour, Lea; Aphinyanaphongs, Yindalon; Yakubova, Nafissa; Moore, William
The rapid spread of COVID-19 cases in recent months has strained hospital resources, making rapid and accurate triage of patients presenting to emergency departments a necessity. Machine learning techniques using clinical data such as chest X-rays have been used to predict which patients are most at risk of deterioration. We consider the task of predicting two types of patient deterioration based on chest X-rays: adverse event deterioration (i.e., transfer to the intensive care unit, intubation, or mortality) and increased oxygen requirements beyond 6 L per day. Due to the relative scarcity of COVID-19 patient data, existing solutions leverage supervised pretraining on related non-COVID images, but this is limited by the differences between the pretraining data and the target COVID-19 patient data. In this paper, we use self-supervised learning based on the momentum contrast (MoCo) method in the pretraining phase to learn more general image representations to use for downstream tasks. We present three results. The first is deterioration prediction from a single image, where our model achieves an area under the receiver operating characteristic curve (AUC) of 0.742 for predicting an adverse event within 96 hours (compared to 0.703 with supervised pretraining) and an AUC of 0.765 for predicting oxygen requirements greater than 6 L per day at 24 hours (compared to 0.749 with supervised pretraining). We then propose a new transformer-based architecture that can process sequences of multiple images for prediction and show that this model can achieve an improved AUC of 0.786 for predicting an adverse event at 96 hours and an AUC of 0.848 for predicting mortality at 96 hours. A small pilot clinical study suggested that the prediction accuracy of our model is comparable to that of experienced radiologists analyzing the same information.
PMCID:7814828
PMID: 33469559
ISSN: 2331-8422
CID: 4760552
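
The transformer-based multi-image architecture mentioned above can be sketched as a small encoder over per-image embeddings, with a learned summary token pooling the sequence of X-rays into one risk logit. The random tensors below stand in for MoCo-pretrained features, and all dimensions are hypothetical.

```python
import torch
import torch.nn as nn

class MultiImagePredictor(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))  # learned summary token
        self.head = nn.Linear(dim, 1)

    def forward(self, embeddings):       # (B, T, dim): T X-rays per patient
        b = embeddings.shape[0]
        tokens = torch.cat([self.cls.expand(b, -1, -1), embeddings], dim=1)
        out = self.encoder(tokens)
        return self.head(out[:, 0])      # risk logit from the summary token

model = MultiImagePredictor()
xray_embeddings = torch.randn(2, 5, 128)  # stand-in for MoCo-pretrained features
risk_logits = model(xray_embeddings)
```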

An interpretable classifier for high-resolution breast cancer screening images utilizing weakly supervised localization

Shen, Yiqiu; Wu, Nan; Phang, Jason; Park, Jungkyu; Liu, Kangning; Tyagi, Sudarshini; Heacock, Laura; Kim, S Gene; Moy, Linda; Cho, Kyunghyun; Geras, Krzysztof J
Medical images differ from natural images in significantly higher resolutions and smaller regions of interest. Because of these differences, neural network architectures that work well for natural images might not be applicable to medical image analysis. In this work, we propose a novel neural network model to address these unique properties of medical images. This model first uses a low-capacity, yet memory-efficient, network on the whole image to identify the most informative regions. It then applies another higher-capacity network to collect details from chosen regions. Finally, it employs a fusion module that aggregates global and local information to make a prediction. While existing methods often require lesion segmentation during training, our model is trained with only image-level labels and can generate pixel-level saliency maps indicating possible malignant findings. We apply the model to screening mammography interpretation: predicting the presence or absence of benign and malignant lesions. On the NYU Breast Cancer Screening Dataset, our model outperforms (AUC = 0.93) ResNet-34 and Faster R-CNN in classifying breasts with malignant findings. On the CBIS-DDSM dataset, our model achieves performance (AUC = 0.858) on par with state-of-the-art approaches. Compared to ResNet-34, our model is 4.1x faster for inference while using 78.4% less GPU memory. Furthermore, we demonstrate, in a reader study, that our model surpasses radiologist-level AUC by a margin of 0.11.
PMID: 33383334
ISSN: 1361-8423
CID: 4759232
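
A compact sketch of the global/local/fusion flow this abstract describes: a low-capacity global network yields a saliency map over the full image, the most salient locations are cropped and processed by a higher-capacity local network, and a fusion layer aggregates both sources of evidence into one prediction. This illustrates the idea only, not the authors' model; all shapes and sizes are assumptions.

```python
import torch
import torch.nn as nn

class GlobalLocalFusion(nn.Module):
    def __init__(self, k=3, patch=64):
        super().__init__()
        self.k, self.patch = k, patch
        self.global_net = nn.Conv2d(1, 1, 7, stride=8, padding=3)  # low capacity
        self.local_net = nn.Sequential(                            # high capacity
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 8),
        )
        self.fusion = nn.Linear(1 + 8, 1)  # combine global + local evidence

    def forward(self, x):                          # (1, 1, H, W)
        saliency = torch.sigmoid(self.global_net(x))
        g = saliency.flatten(1).max(dim=1).values  # global malignancy evidence
        # Crop the k most salient locations and summarize them locally.
        topk = saliency.flatten(1).topk(self.k, dim=1).indices[0]
        local_feats = []
        for idx in topk:
            gy, gx = divmod(idx.item(), saliency.shape[-1])
            y0 = min(gy * 8, x.shape[-2] - self.patch)
            x0 = min(gx * 8, x.shape[-1] - self.patch)
            crop = x[..., y0:y0 + self.patch, x0:x0 + self.patch]
            local_feats.append(self.local_net(crop))
        local_feat = torch.stack(local_feats).mean(dim=0)       # (1, 8)
        return self.fusion(torch.cat([g.unsqueeze(1), local_feat], dim=1))

logit = GlobalLocalFusion()(torch.randn(1, 1, 512, 512))
```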

An artificial intelligence system for predicting the deterioration of COVID-19 patients in the emergency department [Preprint]

Shamout, Farah E; Shen, Yiqiu; Wu, Nan; Kaku, Aakash; Park, Jungkyu; Makino, Taro; Jastrzębski, Stanisław; Wang, Duo; Zhang, Ben; Dogra, Siddhant; Cao, Meng; Razavian, Narges; Kudlowitz, David; Azour, Lea; Moore, William; Lui, Yvonne W; Aphinyanaphongs, Yindalon; Fernandez-Granda, Carlos; Geras, Krzysztof J
During the COVID-19 pandemic, rapid and accurate triage of patients at the emergency department is critical to inform decision-making. We propose a data-driven approach for automatic prediction of deterioration risk using a deep neural network that learns from chest X-ray images, and a gradient boosting model that learns from routine clinical variables. Our AI prognosis system, trained using data from 3,661 patients, achieves an AUC of 0.786 (95% CI: 0.742-0.827) when predicting deterioration within 96 hours. The deep neural network extracts informative areas of chest X-ray images to assist clinicians in interpreting the predictions, and performs comparably to two radiologists in a reader study. In order to verify performance in a real clinical setting, we silently deployed a preliminary version of the deep neural network at NYU Langone Health during the first wave of the pandemic, which produced accurate predictions in real-time. In summary, our findings demonstrate the potential of the proposed system for assisting front-line physicians in the triage of COVID-19 patients.
PMCID:7418753
PMID: 32793769
ISSN: 2331-8422
CID: 4556742

How to Implement AI in the Clinical Enterprise: Opportunities and Lessons Learned

Lui, Yvonne W; Geras, Krzysztof; Block, K Tobias; Parente, Marc; Hood, Joseph; Recht, Michael P
PMID: 33153543
ISSN: 1558-349X
CID: 4671212

Machine learning in breast MRI

Reig, Beatriu; Heacock, Laura; Geras, Krzysztof J; Moy, Linda
Machine-learning techniques have led to remarkable advances in data extraction and analysis of medical imaging. Applications of machine learning to breast MRI continue to expand rapidly as increasingly accurate 3D breast and lesion segmentation allows the combination of radiologist-level interpretation (eg, BI-RADS lexicon), data from advanced multiparametric imaging techniques, and patient-level data such as genetic risk markers. Advances in breast MRI feature extraction have led to rapid dataset analysis, which offers promise in large pooled multiinstitutional data analysis. The object of this review is to provide an overview of machine-learning and deep-learning techniques for breast MRI, including supervised and unsupervised methods, anatomic breast segmentation, and lesion segmentation. Finally, it explores the role of machine learning, current limitations, and future applications to texture analysis, radiomics, and radiogenomics. Level of Evidence: 3. Technical Efficacy Stage: 2. J. Magn. Reson. Imaging 2019.
PMID: 31276247
ISSN: 1522-2586
CID: 3968372