Classifier-Agnostic Saliency Map Extraction
Chapter by: Zolna, Konrad; Geras, Krzysztof J.; Cho, Kyunghyun
in: 33rd AAAI Conference on Artificial Intelligence, AAAI 2019, 31st Innovative Applications of Artificial Intelligence Conference, IAAI 2019 and the 9th AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019 by
[S.l.] : AAAI Press, 2019
pp. 10087-10088
ISBN: 9781577358091
CID: 4613102
fastMRI: An Open Dataset and Benchmarks for Accelerated MRI [PrePrint]
Zbontar, Jure; Knoll, Florian; Sriram, Anuroop; Murrell, Tullie; Huang, Zhengnan; Muckley, Matthew J; Defazio, Aaron; Stern, Ruben; Johnson, Patricia; Bruno, Mary; Parente, Marc; Geras, Krzysztof J; Katsnelson, Joe; Chandarana, Hersh; Zhang, Zizhao; Drozdzal, Michal; Romero, Adriana; Rabbat, Michael; Vincent, Pascal; Yakubova, Nafissa; Pinkerton, James; Wang, Duo; Owens, Erich; Zitnick, C Lawrence; Recht, Michael P; Sodickson, Daniel K; Lui, Yvonne W
Accelerating Magnetic Resonance Imaging (MRI) by taking fewer measurements has the potential to reduce medical costs, minimize stress to patients and make MRI possible in applications where it is currently prohibitively slow or expensive. We introduce the fastMRI dataset, a large-scale collection of both raw MR measurements and clinical MR images that can be used for training and evaluation of machine-learning approaches to MR image reconstruction. By introducing standardized evaluation criteria and a freely accessible dataset, our goal is to help the community make rapid advances in the state of the art for MR image reconstruction. We also provide a self-contained introduction to MRI for machine learning researchers with no medical imaging background.
ORIGINAL:0014686
ISSN: 2331-8422
CID: 4534312
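The fastMRI release distributes each scan as an HDF5 volume of raw k-space measurements, and the simplest baseline reconstruction is a zero-filled inverse Fourier transform. The sketch below illustrates that baseline only; the file name, the "kspace" key and the (slices, rows, cols) layout are assumptions based on the public single-coil release, not details taken from this record.

# Minimal zero-filled reconstruction sketch for a fastMRI-style HDF5 volume.
# Assumptions (not taken from the record above): the file stores a complex
# k-space array under the key "kspace" with shape (slices, rows, cols).
import h5py
import numpy as np

def zero_filled_reconstruction(kspace_slice):
    """Inverse 2D FFT of a k-space slice -> magnitude image."""
    image = np.fft.fftshift(
        np.fft.ifft2(np.fft.ifftshift(kspace_slice), norm="ortho")
    )
    return np.abs(image)

if __name__ == "__main__":
    # The path is a placeholder; point it at a downloaded fastMRI volume.
    with h5py.File("file1000001.h5", "r") as f:
        kspace = f["kspace"][()]  # (slices, rows, cols), complex-valued
    recon = np.stack([zero_filled_reconstruction(s) for s in kspace])
    print(recon.shape, recon.dtype)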
Large-scale classification of breast MRI exams using deep convolutional networks [Meeting Abstract]
Gong, Shizhan; Muckley, Matthew; Wu, Nan; Makino, Taro; Kim, S. Gene; Heacock, Laura; Moy, Linda; Knoll, Florian; Geras, Krzysztof J
ORIGINAL:0014731
ISSN: 1049-5258
CID: 4668952
Breast Density Classification with Deep Convolutional Neural Networks
Chapter by: Wu, Nan; Geras, Krzysztof J.; Shen, Yiqiu; Su, Jingyi; Kim, Gene; Kim, Eric; Wolfson, Stacey; Moy, Linda; Cho, Kyunghyun
in: 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) by
New York : IEEE, 2018
pp. 6682-6686
ISBN: 978-1-5386-4658-8
CID: 3496792
High-Resolution Breast Cancer Screening with Multi-View Deep Convolutional Neural Networks [PrePrint]
Geras, Krzysztof J; Wolfson, Stacey; Kim, S Gene; Moy, Linda; Cho, Kyunghyun
Recent advances in deep learning for object recognition in natural images have prompted a surge of interest in applying a similar set of techniques to medical images. Most of the initial attempts largely focused on replacing the input to such a deep convolutional neural network from a natural image to a medical image. This, however, does not take into consideration the fundamental differences between these two types of data. More specifically, detection or recognition of an anomaly in medical images depends significantly on fine details, unlike object recognition in natural images, where coarser, more global structures matter more. This difference makes it inadequate to use the existing deep convolutional neural network architectures, which were developed for natural images, because they rely on heavily downsampling an image to a much lower resolution to reduce the memory requirements. This hides details necessary to make accurate predictions for medical images. Furthermore, a single exam in medical imaging often comes with a set of different views, which must be seamlessly fused in order to reach a correct conclusion. In our work, we propose to use a multi-view deep convolutional neural network that handles a set of more than one high-resolution medical image. We evaluate this network on large-scale mammography-based breast cancer screening (BI-RADS prediction) using 103 thousand images. We focus on investigating the impact of training set sizes and image sizes on the prediction accuracy. Our results highlight that performance clearly increases with the size of the training set, and that the best performance can only be achieved using the images at their original resolution. This suggests that the future direction of medical imaging research using deep neural networks is to utilize as much data as possible with the least amount of potentially harmful preprocessing.
ORIGINAL:0012536
ISSN: 2331-8422
CID: 3019022
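The abstract above describes a multi-view convolutional network that processes several high-resolution views of a screening exam and fuses them before prediction. Below is a minimal PyTorch sketch of that general idea, assuming four single-channel views, small illustrative convolutional columns and concatenation-based fusion; the layer sizes, view count and class count are placeholders, not the authors' actual architecture.

# Minimal multi-view CNN sketch: one convolutional column per view, features
# pooled and concatenated, then a shared classifier. Sizes are illustrative.
import torch
import torch.nn as nn

class ViewColumn(nn.Module):
    """Small convolutional column applied to a single high-resolution view."""
    def __init__(self, out_channels=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, out_channels, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),  # global pooling keeps memory modest at high resolution
        )

    def forward(self, x):
        return self.features(x).flatten(1)  # (batch, out_channels)

class MultiViewNet(nn.Module):
    """Fuses per-view features by concatenation before classification."""
    def __init__(self, num_views=4, num_classes=4):
        super().__init__()
        self.columns = nn.ModuleList(ViewColumn() for _ in range(num_views))
        self.classifier = nn.Linear(32 * num_views, num_classes)

    def forward(self, views):
        fused = torch.cat([col(v) for col, v in zip(self.columns, views)], dim=1)
        return self.classifier(fused)

if __name__ == "__main__":
    # Four single-channel views at a downscaled, illustrative resolution.
    views = [torch.randn(2, 1, 256, 256) for _ in range(4)]
    logits = MultiViewNet()(views)
    print(logits.shape)  # torch.Size([2, 4])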