Searched for: in-biosketch:yes person:kjg5

Total Results: 35


Machine learning in breast MRI

Reig, Beatriu; Heacock, Laura; Geras, Krzysztof J; Moy, Linda
Machine-learning techniques have led to remarkable advances in data extraction and analysis of medical imaging. Applications of machine learning to breast MRI continue to expand rapidly as increasingly accurate 3D breast and lesion segmentation allows the combination of radiologist-level interpretation (eg, BI-RADS lexicon), data from advanced multiparametric imaging techniques, and patient-level data such as genetic risk markers. Advances in breast MRI feature extraction have led to rapid dataset analysis, which offers promise in large pooled multi-institutional data analysis. The objective of this review is to provide an overview of machine-learning and deep-learning techniques for breast MRI, including supervised and unsupervised methods, anatomic breast segmentation, and lesion segmentation. Finally, it explores the role of machine learning, current limitations, and future applications to texture analysis, radiomics, and radiogenomics. Level of Evidence: 3. Technical Efficacy Stage: 2. J. Magn. Reson. Imaging 2019.
PMID: 31276247
ISSN: 1522-2586
CID: 3968372

Prediction of Total Knee Replacement and Diagnosis of Osteoarthritis by Using Deep Learning on Knee Radiographs: Data from the Osteoarthritis Initiative

Leung, Kevin; Zhang, Bofei; Tan, Jimin; Shen, Yiqiu; Geras, Krzysztof J; Babb, James S; Cho, Kyunghyun; Chang, Gregory; Deniz, Cem M
Background: The methods for assessing knee osteoarthritis (OA) do not provide enough comprehensive information to make robust and accurate outcome predictions. Purpose: To develop a deep learning (DL) prediction model for risk of OA progression by using knee radiographs in patients who underwent total knee replacement (TKR) and matched control patients who did not undergo TKR. Materials and Methods: In this retrospective analysis that used data from the OA Initiative, a DL model on knee radiographs was developed to predict both the likelihood of a patient undergoing TKR within 9 years and Kellgren-Lawrence (KL) grade. Study participants included a case-control matched subcohort between 45 and 79 years of age. Patients were matched to control patients according to age, sex, ethnicity, and body mass index. The proposed model used a transfer learning approach based on the ResNet34 architecture with sevenfold nested cross-validation. Receiver operating characteristic curve analysis and conditional logistic regression assessed model performance for predicting probability and risk of TKR compared with clinical observations and two binary outcome prediction models on the basis of radiographic readings: KL grade and OA Research Society International (OARSI) grade. Results: Evaluated were 728 participants including 324 patients (mean age, 64 years ± 8 [standard deviation]; 222 women) and 324 control patients (mean age, 64 years ± 8; 222 women). The prediction model based on DL achieved an area under the receiver operating characteristic curve (AUC) of 0.87 (95% confidence interval [CI]: 0.85, 0.90), outperforming a baseline prediction model by using KL grade with an AUC of 0.74 (95% CI: 0.71, 0.77; P < .001). The risk for TKR increased with probability that a person will undergo TKR from the DL model (odds ratio [OR], 7.7; 95% CI: 2.3, 25; P < .001), KL grade (OR, 1.92; 95% CI: 1.17, 3.13; P = .009), and OARSI grade (OR, 1.20; 95% CI: 0.41, 3.50; P = .73).
Conclusion: The proposed deep learning model better predicted risk of total knee replacement in osteoarthritis than did binary outcome models by using standard grading systems. © RSNA, 2020 Online supplemental material is available for this article. See also the editorial by Richardson in this issue.
PMID: 32573386
ISSN: 1527-1315
CID: 4492992
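
The abstract above reports performance as the area under the receiver operating characteristic curve (AUC). As an illustrative aside (not the authors' code), AUC equals the probability that a randomly chosen case receives a higher model score than a randomly chosen control, which gives a minimal pure-Python sketch:

```python
def auc(case_scores, control_scores):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the fraction of (case, control) score pairs in which the case
    scores higher, with ties counted as half a win."""
    wins = 0.0
    for c in case_scores:
        for k in control_scores:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

# Toy scores (hypothetical, not data from the study):
tkr = [0.9, 0.8, 0.4]    # patients who underwent TKR
ctrl = [0.3, 0.2, 0.5]   # matched control patients
print(round(auc(tkr, ctrl), 3))  # 8 of 9 pairs correctly ordered -> 0.889
```

The same pairwise-comparison definition underlies the AUC values of 0.87 and 0.74 quoted in the abstract, though the study also reports confidence intervals, which this sketch omits.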

Deep Neural Networks Improve Radiologists' Performance in Breast Cancer Screening

Wu, Nan; Phang, Jason; Park, Jungkyu; Shen, Yiqiu; Huang, Zhe; Zorin, Masha; Jastrzebski, Stanislaw; Fevry, Thibault; Katsnelson, Joe; Kim, Eric; Wolfson, Stacey; Parikh, Ujas; Gaddam, Sushma; Lin, Leng Leng Young; Ho, Kara; Weinstein, Joshua D; Reig, Beatriu; Gao, Yiming; Toth, Hildegard; Pysarenko, Kristine; Lewin, Alana; Lee, Jiyon; Airola, Krystal; Mema, Eralda; Chung, Stephanie; Hwang, Esther; Samreen, Naziya; Kim, S Gene; Heacock, Laura; Moy, Linda; Cho, Kyunghyun; Geras, Krzysztof J
We present a deep convolutional neural network for breast cancer screening exam classification, trained and evaluated on over 200,000 exams (over 1,000,000 images). Our network achieves an AUC of 0.895 in predicting the presence of cancer in the breast, when tested on the screening population. We attribute the high accuracy to a few technical advances. (i) Our network's novel two-stage architecture and training procedure, which allows us to use a high-capacity patch-level network to learn from pixel-level labels alongside a network learning from macroscopic breast-level labels. (ii) A custom ResNet-based network used as a building block of our model, whose balance of depth and width is optimized for high-resolution medical images. (iii) Pretraining the network on screening BI-RADS classification, a related task with more noisy labels. (iv) Combining multiple input views in an optimal way among a number of possible choices. To validate our model, we conducted a reader study with 14 readers, each reading 720 screening mammogram exams, and show that our model is as accurate as experienced radiologists when presented with the same data. We also show that a hybrid model, averaging the probability of malignancy predicted by a radiologist with a prediction of our neural network, is more accurate than either of the two separately. To further understand our results, we conduct a thorough analysis of our network's performance on different subpopulations of the screening population, the model's design, training procedure, errors, and properties of its internal representations. Our best models are publicly available at https://github.com/nyukat/breast_cancer_classifier.
PMID: 31603772
ISSN: 1558-254x
CID: 4130202
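
The hybrid model in the abstract above combines each radiologist's estimated probability of malignancy with the network's prediction by averaging. A minimal sketch of that combination rule (the probabilities and the `weight` parameter below are illustrative assumptions, not values from the paper):

```python
def hybrid_prediction(radiologist_prob, model_prob, weight=0.5):
    """Blend a radiologist's probability-of-malignancy estimate with a
    neural network's prediction by weighted averaging; weight=0.5 gives
    the plain average described in the abstract."""
    return weight * radiologist_prob + (1.0 - weight) * model_prob

# Illustrative exam-level probabilities for three exams:
radiologist = [0.10, 0.70, 0.30]
network = [0.20, 0.90, 0.10]
hybrid = [hybrid_prediction(r, m) for r, m in zip(radiologist, network)]
print(hybrid)  # elementwise averages, roughly [0.15, 0.80, 0.20]
```

The point of the design is that radiologist and model errors are partly uncorrelated, so the averaged score can rank exams better than either input alone.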

Evaluation of Combined Artificial Intelligence and Radiologist Assessment to Interpret Screening Mammograms

Schaffter, Thomas; Buist, Diana S M; Lee, Christoph I; Nikulin, Yaroslav; Ribli, Dezso; Guan, Yuanfang; Lotter, William; Jie, Zequn; Du, Hao; Wang, Sijia; Feng, Jiashi; Feng, Mengling; Kim, Hyo-Eun; Albiol, Francisco; Albiol, Alberto; Morrell, Stephen; Wojna, Zbigniew; Ahsen, Mehmet Eren; Asif, Umar; Jimeno Yepes, Antonio; Yohanandan, Shivanthan; Rabinovici-Cohen, Simona; Yi, Darvin; Hoff, Bruce; Yu, Thomas; Chaibub Neto, Elias; Rubin, Daniel L; Lindholm, Peter; Margolies, Laurie R; McBride, Russell Bailey; Rothstein, Joseph H; Sieh, Weiva; Ben-Ari, Rami; Harrer, Stefan; Trister, Andrew; Friend, Stephen; Norman, Thea; Sahiner, Berkman; Strand, Fredrik; Guinney, Justin; Stolovitzky, Gustavo; Mackey, Lester; Cahoon, Joyce; Shen, Li; Sohn, Jae Ho; Trivedi, Hari; Shen, Yiqiu; Buturovic, Ljubomir; Pereira, Jose Costa; Cardoso, Jaime S; Castro, Eduardo; Kalleberg, Karl Trygve; Pelka, Obioma; Nedjar, Imane; Geras, Krzysztof J; Nensa, Felix; Goan, Ethan; Koitka, Sven; Caballero, Luis; Cox, David D; Krishnaswamy, Pavitra; Pandey, Gaurav; Friedrich, Christoph M; Perrin, Dimitri; Fookes, Clinton; Shi, Bibo; Cardoso Negrie, Gerard; Kawczynski, Michael; Cho, Kyunghyun; Khoo, Can Son; Lo, Joseph Y; Sorensen, A Gregory; Jung, Hwejin
Importance: Mammography screening currently relies on subjective human interpretation. Artificial intelligence (AI) advances could be used to increase mammography screening accuracy by reducing missed cancers and false positives. Objective: To evaluate whether AI can overcome human mammography interpretation limitations with a rigorous, unbiased evaluation of machine learning algorithms. Design, Setting, and Participants: In this diagnostic accuracy study conducted between September 2016 and November 2017, an international, crowdsourced challenge was hosted to foster AI algorithm development focused on interpreting screening mammography. More than 1100 participants comprising 126 teams from 44 countries participated. Analysis began November 18, 2016. Main Outcomes and Measures: Algorithms used images alone (challenge 1) or combined images, previous examinations (if available), and clinical and demographic risk factor data (challenge 2) and output a score that translated to cancer yes/no within 12 months. Algorithm accuracy for breast cancer detection was evaluated using area under the curve and algorithm specificity compared with radiologists' specificity with radiologists' sensitivity set at 85.9% (United States) and 83.9% (Sweden). An ensemble method aggregating top-performing AI algorithms and radiologists' recall assessment was developed and evaluated. Results: Overall, 144 231 screening mammograms from 85 580 US women (952 cancer positive ≤12 months from screening) were used for algorithm training and validation. A second independent validation cohort included 166 578 examinations from 68 008 Swedish women (780 cancer positive).
The top-performing algorithm achieved an area under the curve of 0.858 (United States) and 0.903 (Sweden) and 66.2% (United States) and 81.2% (Sweden) specificity at the radiologists' sensitivity, lower than community-practice radiologists' specificity of 90.5% (United States) and 98.5% (Sweden). Combining top-performing algorithms and US radiologist assessments resulted in a higher area under the curve of 0.942 and achieved a significantly improved specificity (92.0%) at the same sensitivity. Conclusions and Relevance: While no single AI algorithm outperformed radiologists, an ensemble of AI algorithms combined with radiologist assessment in a single-reader screening environment improved overall accuracy. This study underscores the potential of using machine learning methods for enhancing mammography screening interpretation.
PMID: 32119094
ISSN: 2574-3805
CID: 4340492
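
The challenge above compared algorithm specificity at a fixed radiologist sensitivity (85.9% in the US cohort). One simple way to compute such an operating point, sketched here on made-up scores rather than the challenge's evaluation code, is to take the highest score threshold that still attains the target sensitivity and report specificity there:

```python
def specificity_at_sensitivity(pos_scores, neg_scores, target_sens):
    """Scan candidate thresholds (the positive scores, highest first),
    stop at the first one whose sensitivity reaches target_sens, and
    return the specificity achieved at that threshold."""
    for t in sorted(pos_scores, reverse=True):
        sensitivity = sum(s >= t for s in pos_scores) / len(pos_scores)
        if sensitivity >= target_sens:
            return sum(s < t for s in neg_scores) / len(neg_scores)
    return 0.0  # target sensitivity unattainable with these scores

# Illustrative scores for cancer-positive and cancer-negative exams:
pos = [0.9, 0.8, 0.6, 0.4]
neg = [0.7, 0.5, 0.3, 0.1]
print(specificity_at_sensitivity(pos, neg, 0.75))  # 0.75
```

Scanning thresholds from high to low matters: sensitivity only grows as the threshold drops, so the first qualifying threshold is the one that sacrifices the fewest negatives.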

Artificial Intelligence Explained for Nonexperts

Razavian, Narges; Knoll, Florian; Geras, Krzysztof J
Artificial intelligence (AI) has made stunning progress in the last decade, enabled largely by advances in training deep neural networks with large data sets. Many of these solutions, initially developed for natural images, speech, or text, are now becoming successful in medical imaging. In this article we briefly summarize in an accessible way the current state of the field of AI. Furthermore, we highlight the most promising approaches and describe the current challenges that will need to be solved to enable broad deployment of AI in clinical practice.
PMID: 31991447
ISSN: 1098-898x
CID: 4294102

fastMRI: A Publicly Available Raw k-Space and DICOM Dataset of Knee Images for Accelerated MR Image Reconstruction Using Machine Learning

Knoll, Florian; Zbontar, Jure; Sriram, Anuroop; Muckley, Matthew J; Bruno, Mary; Defazio, Aaron; Parente, Marc; Geras, Krzysztof J; Katsnelson, Joe; Chandarana, Hersh; Zhang, Zizhao; Drozdzal, Michal; Romero, Adriana; Rabbat, Michael; Vincent, Pascal; Pinkerton, James; Wang, Duo; Yakubova, Nafissa; Owens, Erich; Zitnick, C Lawrence; Recht, Michael P; Sodickson, Daniel K; Lui, Yvonne W
A publicly available dataset containing raw k-space data as well as Digital Imaging and Communications in Medicine (DICOM) images of the knee, for accelerated MR image reconstruction using machine learning, is presented.
PMCID:6996599
PMID: 32076662
ISSN: 2638-6100
CID: 4312462
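
In the fastMRI setting, images are recovered from raw k-space by an inverse 2D Fourier transform, and accelerated imaging amounts to reconstructing from undersampled k-space. A minimal NumPy sketch of the fully sampled case (an illustrative round trip, not the fastMRI reference code):

```python
import numpy as np

def kspace_to_image(kspace):
    """Reconstruct the image magnitude from fully sampled 2D k-space
    with a centered inverse FFT (ifftshift/fftshift keep the zero
    frequency in the middle of the array)."""
    shifted = np.fft.ifftshift(kspace)
    img = np.fft.ifft2(shifted)
    return np.abs(np.fft.fftshift(img))

# Round-trip check on a synthetic "image": forward FFT to k-space,
# then reconstruct and compare.
rng = np.random.default_rng(0)
image = rng.random((64, 64))
kspace = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(image)))
recon = kspace_to_image(kspace)
print(np.allclose(recon, image))  # True
```

Machine-learning reconstruction methods trained on this dataset replace the fully sampled inverse transform with a model that fills in the information lost when only a fraction of k-space lines are acquired.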

Classifier-agnostic saliency map extraction

Zolna, Konrad; Geras, Krzysztof J.; Cho, Kyunghyun
ISI:000540215400004
ISSN: 1077-3142
CID: 4525342

Globally-Aware Multiple Instance Classifier for Breast Cancer Screening

Shen, Yiqiu; Wu, Nan; Phang, Jason; Park, Jungkyu; Kim, Gene; Moy, Linda; Cho, Kyunghyun; Geras, Krzysztof J
Deep learning models designed for visual classification tasks on natural images have become prevalent in medical image analysis. However, medical images differ from typical natural images in many ways, such as significantly higher resolutions and smaller regions of interest. Moreover, both the global structure and local details play important roles in medical image analysis tasks. To address these unique properties of medical images, we propose a neural network that is able to classify breast cancer lesions utilizing information from both a global saliency map and multiple local patches. The proposed model outperforms the ResNet-based baseline and achieves radiologist-level performance in the interpretation of screening mammography. Although our model is trained only with image-level labels, it is able to generate pixel-level saliency maps that provide localization of possible malignant findings.
PMCID:7060084
PMID: 32149282
ISSN: n/a
CID: 4349612
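
The model above aggregates a coarse global saliency map with a few high-resolution local patches. As an illustrative sketch (not the authors' implementation, and with a toy map rather than a real mammogram), the patch-selection step can be viewed as greedily cropping around the strongest saliency responses:

```python
import numpy as np

def top_patch_centers(saliency, k=2, suppress=1):
    """Greedily pick k peak locations from a 2D saliency map, blanking a
    small neighborhood around each pick (a crude non-maximum suppression)
    so successive patches come from distinct regions."""
    s = saliency.astype(float).copy()
    centers = []
    for _ in range(k):
        r, c = np.unravel_index(np.argmax(s), s.shape)
        centers.append((int(r), int(c)))
        s[max(0, r - suppress):r + suppress + 1,
          max(0, c - suppress):c + suppress + 1] = -np.inf
    return centers

# Toy saliency map with two hot spots:
sal = np.zeros((6, 6))
sal[1, 1] = 0.9
sal[4, 4] = 0.7
print(top_patch_centers(sal, k=2))  # [(1, 1), (4, 4)]
```

Because only image-level labels are available at training time, the saliency map that drives this selection is learned implicitly, which is what lets the model localize suspicious findings without pixel-level annotation.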

Artificial Intelligence for Mammography and Digital Breast Tomosynthesis: Current Concepts and Future Perspectives

Geras, Krzysztof J; Mann, Ritse M; Moy, Linda
Although computer-aided diagnosis (CAD) is widely used in mammography, conventional CAD programs that use prompts to indicate potential cancers on the mammograms have not led to an improvement in diagnostic accuracy. Because of the advances in machine learning, especially with use of deep (multilayered) convolutional neural networks, artificial intelligence has undergone a transformation that has improved the quality of the predictions of the models. Recently, such deep learning algorithms have been applied to mammography and digital breast tomosynthesis (DBT). In this review, the authors explain how deep learning works in the context of mammography and DBT and define the important technical challenges. Subsequently, they discuss the current status and future perspectives of artificial intelligence-based clinical applications for mammography, DBT, and radiomics. Available algorithms are advanced and approach the performance of radiologists, especially for cancer detection and risk prediction at mammography. However, clinical validation is largely lacking, and it is not clear how the power of deep learning should be used to optimize practice. Further development of deep learning models is necessary for DBT, and this requires collection of larger databases. It is expected that deep learning will eventually have an important role in DBT, including the generation of synthetic images.
PMID: 31549948
ISSN: 1527-1315
CID: 4105432

New Frontiers: An Update on Computer-Aided Diagnosis for Breast Imaging in the Age of Artificial Intelligence

Gao, Yiming; Geras, Krzysztof J; Lewin, Alana A; Moy, Linda
OBJECTIVE: The purpose of this article is to compare traditional versus machine learning-based computer-aided detection (CAD) platforms in breast imaging with a focus on mammography, to underscore limitations of traditional CAD, and to highlight potential solutions in new CAD systems under development for the future. CONCLUSION: CAD development for breast imaging is undergoing a paradigm shift based on vast improvement of computing power and rapid emergence of advanced deep learning algorithms, heralding new systems that may hold real potential to improve clinical care.
PMID: 30667309
ISSN: 1546-3141
CID: 3609912