Searched for: person:ys1001 in-biosketch:yes
Total Results: 17


An artificial intelligence system for predicting the deterioration of COVID-19 patients in the emergency department

Shamout, Farah E; Shen, Yiqiu; Wu, Nan; Kaku, Aakash; Park, Jungkyu; Makino, Taro; Jastrzębski, Stanisław; Witowski, Jan; Wang, Duo; Zhang, Ben; Dogra, Siddhant; Cao, Meng; Razavian, Narges; Kudlowitz, David; Azour, Lea; Moore, William; Lui, Yvonne W; Aphinyanaphongs, Yindalon; Fernandez-Granda, Carlos; Geras, Krzysztof J
During the coronavirus disease 2019 (COVID-19) pandemic, rapid and accurate triage of patients at the emergency department is critical to inform decision-making. We propose a data-driven approach for automatic prediction of deterioration risk using a deep neural network that learns from chest X-ray images and a gradient boosting model that learns from routine clinical variables. Our AI prognosis system, trained using data from 3661 patients, achieves an area under the receiver operating characteristic curve (AUC) of 0.786 (95% CI: 0.745-0.830) when predicting deterioration within 96 hours. The deep neural network extracts informative areas of chest X-ray images to assist clinicians in interpreting the predictions and performs comparably to two radiologists in a reader study. To verify performance in a real clinical setting, we silently deployed a preliminary version of the deep neural network at New York University Langone Health during the first wave of the pandemic, where it produced accurate predictions in real time. In summary, our findings demonstrate the potential of the proposed system for assisting front-line physicians in the triage of COVID-19 patients.
PMID: 33980980
ISSN: 2398-6352
CID: 4867572
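
The abstract above describes fusing a chest X-ray model with a gradient boosting model over clinical variables. As a minimal sketch of that kind of score-level fusion (the simple weighted average, the function names, and the toy data below are assumptions for illustration, not the authors' aggregation rule), the two risk scores can be blended and evaluated by AUC:

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def combined_risk(xray_score, clinical_score, weight=0.5):
    """Blend the image-model and clinical-model outputs into one risk score."""
    return weight * xray_score + (1 - weight) * clinical_score

# Toy example: four patients, label 1 = deteriorated within 96 hours.
labels = [1, 0, 1, 0]
xray = [0.80, 0.30, 0.60, 0.40]
clinical = [0.70, 0.20, 0.75, 0.35]
blended = [combined_risk(x, c) for x, c in zip(xray, clinical)]
print(auc(labels, blended))  # 1.0 on this toy data
```

The rank-sum identity in `auc` is the standard Mann-Whitney formulation; in practice the same quantity would be computed with an established library routine.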

An interpretable classifier for high-resolution breast cancer screening images utilizing weakly supervised localization

Shen, Yiqiu; Wu, Nan; Phang, Jason; Park, Jungkyu; Liu, Kangning; Tyagi, Sudarshini; Heacock, Laura; Kim, S Gene; Moy, Linda; Cho, Kyunghyun; Geras, Krzysztof J
Medical images differ from natural images in significantly higher resolutions and smaller regions of interest. Because of these differences, neural network architectures that work well for natural images might not be applicable to medical image analysis. In this work, we propose a novel neural network model to address these unique properties of medical images. This model first uses a low-capacity, yet memory-efficient, network on the whole image to identify the most informative regions. It then applies another higher-capacity network to collect details from chosen regions. Finally, it employs a fusion module that aggregates global and local information to make a prediction. While existing methods often require lesion segmentation during training, our model is trained with only image-level labels and can generate pixel-level saliency maps indicating possible malignant findings. We apply the model to screening mammography interpretation: predicting the presence or absence of benign and malignant lesions. On the NYU Breast Cancer Screening Dataset, our model outperforms (AUC = 0.93) ResNet-34 and Faster R-CNN in classifying breasts with malignant findings. On the CBIS-DDSM dataset, our model achieves performance (AUC = 0.858) on par with state-of-the-art approaches. Compared to ResNet-34, our model is 4.1x faster for inference while using 78.4% less GPU memory. Furthermore, we demonstrate, in a reader study, that our model surpasses radiologist-level AUC by a margin of 0.11.
PMID: 33383334
ISSN: 1361-8423
CID: 4759232
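
The two-stage design described above first scans the whole image cheaply, then zooms into the most informative regions for a higher-capacity network. A minimal pure-Python sketch of that region-selection step (the saliency values, patch size, and helper names are invented for illustration; the paper's model learns the saliency map end to end):

```python
def top_k_regions(saliency, k=2):
    """Return (row, col) coordinates of the k most salient cells in a 2-D grid."""
    cells = [(v, r, c) for r, row in enumerate(saliency) for c, v in enumerate(row)]
    cells.sort(reverse=True)  # highest saliency first
    return [(r, c) for _, r, c in cells[:k]]

def crop(image, center, size):
    """Extract a size x size patch centred (clamped to bounds) at `center`."""
    r = min(max(center[0] - size // 2, 0), len(image) - size)
    c = min(max(center[1] - size // 2, 0), len(image[0]) - size)
    return [row[c:c + size] for row in image[r:r + size]]

# A tiny low-resolution saliency map from the hypothetical global pass.
saliency = [[0.1, 0.9, 0.2],
            [0.3, 0.8, 0.1],
            [0.0, 0.2, 0.4]]
print(top_k_regions(saliency, k=2))  # [(0, 1), (1, 1)]
```

The selected coordinates would then be mapped back to the full-resolution image, and `crop` would feed each patch to the local network.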

Prediction of Total Knee Replacement and Diagnosis of Osteoarthritis by Using Deep Learning on Knee Radiographs: Data from the Osteoarthritis Initiative

Leung, Kevin; Zhang, Bofei; Tan, Jimin; Shen, Yiqiu; Geras, Krzysztof J; Babb, James S; Cho, Kyunghyun; Chang, Gregory; Deniz, Cem M
Background: The methods for assessing knee osteoarthritis (OA) do not provide enough comprehensive information to make robust and accurate outcome predictions. Purpose: To develop a deep learning (DL) prediction model for risk of OA progression by using knee radiographs in patients who underwent total knee replacement (TKR) and matched control patients who did not undergo TKR. Materials and Methods: In this retrospective analysis that used data from the OA Initiative, a DL model on knee radiographs was developed to predict both the likelihood of a patient undergoing TKR within 9 years and Kellgren-Lawrence (KL) grade. Study participants included a case-control matched subcohort of patients aged 45 to 79 years. Patients were matched to control patients according to age, sex, ethnicity, and body mass index. The proposed model used a transfer learning approach based on the ResNet34 architecture with sevenfold nested cross-validation. Receiver operating characteristic curve analysis and conditional logistic regression assessed model performance for predicting probability and risk of TKR compared with clinical observations and two binary outcome prediction models on the basis of radiographic readings: KL grade and OA Research Society International (OARSI) grade. Results: Evaluated were 728 participants including 324 patients (mean age, 64 years ± 8 [standard deviation]; 222 women) and 324 control patients (mean age, 64 years ± 8; 222 women). The prediction model based on DL achieved an area under the receiver operating characteristic curve (AUC) of 0.87 (95% confidence interval [CI]: 0.85, 0.90), outperforming a baseline prediction model by using KL grade with an AUC of 0.74 (95% CI: 0.71, 0.77; P < .001). The risk for TKR increased with probability that a person will undergo TKR from the DL model (odds ratio [OR], 7.7; 95% CI: 2.3, 25; P < .001), KL grade (OR, 1.92; 95% CI: 1.17, 3.13; P = .009), and OARSI grade (OR, 1.20; 95% CI: 0.41, 3.50; P = .73).
Conclusion: The proposed deep learning model better predicted risk of total knee replacement in osteoarthritis than did binary outcome models by using standard grading systems. © RSNA, 2020. Online supplemental material is available for this article. See also the editorial by Richardson in this issue.
PMID: 32573386
ISSN: 1527-1315
CID: 4492992
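
The abstract mentions sevenfold nested cross-validation: an outer loop whose held-out fold is used only for testing, and an inner loop over the remaining folds for model selection. A small sketch of how such splits can be enumerated (round-robin fold assignment is an assumption here; the study's case-control matching would constrain the real splits):

```python
def nested_cv_splits(n_samples, n_folds=7):
    """Yield (train_idx, val_idx, test_idx) for each outer/inner fold pair."""
    folds = [list(range(f, n_samples, n_folds)) for f in range(n_folds)]
    for outer in range(n_folds):
        test_idx = folds[outer]                       # outer held-out fold
        inner = [f for i, f in enumerate(folds) if i != outer]
        for v in range(len(inner)):
            val_idx = inner[v]                        # inner validation fold
            train_idx = [i for j, f in enumerate(inner) if j != v for i in f]
            yield train_idx, val_idx, test_idx

splits = list(nested_cv_splits(14, n_folds=7))
print(len(splits))  # 7 outer folds x 6 inner folds = 42 splits
```

Each test fold is never seen by the inner loop that selects the model, which is what makes the outer performance estimate unbiased.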

Deep Neural Networks Improve Radiologists' Performance in Breast Cancer Screening

Wu, Nan; Phang, Jason; Park, Jungkyu; Shen, Yiqiu; Huang, Zhe; Zorin, Masha; Jastrzebski, Stanislaw; Fevry, Thibault; Katsnelson, Joe; Kim, Eric; Wolfson, Stacey; Parikh, Ujas; Gaddam, Sushma; Lin, Leng Leng Young; Ho, Kara; Weinstein, Joshua D; Reig, Beatriu; Gao, Yiming; Toth, Hildegard; Pysarenko, Kristine; Lewin, Alana; Lee, Jiyon; Airola, Krystal; Mema, Eralda; Chung, Stephanie; Hwang, Esther; Samreen, Naziya; Kim, S Gene; Heacock, Laura; Moy, Linda; Cho, Kyunghyun; Geras, Krzysztof J
We present a deep convolutional neural network for breast cancer screening exam classification, trained and evaluated on over 200,000 exams (over 1,000,000 images). Our network achieves an AUC of 0.895 in predicting the presence of cancer in the breast, when tested on the screening population. We attribute the high accuracy to a few technical advances. (i) Our network's novel two-stage architecture and training procedure, which allows us to use a high-capacity patch-level network to learn from pixel-level labels alongside a network learning from macroscopic breast-level labels. (ii) A custom ResNet-based network used as a building block of our model, whose balance of depth and width is optimized for high-resolution medical images. (iii) Pretraining the network on screening BI-RADS classification, a related task with more noisy labels. (iv) Combining multiple input views in an optimal way among a number of possible choices. To validate our model, we conducted a reader study with 14 readers, each reading 720 screening mammogram exams, and show that our model is as accurate as experienced radiologists when presented with the same data. We also show that a hybrid model, averaging the probability of malignancy predicted by a radiologist with a prediction of our neural network, is more accurate than either of the two separately. To further understand our results, we conduct a thorough analysis of our network's performance on different subpopulations of the screening population, the model's design, training procedure, errors, and properties of its internal representations. Our best models are publicly available at https://github.com/nyukat/breast_cancer_classifier.
PMID: 31603772
ISSN: 1558-254x
CID: 4130202
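
A screening exam comprises four standard views (left/right craniocaudal and mediolateral oblique), and the abstract's hybrid model averages a radiologist's probability of malignancy with the network's. A toy sketch of both steps (the simple per-breast view-averaging rule is an assumption for illustration; the paper compares several view-combination strategies):

```python
def per_breast_scores(view_scores):
    """Average the two views of each breast into one malignancy score."""
    left = (view_scores["L-CC"] + view_scores["L-MLO"]) / 2
    right = (view_scores["R-CC"] + view_scores["R-MLO"]) / 2
    return {"left": left, "right": right}

def hybrid(model_prob, radiologist_prob):
    """Hybrid prediction as described in the abstract: average the two probabilities."""
    return (model_prob + radiologist_prob) / 2

# Toy per-view malignancy scores for one exam.
exam = {"L-CC": 0.10, "L-MLO": 0.30, "R-CC": 0.70, "R-MLO": 0.80}
print(per_breast_scores(exam))  # {'left': 0.2, 'right': 0.75}
print(hybrid(0.75, 0.55))       # 0.65
```

The averaging in `hybrid` is the combination rule the abstract itself states; everything else in the sketch is illustrative.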

Evaluation of Combined Artificial Intelligence and Radiologist Assessment to Interpret Screening Mammograms

Schaffter, Thomas; Buist, Diana S M; Lee, Christoph I; Nikulin, Yaroslav; Ribli, Dezso; Guan, Yuanfang; Lotter, William; Jie, Zequn; Du, Hao; Wang, Sijia; Feng, Jiashi; Feng, Mengling; Kim, Hyo-Eun; Albiol, Francisco; Albiol, Alberto; Morrell, Stephen; Wojna, Zbigniew; Ahsen, Mehmet Eren; Asif, Umar; Jimeno Yepes, Antonio; Yohanandan, Shivanthan; Rabinovici-Cohen, Simona; Yi, Darvin; Hoff, Bruce; Yu, Thomas; Chaibub Neto, Elias; Rubin, Daniel L; Lindholm, Peter; Margolies, Laurie R; McBride, Russell Bailey; Rothstein, Joseph H; Sieh, Weiva; Ben-Ari, Rami; Harrer, Stefan; Trister, Andrew; Friend, Stephen; Norman, Thea; Sahiner, Berkman; Strand, Fredrik; Guinney, Justin; Stolovitzky, Gustavo; Mackey, Lester; Cahoon, Joyce; Shen, Li; Sohn, Jae Ho; Trivedi, Hari; Shen, Yiqiu; Buturovic, Ljubomir; Pereira, Jose Costa; Cardoso, Jaime S; Castro, Eduardo; Kalleberg, Karl Trygve; Pelka, Obioma; Nedjar, Imane; Geras, Krzysztof J; Nensa, Felix; Goan, Ethan; Koitka, Sven; Caballero, Luis; Cox, David D; Krishnaswamy, Pavitra; Pandey, Gaurav; Friedrich, Christoph M; Perrin, Dimitri; Fookes, Clinton; Shi, Bibo; Cardoso Negrie, Gerard; Kawczynski, Michael; Cho, Kyunghyun; Khoo, Can Son; Lo, Joseph Y; Sorensen, A Gregory; Jung, Hwejin
Importance: Mammography screening currently relies on subjective human interpretation. Artificial intelligence (AI) advances could be used to increase mammography screening accuracy by reducing missed cancers and false positives. Objective: To evaluate whether AI can overcome human mammography interpretation limitations with a rigorous, unbiased evaluation of machine learning algorithms. Design, Setting, and Participants: In this diagnostic accuracy study conducted between September 2016 and November 2017, an international, crowdsourced challenge was hosted to foster AI algorithm development focused on interpreting screening mammography. More than 1100 participants comprising 126 teams from 44 countries participated. Analysis began November 18, 2016. Main Outcomes and Measures: Algorithms used images alone (challenge 1) or combined images, previous examinations (if available), and clinical and demographic risk factor data (challenge 2) and output a score that translated to cancer yes/no within 12 months. Algorithm accuracy for breast cancer detection was evaluated using area under the curve and algorithm specificity compared with radiologists' specificity with radiologists' sensitivity set at 85.9% (United States) and 83.9% (Sweden). An ensemble method aggregating top-performing AI algorithms and radiologists' recall assessment was developed and evaluated. Results: Overall, 144 231 screening mammograms from 85 580 US women (952 cancer positive ≤12 months from screening) were used for algorithm training and validation. A second independent validation cohort included 166 578 examinations from 68 008 Swedish women (780 cancer positive).
The top-performing algorithm achieved an area under the curve of 0.858 (United States) and 0.903 (Sweden) and 66.2% (United States) and 81.2% (Sweden) specificity at the radiologists' sensitivity, lower than community-practice radiologists' specificity of 90.5% (United States) and 98.5% (Sweden). Combining top-performing algorithms and US radiologist assessments resulted in a higher area under the curve of 0.942 and achieved a significantly improved specificity (92.0%) at the same sensitivity. Conclusions and Relevance: While no single AI algorithm outperformed radiologists, an ensemble of AI algorithms combined with radiologist assessment in a single-reader screening environment improved overall accuracy. This study underscores the potential of using machine learning methods for enhancing mammography screening interpretation.
PMID: 32119094
ISSN: 2574-3805
CID: 4340492
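
The evaluation above fixes the operating point at the radiologists' sensitivity and then compares specificity there. A small sketch of that comparison on toy data (threshold selection by sorting scores is illustrative; the study's estimates come from large screening cohorts):

```python
import math

def specificity_at_sensitivity(labels, scores, target_sens):
    """Specificity at the threshold where sensitivity first reaches target_sens."""
    pos = sorted((s for y, s in zip(labels, scores) if y == 1), reverse=True)
    neg = [s for y, s in zip(labels, scores) if y == 0]
    k = math.ceil(target_sens * len(pos))  # positives that must be called positive
    threshold = pos[k - 1]                 # lowest score still flagged as positive
    return sum(s < threshold for s in neg) / len(neg)

# Toy data: label 1 = cancer within 12 months; require 75% sensitivity.
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.90, 0.80, 0.70, 0.20, 0.75, 0.30, 0.10, 0.05]
print(specificity_at_sensitivity(labels, scores, 0.75))  # 0.75
```

Reporting specificity at a matched sensitivity is what makes the algorithm-versus-radiologist comparison in the abstract meaningful, since either metric alone can be traded off against the other.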

Globally-Aware Multiple Instance Classifier for Breast Cancer Screening

Shen, Yiqiu; Wu, Nan; Phang, Jason; Park, Jungkyu; Kim, Gene; Moy, Linda; Cho, Kyunghyun; Geras, Krzysztof J
Deep learning models designed for visual classification tasks on natural images have become prevalent in medical image analysis. However, medical images differ from typical natural images in many ways, such as significantly higher resolutions and smaller regions of interest. Moreover, both the global structure and local details play important roles in medical image analysis tasks. To address these unique properties of medical images, we propose a neural network that is able to classify breast cancer lesions utilizing information from both a global saliency map and multiple local patches. The proposed model outperforms the ResNet-based baseline and achieves radiologist-level performance in the interpretation of screening mammography. Although our model is trained only with image-level labels, it is able to generate pixel-level saliency maps that provide localization of possible malignant findings.
PMCID: PMC7060084
PMID: 32149282
ISSN: n/a
CID: 4349612
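
The classifier described above produces an image-level prediction from a saliency map despite training only on image-level labels. One common multiple-instance aggregation consistent with that description is top-fraction pooling; the pooling fraction and the map values below are invented for the sketch and are not the authors' exact aggregation:

```python
import math

def top_percent_pool(saliency_values, fraction=0.25):
    """Average the top `fraction` of saliency values into one image-level score."""
    k = max(1, math.ceil(fraction * len(saliency_values)))
    top = sorted(saliency_values, reverse=True)[:k]
    return sum(top) / k

# Flattened toy saliency map for one image.
flat_map = [0.05, 0.10, 0.90, 0.80, 0.15, 0.20, 0.85, 0.05]
print(top_percent_pool(flat_map, fraction=0.25))  # 0.875 (mean of 0.90 and 0.85)
```

Pooling only the most salient fraction lets a few strongly suspicious regions drive the image-level score, which is the intuition behind multiple-instance classification of large images with small regions of interest.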

Breast density classification with deep convolutional neural networks

Wu, Nan; Geras, Krzysztof J; Shen, Yiqiu; Su, Jingyi; Kim, S Gene; Kim, Eric; Wolfson, Stacey; Moy, Linda; Cho, Kyunghyun
ORIGINAL:0017085
ISSN: 2379-190x
CID: 5573552