Searched for: person:wegenm01 in-biosketch:true
Total Results: 4


An efficient deep neural network to classify large 3D images with small objects

Park, Jungkyu; Chledowski, Jakub; Jastrzebski, Stanislaw; Witowski, Jan; Xu, Yanqi; Du, Linda; Gaddam, Sushma; Kim, Eric; Lewin, Alana; Parikh, Ujas; Plaunova, Anastasia; Chen, Sardius; Millet, Alexandra; Park, James; Pysarenko, Kristine; Patel, Shalin; Goldberg, Julia; Wegener, Melanie; Moy, Linda; Heacock, Laura; Reig, Beatriu; Geras, Krzysztof J
3D imaging enables accurate diagnosis by providing spatial information about organ anatomy. However, using 3D images to train AI models is computationally challenging because they consist of 10x or 100x more pixels than their 2D counterparts. To be trained with high-resolution 3D images, convolutional neural networks resort to downsampling them or projecting them to 2D. We propose an effective alternative, a neural network that enables efficient classification of full-resolution 3D medical images. Compared to off-the-shelf convolutional neural networks, our network, 3D Globally-Aware Multiple Instance Classifier (3D-GMIC), uses 77.98%-90.05% less GPU memory and 91.23%-96.02% less computation. While it is trained only with image-level labels, without segmentation labels, it explains its predictions by providing pixel-level saliency maps. On a dataset collected at NYU Langone Health, including 85,526 patients with full-field 2D mammography (FFDM), synthetic 2D mammography, and 3D mammography, 3D-GMIC achieves an AUC of 0.831 (95% CI: 0.769-0.887) in classifying breasts with malignant findings using 3D mammography. This is comparable to the performance of GMIC on FFDM (0.816, 95% CI: 0.737-0.878) and synthetic 2D (0.826, 95% CI: 0.754-0.884), which demonstrates that 3D-GMIC successfully classified large 3D images despite focusing computation on a smaller percentage of its input compared to GMIC. Therefore, 3D-GMIC identifies and utilizes extremely small regions of interest from 3D images consisting of hundreds of millions of pixels, dramatically reducing associated computational challenges. 3D-GMIC generalizes well to BCS-DBT, an external dataset from Duke University Hospital, achieving an AUC of 0.848 (95% CI: 0.798-0.896).
PMID: 37590109
ISSN: 1558-254X
CID: 5588742
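The two-stage idea the abstract describes, a cheap global pass over the full volume that flags a few salient regions, followed by full-resolution analysis of only those regions, can be sketched in miniature. This is an illustrative toy, not the authors' 3D-GMIC implementation: the block-mean and block-max scoring functions and the `classify_3d_volume` name are placeholders chosen only to show the control flow.

```python
import numpy as np

def classify_3d_volume(volume, patch=8, k=2):
    """Toy sketch of a GMIC-style two-stage classifier.

    Stage 1: a cheap 'global' pass scores coarse blocks of the full
    volume (stand-in for the low-capacity global network's saliency map).
    Stage 2: only the k highest-scoring blocks are examined in detail
    (stand-in for the high-capacity local network), and their scores
    are aggregated into one image-level prediction in [0, 1].
    """
    d, h, w = volume.shape
    # Stage 1: block-wise mean intensity as a stand-in saliency score.
    blocks = {}
    for z in range(0, d, patch):
        for y in range(0, h, patch):
            for x in range(0, w, patch):
                blk = volume[z:z + patch, y:y + patch, x:x + patch]
                blocks[(z, y, x)] = blk.mean()
    # Stage 2: spend 'expensive' computation only on the top-k blocks.
    top = sorted(blocks, key=blocks.get, reverse=True)[:k]
    local_scores = [volume[z:z + patch, y:y + patch, x:x + patch].max()
                    for (z, y, x) in top]
    # Aggregate the local evidence into a single sigmoid prediction.
    return float(1 / (1 + np.exp(-np.mean(local_scores))))
```

The computational saving comes from stage 2: however large the volume, only `k` small blocks are processed at full resolution, which mirrors the paper's point that a tiny fraction of the input carries the diagnostic signal.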

Lessons from the first DBTex Challenge

Park, Jungkyu; Shoshan, Yoel; Marti, Robert; Gómez del Campo, Pablo; Ratner, Vadim; Khapun, Daniel; Zlotnick, Aviad; Barkan, Ella; Gilboa-Solomon, Flora; Chłędowski, Jakub; Witowski, Jan; Millet, Alexandra; Kim, Eric; Lewin, Alana; Pysarenko, Kristine; Chen, Sardius; Goldberg, Julia; Patel, Shalin; Plaunova, Anastasia; Wegener, Melanie; Wolfson, Stacey; Lee, Jiyon; Hava, Sana; Murthy, Sindhoora; Du, Linda; Gaddam, Sushma; Parikh, Ujas; Heacock, Laura; Moy, Linda; Reig, Beatriu; Rosen-Zvi, Michal; Geras, Krzysztof J.
SCOPUS:85111105102
ISSN: 2522-5839
CID: 5000532

Can an Artificial Intelligence Decision Aid Decrease False-Positive Breast Biopsies?

Heller, Samantha L; Wegener, Melanie; Babb, James S; Gao, Yiming
This study aimed to evaluate the effect of an artificial intelligence (AI) support system on breast ultrasound diagnostic accuracy. In this Health Insurance Portability and Accountability Act-compliant, institutional review board-approved retrospective study, 200 lesions (155 benign, 45 malignant) were randomly selected from consecutive ultrasound-guided biopsies (June 2017-January 2019). Two readers, blinded to clinical history and pathology, evaluated lesions with and without a Food and Drug Administration-approved AI software. Lesion features, Breast Imaging Reporting and Data System (BI-RADS) rating (1-5), reader confidence level (1-5), and AI BI-RADS equivalent (1-5) were recorded. Statistical analysis was performed for diagnostic accuracy, negative predictive value, positive predictive value (PPV), sensitivity, and specificity of reader versus AI BI-RADS. Generalized estimating equation analysis was used for reader versus AI accuracy regarding lesion features and AI impact on low-confidence score lesions. The effect of AI on the false-positive biopsy rate was determined. Statistical tests were conducted at a 2-sided 5% significance level. There was no significant difference in accuracy (73% vs 69.8%), negative predictive value (100% vs 98.5%), PPV (45.5% vs 42.4%), sensitivity (100% vs 96.7%), or specificity (65.2% vs 61.9%; P = 0.118-0.409) for AI versus pooled reader assessment. AI was more accurate than readers for irregular shape (74.1% vs 57.4%, P = 0.002) and less accurate for round shape (26.5% vs 50.0%, P = 0.049). AI improved diagnostic accuracy for reader-rated low-confidence lesions, with increased PPV (24.7% AI vs 19.3%, P = 0.004) and specificity (57.8% vs 44.6%, P = 0.008). An AI decision support aid may help improve sonographic diagnostic accuracy, particularly in cases with low reader confidence, thereby decreasing false-positive biopsies.
PMID: 33394994
ISSN: 1536-0253
CID: 4738582
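The metrics compared in the abstract above (sensitivity, specificity, PPV, NPV, accuracy) are all derived from a 2x2 confusion matrix. A minimal reference sketch; the counts used in the test are illustrative, chosen only to be roughly consistent with the AI figures reported (100% sensitivity, 65.2% specificity on 45 malignant and 155 benign lesions), not taken from the study's actual tables.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic-accuracy metrics from a 2x2 confusion matrix:
    tp/fp/tn/fn = true/false positives and negatives."""
    return {
        "sensitivity": tp / (tp + fn),            # true-positive rate
        "specificity": tn / (tn + fp),            # true-negative rate
        "ppv": tp / (tp + fp),                    # positive predictive value
        "npv": tn / (tn + fn),                    # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }
```

Note why PPV can sit so far below sensitivity in screening-style data: with many more benign than malignant lesions, even a modest false-positive rate contributes a large absolute number of false positives to the PPV denominator.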

The Relative Value Unit: History, Current Use, and Controversies

Baadh, Amanjit; Peterkin, Yuri; Wegener, Melanie; Flug, Jonathan; Katz, Douglas; Hoffmann, Jason C
The relative value unit (RVU) is an important measure of the work performed by physicians and is currently used in the United States to calculate physician reimbursement. An understanding of radiology RVUs and Current Procedural Terminology codes is important for radiologists, trainees, radiology managers, and administrators, as this knowledge helps them better understand their current productivity and reimbursement, as well as controversies regarding reimbursement, and permits them to adapt to reimbursement changes that may occur in the future. This article reviews the components of the RVU and how radiology payment is calculated, highlights trends in RVUs and the resultant payment for diagnostic and therapeutic imaging examinations, and discusses current issues involving RVU and Current Procedural Terminology codes.
PMID: 26545579
ISSN: 1535-6302
CID: 3001902
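The payment calculation this article reviews follows the Medicare Physician Fee Schedule formula: each of the three RVU components (physician work, practice expense, and malpractice) is adjusted by its geographic practice cost index (GPCI), the adjusted components are summed, and the total is multiplied by an annual conversion factor. A minimal sketch of that arithmetic; the default GPCI values and the conversion factor below are placeholders for illustration, not current CMS figures, which change yearly.

```python
def medicare_payment(work_rvu, pe_rvu, mp_rvu,
                     gpci_work=1.0, gpci_pe=1.0, gpci_mp=1.0,
                     conversion_factor=32.0):
    """Medicare Physician Fee Schedule payment formula:
    payment = (wRVU*GPCIw + peRVU*GPCIpe + mpRVU*GPCImp) * CF.
    GPCI defaults and the conversion factor are placeholder values.
    """
    total_rvu = (work_rvu * gpci_work
                 + pe_rvu * gpci_pe
                 + mp_rvu * gpci_mp)
    return round(total_rvu * conversion_factor, 2)
```

Because the conversion factor is a single national multiplier, year-over-year payment changes can occur even when a procedure's RVU values are untouched, which is one of the controversies the article discusses.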