Lessons from the first DBTex Challenge

Park, Jungkyu; Shoshan, Yoel; Marti, Robert; Gómez del Campo, Pablo; Ratner, Vadim; Khapun, Daniel; Zlotnick, Aviad; Barkan, Ella; Gilboa-Solomon, Flora; Chłędowski, Jakub; Witowski, Jan; Millet, Alexandra; Kim, Eric; Lewin, Alana; Pysarenko, Kristine; Chen, Sardius; Goldberg, Julia; Patel, Shalin; Plaunova, Anastasia; Wegener, Melanie; Wolfson, Stacey; Lee, Jiyon; Hava, Sana; Murthy, Sindhoora; Du, Linda; Gaddam, Sushma; Parikh, Ujas; Heacock, Laura; Moy, Linda; Reig, Beatriu; Rosen-Zvi, Michal; Geras, Krzysztof J.
ISSN: 2522-5839
CID: 5000532

Can an Artificial Intelligence Decision Aid Decrease False-Positive Breast Biopsies?

Heller, Samantha L; Wegener, Melanie; Babb, James S; Gao, Yiming
ABSTRACT/UNASSIGNED: This study aimed to evaluate the effect of an artificial intelligence (AI) support system on breast ultrasound diagnostic accuracy. In this Health Insurance Portability and Accountability Act-compliant, institutional review board-approved retrospective study, 200 lesions (155 benign, 45 malignant) were randomly selected from consecutive ultrasound-guided biopsies (June 2017-January 2019). Two readers, blinded to clinical history and pathology, evaluated lesions with and without a Food and Drug Administration-approved AI software. Lesion features, Breast Imaging Reporting and Data System (BI-RADS) rating (1-5), reader confidence level (1-5), and AI BI-RADS equivalent (1-5) were recorded. Statistical analysis was performed for diagnostic accuracy, negative predictive value, positive predictive value (PPV), sensitivity, and specificity of reader versus AI BI-RADS. Generalized estimating equation analysis was used for reader versus AI accuracy regarding lesion features and for AI impact on low-confidence score lesions. The effect of AI on the false-positive biopsy rate was determined. Statistical tests were conducted at a 2-sided 5% significance level. There was no significant difference in accuracy (73% vs 69.8%), negative predictive value (100% vs 98.5%), PPV (45.5% vs 42.4%), sensitivity (100% vs 96.7%), or specificity (65.2% vs 61.9%; P = 0.118-0.409) for AI versus pooled reader assessment. AI was more accurate than readers for irregular-shaped lesions (74.1% vs 57.4%, P = 0.002) and less accurate for round-shaped lesions (26.5% vs 50.0%, P = 0.049). AI improved diagnostic accuracy for reader-rated low-confidence lesions, with increased PPV (24.7% AI vs 19.3%, P = 0.004) and specificity (57.8% vs 44.6%, P = 0.008). An AI decision support aid may help improve sonographic diagnostic accuracy, particularly in cases with low reader confidence, thereby decreasing false-positive biopsies.
PMID: 33394994
ISSN: 1536-0253
CID: 4738582

The Relative Value Unit: History, Current Use, and Controversies

Baadh, Amanjit; Peterkin, Yuri; Wegener, Melanie; Flug, Jonathan; Katz, Douglas; Hoffmann, Jason C
The relative value unit (RVU) is an important tool for measuring the work performed by physicians and is currently used in the United States to calculate physician reimbursement. An understanding of radiology RVUs and Current Procedural Terminology (CPT) codes is important for radiologists, trainees, radiology managers, and administrators, as this knowledge helps them better understand their current productivity and reimbursement, as well as related controversies, and to adapt to reimbursement changes that may occur in the future. This article reviews the components of the RVU and how radiology payment is calculated, highlights trends in RVUs and the resultant payment for diagnostic and therapeutic imaging examinations, and discusses current issues involving RVU and CPT codes.
PMID: 26545579
ISSN: 1535-6302
CID: 3001902