Ethical considerations of preclinical models in imaging research [Letter]
Garza-Villarreal, Eduardo A; Moy, Linda; Mao, Hui; Hussain, Tarique; Lupo, Janine M; Fleischer, Candace C; Scott, Andrew D
PMID: 37984415
ISSN: 1522-2594
CID: 5608302
Screening mammographic performance by race and age in the National Mammography Database: 29,479,665 screening mammograms from 13,181,241 women
Lee, Cindy S; Goldman, Lenka; Grimm, Lars J; Liu, Ivy Xinyue; Simanowith, Michael; Rosenberg, Robert; Zuley, Margarita; Moy, Linda
PURPOSE/OBJECTIVE:Large-scale studies comparing the performance of screening mammography among women of different races are lacking. This study compares screening performance metrics across racial and age groups in the National Mammography Database (NMD). CONCLUSIONS:African American women have poorer outcomes from screening mammography, with a higher recall rate (RR) and a lower cancer detection rate (CDR), compared with White women and with all women in the NMD. This racial disparity can be partly explained by a higher rate of African American women lost to follow-up.
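As context for the performance metrics named in the conclusions above (recall rate and cancer detection rate), the following minimal sketch shows how such screening metrics are conventionally computed. The counts are hypothetical and are not NMD data.

def screening_metrics(n_screens, n_recalls, n_screen_detected_cancers):
    """Return recall rate (%), cancer detection rate (per 1000 screens), and PPV1 (%)."""
    recall_rate = 100.0 * n_recalls / n_screens
    cdr_per_1000 = 1000.0 * n_screen_detected_cancers / n_screens
    ppv1 = 100.0 * n_screen_detected_cancers / n_recalls  # positive predictive value of recall
    return recall_rate, cdr_per_1000, ppv1

# Hypothetical counts for one subgroup (illustrative only):
rr, cdr, ppv1 = screening_metrics(n_screens=100_000, n_recalls=9_500,
                                  n_screen_detected_cancers=400)
print(f"Recall rate {rr:.1f}%, CDR {cdr:.1f}/1000, PPV1 {ppv1:.1f}%")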
PMID: 37897646
ISSN: 1573-7217
CID: 5624292
Evaluation of Diffusion Tensor Imaging Analysis Along the Perivascular Space as a Marker of the Glymphatic System [Editorial]
Haller, Sven; Moy, Linda; Anzai, Yoshimi
PMID: 38289215
ISSN: 1527-1315
CID: 5627472
Breast cancer outcomes based on method of detection in community-based breast cancer registry
Bennett, Debbie Lee; Winter, Andrea Marie; Billadello, Laura; Lowdermilk, Mary Catherine; Doherty, Christina Michelle; Kazmi, Sakina; Laster, Sydney; Al-Hammadi, Noor; Hardy, Anna; Kopans, Daniel B; Moy, Linda
PURPOSE/OBJECTIVE:The impact of opportunistic screening mammography in the United States is difficult to quantify, partly because national registries do not record method of detection (MOD). This study sought to determine the feasibility of MOD collection in a multicenter community registry and to compare outcomes and characteristics of breast cancer based on MOD. METHODS:We conducted a retrospective study of breast cancer patients from a multicenter tumor registry in Missouri from January 2004 through December 2018. Registry data were extracted by certified tumor registrars and included MOD, clinicopathologic information, and treatment. MOD was assigned as screen-detected or clinically detected. Data were analyzed at the patient level. Chi-squared tests were used for categorical variable comparisons, and the Mann-Whitney U test was used for numerical variable comparisons. RESULTS:A total of 5351 women (median age, 63 years; interquartile range, 53-73 years) were included. Screen-detected cancers were smaller than clinically detected cancers (median size, 12 mm vs. 25 mm; P < .001) and more likely to be node-negative (81% vs. 54%; P < .001), lower grade (P < .001), and lower stage (P < .001). Screen-detected cancers were more likely to be treated with lumpectomy rather than mastectomy (73% vs. 41%; P < .001) and less likely to require chemotherapy (24% vs. 52%; P < .001). Overall survival for patients with invasive breast cancer was higher for screen-detected cancers (89% vs. 74%; P < .0001). CONCLUSION/CONCLUSIONS:MOD can be routinely collected and linked to breast cancer outcomes through tumor registries, with demonstration of significant differences in outcomes and characteristics of breast cancers based on MOD. Routine inclusion of MOD in US tumor registries would help quantify the impact of opportunistic screening mammography in the US.
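A minimal sketch of the kind of group comparisons described above (chi-squared tests for categorical variables, the Mann-Whitney U test for numerical variables), using hypothetical numbers rather than registry data:

import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu

# Hypothetical 2x2 table: nodal status by method of detection (not registry data)
table = np.array([[810, 190],    # screen-detected: node-negative, node-positive
                  [540, 460]])   # clinically detected
chi2, p_cat, dof, _ = chi2_contingency(table)

# Hypothetical tumor sizes (mm) by method of detection
rng = np.random.default_rng(0)
screen_sizes = rng.gamma(shape=3.0, scale=4.0, size=500)
clinical_sizes = rng.gamma(shape=3.0, scale=8.0, size=500)
_, p_num = mannwhitneyu(screen_sizes, clinical_sizes, alternative="two-sided")

print(f"chi-squared p = {p_cat:.3g}; Mann-Whitney U p = {p_num:.3g}")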
PMID: 37878149
ISSN: 1573-7217
CID: 5626432
An efficient deep neural network to classify large 3D images with small objects
Park, Jungkyu; Chledowski, Jakub; Jastrzebski, Stanislaw; Witowski, Jan; Xu, Yanqi; Du, Linda; Gaddam, Sushma; Kim, Eric; Lewin, Alana; Parikh, Ujas; Plaunova, Anastasia; Chen, Sardius; Millet, Alexandra; Park, James; Pysarenko, Kristine; Patel, Shalin; Goldberg, Julia; Wegener, Melanie; Moy, Linda; Heacock, Laura; Reig, Beatriu; Geras, Krzysztof J
3D imaging enables accurate diagnosis by providing spatial information about organ anatomy. However, using 3D images to train AI models is computationally challenging because they contain 10 to 100 times more pixels than their 2D counterparts. To train on high-resolution 3D images, convolutional neural networks typically resort to downsampling them or projecting them to 2D. We propose an effective alternative: a neural network that enables efficient classification of full-resolution 3D medical images. Compared to off-the-shelf convolutional neural networks, our network, the 3D Globally-Aware Multiple Instance Classifier (3D-GMIC), uses 77.98%-90.05% less GPU memory and 91.23%-96.02% less computation. While it is trained only with image-level labels, without segmentation labels, it explains its predictions by providing pixel-level saliency maps. On a dataset collected at NYU Langone Health, including 85,526 patients with full-field 2D mammography (FFDM), synthetic 2D mammography, and 3D mammography, 3D-GMIC achieves an AUC of 0.831 (95% CI: 0.769-0.887) in classifying breasts with malignant findings using 3D mammography. This is comparable to the performance of GMIC on FFDM (0.816, 95% CI: 0.737-0.878) and synthetic 2D (0.826, 95% CI: 0.754-0.884), which demonstrates that 3D-GMIC successfully classified large 3D images despite focusing computation on a smaller percentage of its input compared to GMIC. Therefore, 3D-GMIC identifies and utilizes extremely small regions of interest from 3D images consisting of hundreds of millions of pixels, dramatically reducing the associated computational challenges. 3D-GMIC generalizes well to BCS-DBT, an external dataset from Duke University Hospital, achieving an AUC of 0.848 (95% CI: 0.798-0.896).
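The following is a minimal, simplified sketch of the general idea behind this kind of model: a coarse saliency map computed on a downsampled volume guides the selection of a few full-resolution patches, whose scores are then aggregated into an image-level prediction. It is not the authors' 3D-GMIC implementation; the module shapes, top-k selection, and mean aggregation are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ToySaliencyMIL(nn.Module):
    """Toy saliency-guided multiple-instance classifier for one 3D volume (batch size 1)."""
    def __init__(self, k=4, patch=32, scale=4):
        super().__init__()
        self.k, self.patch, self.scale = k, patch, scale
        # Global branch: coarse saliency map from a heavily downsampled volume.
        self.global_net = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.Conv3d(8, 1, 1))
        # Local branch: scores each selected full-resolution patch.
        self.local_net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(16, 1))

    def forward(self, volume):                                  # volume: (1, 1, D, H, W)
        small = F.interpolate(volume, scale_factor=1 / self.scale,
                              mode="trilinear", align_corners=False)
        saliency = self.global_net(small)[0, 0]                 # coarse (d, h, w) map
        flat_idx = torch.topk(saliency.flatten(), self.k).indices
        d, h, w = saliency.shape                                # map flat indices back to 3D
        zs, ys, xs = flat_idx // (h * w), (flat_idx // w) % h, flat_idx % w
        patch_logits = []
        for z, y, x in zip((zs * self.scale).tolist(), (ys * self.scale).tolist(),
                           (xs * self.scale).tolist()):
            p = volume[:, :, z:z + self.patch, y:y + self.patch, x:x + self.patch]
            p = F.pad(p, (0, self.patch - p.shape[-1], 0, self.patch - p.shape[-2],
                          0, self.patch - p.shape[-3]))         # pad patches at the border
            patch_logits.append(self.local_net(p))
        # Aggregate the k patch scores into one image-level probability (simple mean here).
        return torch.sigmoid(torch.stack(patch_logits).mean())

model = ToySaliencyMIL()
print(model(torch.randn(1, 1, 64, 256, 256)))   # hypothetical volume; real DBT volumes are far larger

The compute saving comes from running the expensive local branch on only k small patches rather than the full volume, which is the property the abstract highlights.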
PMID: 37590109
ISSN: 1558-254x
CID: 5588742
Problem-solving Breast MRI
Reig, Beatriu; Kim, Eric; Chhor, Chloe M; Moy, Linda; Lewin, Alana A; Heacock, Laura
Breast MRI has high sensitivity and negative predictive value, making it well suited to problem solving when other imaging modalities or physical examinations yield results that are inconclusive for the presence of breast cancer. Indications for problem-solving MRI include equivocal or uncertain imaging findings at mammography and/or US; suspicious nipple discharge or skin changes suspected to represent an abnormality when conventional imaging results are negative for cancer; Breast Imaging Reporting and Data System (BI-RADS) category 4 lesions that are not amenable to biopsy; and discordant radiologic-pathologic findings after biopsy. MRI should not precede or replace careful diagnostic workup with mammography and US and should not be used when a biopsy can be safely performed. The role of MRI in characterizing calcifications is controversial, and management of calcifications should depend on their mammographic appearance because ductal carcinoma in situ may not enhance on MR images. In addition, ductal carcinoma in situ detected solely with MRI is not associated with a higher likelihood of an upgrade to invasive cancer compared with ductal carcinoma in situ detected with other modalities. MRI for triage of high-risk lesions is a subject of ongoing investigation, with a possible future role for MRI in decreasing excisional biopsies. The accuracy of MRI is likely to increase with the use of advanced techniques such as deep learning, which will likely expand the indications for problem-solving MRI. ©RSNA, 2023 Quiz questions for this article are available in the supplemental material.
PMID: 37733618
ISSN: 1527-1323
CID: 5588732
AI-Enhanced PET and MR Imaging for Patients with Breast Cancer
Romeo, Valeria; Moy, Linda; Pinker, Katja
Clinical and surgical oncologists currently face new challenges in the management of patients with breast cancer, mainly related to the need for molecular and prognostic data. Recent technological advances in diagnostic imaging and informatics have led to the introduction of functional imaging modalities, such as hybrid PET/MR imaging, and of artificial intelligence (AI) software aimed at extracting quantitative radiomics data that may reflect tumor biology and behavior. In this article, the most recent applications of radiomics and AI to PET/MR imaging are described to address the new needs of clinical and surgical oncology.
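To make the phrase "quantitative radiomics data" concrete, the short sketch below computes a few first-order, radiomics-style intensity features from a hypothetical tumor region of interest. It is illustrative only and is not the software discussed in the article; real pipelines compute hundreds of standardized shape, intensity, and texture features.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
roi = rng.normal(loc=120.0, scale=15.0, size=(20, 20, 20))  # hypothetical intensity/SUV voxels

features = {
    "mean": float(roi.mean()),
    "std": float(roi.std()),
    "skewness": float(stats.skew(roi.ravel())),
    "kurtosis": float(stats.kurtosis(roi.ravel())),
    "entropy": float(stats.entropy(np.histogram(roi, bins=64)[0] + 1e-9)),
    "p10_p90_range": float(np.percentile(roi, 90) - np.percentile(roi, 10)),
}
print(features)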
PMID: 37336693
ISSN: 1879-9809
CID: 5542572
Breast Cancer Screening for Women at Higher-Than-Average Risk: Updated Recommendations From the ACR
Monticciolo, Debra L; Newell, Mary S; Moy, Linda; Lee, Cindy S; Destounis, Stamatia V
Early detection decreases breast cancer death. The ACR recommends annual screening beginning at age 40 for women of average risk and earlier and/or more intensive screening for women at higher-than-average risk. For most women at higher-than-average risk, the supplemental screening method of choice is breast MRI. Women with genetics-based increased risk, those with a calculated lifetime risk of 20% or more, and those exposed to chest radiation at young ages are recommended to undergo MRI surveillance starting at ages 25 to 30 and annual mammography (with a variable starting age between 25 and 40, depending on the type of risk). Mutation carriers can delay mammographic screening until age 40 if annual screening breast MRI is performed as recommended. Women diagnosed with breast cancer before age 50 or with personal histories of breast cancer and dense breasts should undergo annual supplemental breast MRI. Others with personal histories, and those with atypia at biopsy, should strongly consider MRI screening, especially if other risk factors are present. For women with dense breasts who desire supplemental screening, breast MRI is recommended. For those who qualify for but cannot undergo breast MRI, contrast-enhanced mammography or ultrasound could be considered. All women should undergo risk assessment by age 25, especially Black women and women of Ashkenazi Jewish heritage, so that those at higher-than-average risk can be identified and appropriate screening initiated.
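As a purely illustrative paraphrase of a few of the risk rules summarized above (not clinical software, and omitting many of the nuances in the full recommendations), a simple decision helper might look like the following; every threshold and flag name is an assumption made for the sketch.

def supplemental_mri_recommended(lifetime_risk_pct, genetic_mutation, chest_rt_young,
                                 personal_history, dense_breasts, dx_before_50):
    """Toy rule: return True if annual supplemental breast MRI would typically be advised."""
    high_risk = genetic_mutation or lifetime_risk_pct >= 20 or chest_rt_young
    history_based = dx_before_50 or (personal_history and dense_breasts)
    return high_risk or history_based

# Hypothetical patient: 22% calculated lifetime risk, no other risk factors
print(supplemental_mri_recommended(lifetime_risk_pct=22, genetic_mutation=False,
                                   chest_rt_young=False, personal_history=False,
                                   dense_breasts=True, dx_before_50=False))  # True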
PMID: 37150275
ISSN: 1558-349x
CID: 5544422
PACS-integrated machine learning breast density classifier: clinical validation
Lewin, John; Schoenherr, Sven; Seebass, Martin; Lin, MingDe; Philpotts, Liane; Etesami, Maryam; Butler, Reni; Durand, Melissa; Heller, Samantha; Heacock, Laura; Moy, Linda; Tocino, Irena; Westerhoff, Malte
OBJECTIVE:To test the performance of a novel machine learning-based breast density tool. The tool uses a convolutional neural network to predict the Breast Imaging Reporting and Data System (BI-RADS) density assessment of a study. The clinical density assessments of 33,000 mammographic examinations (164,000 images) from one academic medical center (Site A) were used for training. MATERIALS AND METHODS/METHODS:This was an IRB-approved, HIPAA-compliant study performed at two academic medical centers. The validation data set was composed of 500 studies from one site (Site A) and 700 from another (Site B). At Site A, each study was assessed by three breast radiologists and the majority (consensus) assessment was used as truth. At Site B, if the tool agreed with the clinical reading, it was considered to have correctly predicted the clinical reading. In cases where the tool and the clinical reading disagreed, the study was evaluated by three radiologists and the consensus reading was used as the clinical reading. RESULTS:For classification into the four BI-RADS® categories, the AI classifier had an accuracy of 84.6% at Site A and 89.7% at Site B. For binary classification (dense vs. non-dense), the AI classifier had an accuracy of 94.4% at Site A and 97.4% at Site B. In no case did the classifier disagree with the consensus reading by more than one category. CONCLUSIONS:The automated breast density tool showed high agreement with radiologists' assessments of breast density.
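A minimal sketch of the evaluation logic described above: collapsing four-category BI-RADS density labels into a binary dense/non-dense call, then checking accuracy and the largest category-level disagreement. The labels below are hypothetical, not study data.

import numpy as np

CATEGORIES = np.array(["A", "B", "C", "D"])          # BI-RADS density categories a-d

truth = np.array(["B", "C", "D", "A", "C", "B"])     # consensus radiologist reading (hypothetical)
pred  = np.array(["B", "C", "C", "A", "C", "B"])     # classifier output (hypothetical)

four_way_acc = np.mean(truth == pred)

dense = lambda x: np.isin(x, ["C", "D"])             # C/D = dense, A/B = non-dense
binary_acc = np.mean(dense(truth) == dense(pred))

# Largest disagreement in category steps (the study reports never more than one)
idx = {c: i for i, c in enumerate(CATEGORIES)}
max_step = max(abs(idx[t] - idx[p]) for t, p in zip(truth, pred))

print(four_way_acc, binary_acc, max_step)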
PMID: 37421715
ISSN: 1873-4499
CID: 5539562
Improving Information Extraction from Pathology Reports using Named Entity Recognition
Zeng, Ken G; Dutt, Tarun; Witowski, Jan; Kranthi Kiran, G V; Yeung, Frank; Kim, Michelle; Kim, Jesi; Pleasure, Mitchell; Moczulski, Christopher; Lopez, L Julian Lechuga; Zhang, Hao; Harbi, Mariam Al; Shamout, Farah E; Major, Vincent J; Heacock, Laura; Moy, Linda; Schnabel, Freya; Pak, Linda M; Shen, Yiqiu; Geras, Krzysztof J
Pathology reports are considered the gold standard in medical research due to their comprehensive and accurate diagnostic information. Natural language processing (NLP) techniques have been developed to automate information extraction from pathology reports. However, existing studies suffer from two significant limitations. First, they typically frame their tasks as report classification, which restricts the granularity of extracted information. Second, they often fail to generalize to unseen reports due to variations in language, negation, and human error. To overcome these challenges, we propose a named entity recognition (NER) system based on BERT (bidirectional encoder representations from transformers) to extract key diagnostic elements from pathology reports. We also introduce four data augmentation methods to improve the robustness of our model. Trained and evaluated on 1438 annotated breast pathology reports acquired from a large medical center in the United States, our data-augmented BERT model achieves an entity-level F1-score of 0.916 on an internal test set, surpassing the BERT baseline (0.843). We further assessed the model's generalizability using an external validation dataset from the United Arab Emirates, where our model maintained satisfactory performance (F1-score 0.860). Our findings demonstrate that our NER system can effectively extract fine-grained information from highly diverse medical reports, offering the potential for large-scale information extraction in a wide range of medical and AI research. We publish our code at https://github.com/nyukat/pathology_extraction.
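For readers unfamiliar with the mechanics, the sketch below shows a generic BERT token-classification (NER) setup with the Hugging Face transformers library. The label set, checkpoint, and example sentence are assumptions made for illustration; the authors' actual labels, training code, and augmentation methods are in the repository linked above, and the model here is untrained, so its predictions are meaningless until fine-tuned.

import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-HISTOLOGY", "I-HISTOLOGY", "B-GRADE", "I-GRADE"]   # hypothetical label set
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(labels))                     # randomly initialized head

report = "Invasive ductal carcinoma, Nottingham grade 2, is identified."  # made-up sentence
inputs = tokenizer(report, return_tensors="pt", truncation=True)

with torch.no_grad():                       # fine-tuning on annotated reports would precede this
    logits = model(**inputs).logits         # shape: (1, seq_len, num_labels)

pred_ids = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for tok, pid in zip(tokens, pred_ids.tolist()):
    print(f"{tok:15s} {labels[pid]}")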
PMCID: 10350195
PMID: 37461545
CID: 5588752