The Iodine Opportunity for Sustainable Radiology: Quantifying Supply Chain Strategies to Cut Contrast's Carbon and Costs
Nghiem, Derrik X; Yahyavi-Firouz-Abadi, Noushin; Hwang, Gloria L; Zafari, Zafar; Moy, Linda; Carlos, Ruth C; Doo, Florence X
PURPOSE: To estimate the economic and environmental reduction potential of iodinated contrast media (ICM) saving strategies by examining supply chain data (from iodine extraction through administration) to inform a decision-making framework that can be tailored to local institutional priorities. METHODS: A 100 mL polymer vial of ICM was set as the standard reference case (SRC) for baseline comparison. To evaluate cost and emissions impacts, four ICM reduction strategies were modeled relative to this SRC baseline: vial optimization, hardware or software (AI-enabled) dose reduction, and multi-dose vial/injector systems. This analysis was then translated into a decision-making framework for radiologists to compare ICM strategies by cost, emissions, and operational feasibility. RESULTS: The supply chain life cycle of a 100 mL iodinated contrast vial produces 1,029 g CO2e, primarily from iodine extraction and clinical use. ICM-saving strategies varied widely in emissions reduction, ranging from 12% to 50% nationally. Economically, a 125% tariff could inflate national ICM-related costs to $11.9B; the AI-enhanced ICM reduction strategy could lower this expenditure to $2.7B. Institutional analysis reveals that the ICM savings from high-capital upfront investment strategies can offset their initial investment, highlighting important trade-offs for implementation decision-making. CONCLUSIONS: ICM is a major and modifiable contributor to healthcare carbon emissions. Depending on the utilized ICM-reduction strategy, emissions can be reduced by up to 53% and ICM-related costs by up to 50%. To guide implementation, we developed a decision-making framework that categorizes strategies based on environmental benefit, cost, and operational feasibility, enabling radiology leaders to align sustainability goals with institutional priorities.
PMID: 41046992
ISSN: 1558-349X
CID: 5951392
Evaluating Breast Cancer Intravoxel Incoherent Motion MRI Biomarkers across Software Platforms
Sigmund, Eric E; Cho, Gene Y; Basukala, Dibash; Sutton, Olivia M; Horvat, Joao V; Mikheev, Artem; Rusinek, Henry; Gilani, Nima; Li, Xiaochun; Babb, James S; Goldberg, Judith D; Pinker, Katja; Moy, Linda; Thakur, Sunitha B
Purpose To evaluate intravoxel incoherent motion (IVIM) biomarkers across different MRI vendors and software programs for breast cancer characterization in a two-site study. Materials and Methods This institutional review board-approved, Health Insurance Portability and Accountability Act-compliant retrospective study included 106 patients (with 18 benign and 88 malignant lesions) who underwent bilateral diffusion-weighted imaging (DWI) between February 2009 and March 2013. DWI was performed using 1.5-T (n = 6) or 3-T MRI scanners from two vendors using single-shot spin-echo echo-planar imaging or twice-refocused, bipolar gradient single-shot turbo spin-echo readout with multiple b values between 0 and 1000 sec/mm². IVIM parameters tissue diffusivity (Dt
PMID: 40910883
ISSN: 2638-616X
CID: 5936402
Best Practices for the Safe Use of Large Language Models and Other Generative AI in Radiology
Yi, Paul H; Haver, Hana L; Jeudy, Jean J; Kim, Woojin; Kitamura, Felipe C; Oluyemi, Eniola T; Smith, Andrew D; Moy, Linda; Parekh, Vishwa S
As large language models (LLMs) and other generative artificial intelligence (AI) models are rapidly integrated into radiology workflows, unique pitfalls threatening their safe use have emerged. Problems with AI are often identified only after public release, highlighting the need for preventive measures to mitigate negative impacts and ensure safe, effective deployment into clinical settings. This article summarizes best practices for the safe use of LLMs and other generative AI models in radiology, focusing on three key areas that can lead to pitfalls if overlooked: regulatory issues, data privacy, and bias. To address these areas and minimize risk to patients, radiologists must examine all potential failure modes and ensure vendor transparency. These best practices are based on the best available evidence and the experiences of leaders in the field. Ultimately, this article provides actionable guidelines for radiologists, radiology departments, and vendors using and integrating generative AI into radiology workflows, offering a framework to prevent these problems.
PMID: 40985835
ISSN: 1527-1315
CID: 5937652
Digital Twin Technology in Radiology
Aghamiri, Sara Sadat; Amin, Rada; Isavand, Pouria; Vahdati, Sanaz; Zeinoddini, Atefeh; Kitamura, Felipe C; Moy, Linda; Kline, Timothy
A digital twin is a computational model that provides a virtual representation of a specific physical object, system, or process and predicts its behavior at future time points. These simulation models form computational profiles for new diagnosis and prevention models. The digital twin is a concept borrowed from engineering. However, the rapid evolution of this technology has extended its application across various industries. In recent years, digital twins in healthcare have gained significant traction due to their potential to revolutionize medicine and drug development. In the context of radiology, digital twin technology can be applied in various areas, including optimizing medical device design, improving system performance, facilitating personalized medicine, conducting virtual clinical trials, and educating radiology trainees. Also, radiologic image data is a critical source of patient-specific measures that play a role in generating advanced intelligent digital twins. Generating a practical digital twin faces several challenges, including data availability, computational techniques, validation frameworks, and uncertainty quantification, all of which require collaboration among engineers, healthcare providers, and stakeholders. This review focuses on recent trends in digital twin technology and its intersection with radiology by reviewing applications, technological advancements, and challenges that need to be addressed for successful implementation in the field.
PMID: 40760263
ISSN: 2948-2933
CID: 5904882
Editorial Opportunities for Radiology Trainees: RSNA's Radiology: In Training Program [Editorial]
Guarnera, Alessia; Yilmaz, Enis C; Marrocchio, Cristina; Prodigios, Joice; Moy, Linda; Chernyak, Victoria
PMID: 40828046
ISSN: 1527-1315
CID: 5908902
Performance of Algorithms Submitted in the 2023 RSNA Screening Mammography Breast Cancer Detection AI Challenge
Chen, Yan; Partridge, George J W; Vazirabad, Maryam; Ball, Robyn L; Trivedi, Hari M; Kitamura, Felipe Campos; Frazer, Helen M L; Retson, Tara A; Yao, Luyan; Darker, Iain T; Kelil, Tatiana; Mongan, John; Mann, Ritse M; Moy, Linda
Background The 2023 RSNA Screening Mammography Breast Cancer Detection AI Challenge invited participants to develop artificial intelligence (AI) models capable of independently interpreting mammograms. Purpose To assess the performance of the submitted algorithms, explore the potential for improving performance by combining the best-performing AI algorithms, and investigate how performance was influenced by the demographic and clinical characteristics of the evaluation cohort. Materials and Methods A total of 1687 AI algorithms were submitted from November 2022 to February 2023. Of these, 1537 algorithms were assessed using an evaluation dataset from two sites (one in the United States and one in Australia). Cancer cases were identified at screening and confirmed with pathologic examination; noncancer cases were followed up for at least 1 year. Results for ensemble models of top algorithms were computed by recalling a case when any of the included algorithms indicated recall. Odds ratios (ORs) were used to investigate differences in AI performance when the dataset was stratified by clinical or demographic characteristics. Results The evaluation dataset consisted of 5415 women (median age, 59 years [IQR, 52-66 years]). Among the 1537 AI algorithms, the median recall rate, sensitivity, specificity, and positive predictive value (PPV) were 1.7%, 27.6%, 98.7%, and 36.9%, respectively. For the top-ranked algorithm, the recall rate, sensitivity, specificity, and PPV were 1.5%, 48.6%, 99.5%, and 64.6%, respectively. Ensemble models of the top 3 and top 10 algorithms had a sensitivity of 60.7% and 67.8%, respectively; the corresponding recall rates were 2.4% and 3.5%, and the corresponding specificities were 98.8% and 97.8%. Lower sensitivity was observed for the U.S. dataset than for the Australian dataset (top 3 ensemble model: 52.0% vs 68.1%; OR = 0.51; P = .02), and greater sensitivity was observed for invasive cancers than for noninvasive cancers (top 3 ensemble model: 68.0% vs 43.8%; OR = 2.73; P = .001). Conclusion The different AI algorithms identified different cancers during screening mammography, and ensemble models had increased sensitivity while maintaining low recall rates. © RSNA, 2025. Supplemental material is available for this article.
PMID: 40793948
ISSN: 1527-1315
CID: 5907052
Best Practices and Checklist for Reviewing Artificial Intelligence-Based Medical Imaging Papers: Classification
Kline, Timothy L; Kitamura, Felipe; Warren, Daniel; Pan, Ian; Korchi, Amine M; Tenenholtz, Neil; Moy, Linda; Gichoya, Judy Wawira; Santos, Igor; Moradi, Kamyar; Avval, Atlas Haddadi; Alkhulaifat, Dana; Blumer, Steven L; Hwang, Misha Ysabel; Git, Kim-Ann; Shroff, Abishek; Stember, Joseph; Walach, Elad; Shih, George; Langer, Steve G
Recent advances in artificial intelligence (AI) methodologies and their application to medical imaging have led to an explosion of related research programs utilizing AI to produce state-of-the-art classification performance. Ideally, research culminates in dissemination of the findings in peer-reviewed journals. To date, acceptance or rejection criteria are often subjective; however, reproducible science requires reproducible review. The Machine Learning Education Sub-Committee of the Society for Imaging Informatics in Medicine (SIIM) has identified a knowledge gap and a need to establish guidelines for reviewing these studies. The present work, written from the machine learning practitioner standpoint, follows an approach similar to our previous paper on segmentation. In this series, the committee will address best practices to follow in AI-based studies and present the required sections with examples and discussion of requirements to make the studies cohesive, reproducible, accurate, and self-contained. This entry in the series focuses on image classification. Elements such as dataset curation, data pre-processing steps, reference standard identification, data partitioning, model architecture, and training are discussed. Sections are presented as in a typical manuscript. The content describes the information necessary to ensure the study is of sufficient quality for publication consideration and, compared with other checklists, provides a focused approach with application to image classification tasks. The goal of this series is to provide resources that not only help improve the review process for AI-based medical imaging papers but also facilitate a standard for the information that should be presented within all components of the research study.
PMID: 40465054
ISSN: 2948-2933
CID: 5862392
Correction: Checklist for Reproducibility of Deep Learning in Medical Imaging
Moassefi, Mana; Singh, Yashbir; Conte, Gian Marco; Khosravi, Bardia; Rouzrokh, Pouria; Vahdati, Sanaz; Safdar, Nabile; Moy, Linda; Kitamura, Felipe; Gentili, Amilcare; Lakhani, Paras; Kottler, Nina; Halabi, Safwan S; Yacoub, Joseph H; Hou, Yuankai; Younis, Khaled; Erickson, Bradley J; Krupinski, Elizabeth; Faghani, Shahriar
PMID: 39438367
ISSN: 2948-2933
CID: 5739842
Breast Arterial Calcifications on Mammography: A Review of the Literature
Rossi, Joanna; Cho, Leslie; Newell, Mary S; Venta, Luz A; Montgomery, Guy H; Destounis, Stamatia V; Moy, Linda; Brem, Rachel F; Parghi, Chirag; Margolies, Laurie R
Identifying systemic disease with medical imaging studies may improve population health outcomes. Although the pathogenesis of peripheral arterial calcification and coronary artery calcification differ, breast arterial calcification (BAC) on mammography is associated with cardiovascular disease (CVD), a leading cause of death in women. While professional society guidelines on the reporting or management of BAC have not yet been established, and assessment and quantification methods are not yet standardized, the value of reporting BAC is being considered internationally as a possible indicator of subclinical CVD. Furthermore, artificial intelligence (AI) models are being developed to identify and quantify BAC on mammography, as well as to predict the risk of CVD. This review outlines studies evaluating the association of BAC and CVD, introduces the role of preventative cardiology in clinical management, discusses reasons to consider reporting BAC, acknowledges current knowledge gaps and barriers to assessing and reporting calcifications, and provides examples of how AI can be utilized to measure BAC and contribute to cardiovascular risk assessment. Ultimately, reporting BAC on mammography might facilitate earlier mitigation of cardiovascular risk factors in asymptomatic women.
PMID: 40163666
ISSN: 2631-6129
CID: 5818782
Pitfalls and Best Practices in Evaluation of AI Algorithmic Biases in Radiology
Yi, Paul H; Bachina, Preetham; Bharti, Beepul; Garin, Sean P; Kanhere, Adway; Kulkarni, Pranav; Li, David; Parekh, Vishwa S; Santomartino, Samantha M; Moy, Linda; Sulam, Jeremias
Despite growing awareness of problems with fairness in artificial intelligence (AI) models in radiology, evaluation of algorithmic biases, or AI biases, remains challenging due to various complexities. These include incomplete reporting of demographic information in medical imaging datasets, variability in definitions of demographic categories, and inconsistent statistical definitions of bias. To guide the appropriate evaluation of AI biases in radiology, this article summarizes the pitfalls in the evaluation and measurement of algorithmic biases. These pitfalls span the spectrum from the technical (eg, how different statistical definitions of bias impact conclusions about whether an AI model is biased) to those associated with social context (eg, how different conventions of race and ethnicity impact identification or masking of biases). Actionable best practices and future directions to avoid these pitfalls are summarized across three key areas: (a) medical imaging datasets, (b) demographic definitions, and (c) statistical evaluations of bias. Although AI bias in radiology has been broadly reviewed in the recent literature, this article focuses specifically on underrecognized potential pitfalls related to the three key areas. By providing awareness of these pitfalls along with actionable practices to avoid them, exciting AI technologies can be used in radiology for the good of all people.
PMID: 40392092
ISSN: 1527-1315
CID: 5852522