Overcoming MRI accessibility barriers in cancer imaging with cutting-edge solutions [Editorial]
Chandarana, Hersh; Sodickson, Daniel K
PMCID: 12599087
PMID: 41214743
ISSN: 1470-7330
CID: 5966572
First-Order Spatial Encoding Simulations for Improved Accuracy in the Presence of Strong B0 and Gradient Field Variations
Tibrewala, Radhika; Collins, Christopher M; Mallett, Michael; Vom Endt, Axel; Sodickson, Daniel K; Assländer, Jakob
PURPOSE/OBJECTIVE: To improve the accuracy of spatial encoding simulations in the presence of strong B0 and gradient field variations. METHODS: Like many other MRI simulators, ours discretizes magnetic fields in space. However, we extend the MR signal simulation at each grid point from the 0th-order approximation, which assumes piecewise constant fields, to a 1st-order approximation, which assumes piecewise linear fields. We solve the signal equation by analytically integrating over each grid cube, assuming linear field variations, and then summing over all cubes. We provide analytical integrals for several pulse sequences. RESULTS: The 1st-order approximation captures strongly varying fields and the associated intravoxel dephasing more accurately, avoiding the severe "ringing" artifacts present in the usual 0th-order simulations. This enables simulations on a much coarser grid, making whole-volume simulations computationally feasible. CONCLUSION/CONCLUSIONS: The first-order simulator enables the evaluation of unconventional scanner designs with strongly varying magnetic fields.
PMID: 41145956
ISSN: 1522-2594
CID: 5961042
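To make the distinction between the two approximations concrete, here is a minimal sketch, not drawn from the authors' simulator, of a 1D free-induction-decay signal over a coarse grid of cells with a rapidly varying off-resonance field: the 0th-order term samples the field at each cell center, while the 1st-order term analytically integrates a linear field variation across the cell, which reduces to a sinc weighting. All field values, cell sizes, and timings are illustrative.

```python
import numpy as np

# Illustrative 1D example: FID signal from a row of coarse grid cells with a
# strongly varying off-resonance field, ignoring relaxation and excitation.
n_cells = 64
dx = 4e-3                                        # cell width (m), deliberately coarse
x = (np.arange(n_cells) + 0.5) * dx
omega = 2 * np.pi * 2000 * np.sin(40 * x)        # made-up, rapidly varying field (rad/s)
t = np.linspace(0, 20e-3, 400)                   # readout times (s)

# 0th-order: piecewise-constant field, one complex exponential per cell.
s0 = np.sum(np.exp(-1j * np.outer(t, omega)), axis=1)

# 1st-order: piecewise-linear field; integrating exp(-1j*(omega + g*u)*t) over
# u in [-dx/2, dx/2] analytically yields an extra sinc dephasing factor.
g = np.gradient(omega, dx)                       # local field slope per cell (rad/s/m)
sinc_term = np.sinc(np.outer(t, g) * dx / (2 * np.pi))   # np.sinc(x) = sin(pi x)/(pi x)
s1 = np.sum(np.exp(-1j * np.outer(t, omega)) * sinc_term, axis=1)

# s1 decays smoothly through intravoxel dephasing, whereas s0 retains spurious
# coherence ("ringing") because each coarse cell never dephases internally.
print(abs(s0[-1]), abs(s1[-1]))
```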
T1 Relaxation-Enhanced Steady-State Acquisition With Radial k-Space Sampling: A Novel Family of Pulse Sequences for Motion-Robust Volumetric T1-Weighted MRI With Improved Lesion Conspicuity
Zi, Ruoxun; Edelman, Robert R; Maier, Christoph; Keerthivasan, Mahesh; Lattanzi, Riccardo; Sodickson, Daniel K; Block, Kai Tobias
OBJECTIVES/OBJECTIVE: Magnetization-prepared rapid gradient-echo (MP-RAGE) sequences are routinely acquired for brain exams, providing high conspicuity for enhancing lesions. Vessels, however, also appear bright, which can complicate the detection of small lesions. T1RESS (T1 relaxation-enhanced steady-state) sequences have been proposed as an alternative to MP-RAGE, offering improved lesion conspicuity and suppression of blood vessels. This work aims to evaluate the performance of radial T1RESS variants for motion-robust contrast-enhanced brain MRI. MATERIALS AND METHODS/METHODS: Radial stack-of-stars sampling was implemented for steady-state free-precession-based rapid T1RESS acquisition with saturation recovery preparation. Three variants were developed using a balanced steady-state free-precession readout (bT1RESS), an unbalanced fast imaging with steady-state precession (FISP) readout (uT1RESS-FISP), and an unbalanced reversed FISP readout (uT1RESS-PSIF). Image contrast was evaluated in numerical simulations and phantom experiments. The motion robustness of radial T1RESS was demonstrated with a motion phantom. Four patients and six healthy volunteers were scanned at 3 T and 0.55 T. Extensions were developed combining T1RESS with GRASP for dynamic imaging, with GRAPPA for accelerated scans, and with Dixon for fat/water separation. RESULTS: In simulations and phantom scans, uT1RESS-FISP provided higher signal intensity for regions with lower T1 values (<500 ms) compared with MP-RAGE. In motion experiments, radial uT1RESS-FISP showed fewer artifacts than MP-RAGE and Cartesian uT1RESS-FISP. In patients, both unbalanced uT1RESS variants provided higher lesion conspicuity than MP-RAGE. Blood vessels appeared bright with MP-RAGE, gray with uT1RESS-FISP, and dark with uT1RESS-PSIF. At 0.55 T, bT1RESS provided high signal-to-noise ratio T1-weighted images without banding artifacts. Lastly, dynamic T1RESS images with a temporal resolution of 10.14 seconds/frame were generated using the GRASP algorithm. CONCLUSIONS: Radial T1RESS sequences offer improved lesion conspicuity and motion robustness and enable dynamic imaging for contrast-enhanced brain MRI. Both uT1RESS variants showed higher tumor-to-brain contrast than MP-RAGE and may find application as alternative techniques for imaging uncooperative patients with small brain lesions.
PMID: 40184541
ISSN: 1536-0210
CID: 5819432
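As a point of reference for the sampling scheme named in the entry above, the sketch below generates a golden-angle radial stack-of-stars trajectory, one common way to order spokes for motion-robust volumetric imaging. The published sequences may use a different view ordering, and all dimensions here are arbitrary.

```python
import numpy as np

# Golden-angle ordering for a radial stack-of-stars acquisition: each readout is
# a spoke rotated by ~111.25 degrees relative to the previous one within a kz
# partition. This is one common choice, not necessarily the ordering used in
# the published sequences.
GOLDEN_ANGLE = np.pi * (np.sqrt(5) - 1) / 2       # ~111.25 degrees in radians

def stack_of_stars_spokes(n_spokes, n_partitions, n_readout=256):
    """Return normalized k-space sample locations (kx, ky, kz) for all spokes."""
    kr = np.linspace(-0.5, 0.5, n_readout)        # radial coordinate along a spoke
    traj = []
    for kz_idx in range(n_partitions):
        kz = kz_idx / n_partitions - 0.5          # Cartesian partition direction
        for n in range(n_spokes):
            phi = (n * GOLDEN_ANGLE) % (2 * np.pi)
            kx, ky = kr * np.cos(phi), kr * np.sin(phi)
            traj.append(np.stack([kx, ky, np.full_like(kr, kz)], axis=-1))
    return np.stack(traj)                          # (spokes * partitions, n_readout, 3)

print(stack_of_stars_spokes(n_spokes=8, n_partitions=4).shape)
```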
Prostate Cancer Risk Stratification and Scan Tailoring Using Deep Learning on Abbreviated Prostate MRI
Johnson, Patricia M; Dutt, Tarun; Ginocchio, Luke A; Saimbhi, Amanpreet Singh; Umapathy, Lavanya; Block, Kai Tobias; Sodickson, Daniel K; Chopra, Sumit; Tong, Angela; Chandarana, Hersh
BACKGROUND: MRI plays a critical role in prostate cancer (PCa) detection and management. Bi-parametric MRI (bpMRI) offers a faster, contrast-free alternative to multi-parametric MRI (mpMRI). Routine use of mpMRI for all patients may not be necessary, and a tailored imaging approach (bpMRI or mpMRI) based on individual risk might optimize resource utilization. PURPOSE/OBJECTIVE: To develop and evaluate a deep learning (DL) model for classifying clinically significant PCa (csPCa) using bpMRI and to assess its potential for optimizing MRI protocol selection by recommending the additional sequences of mpMRI only when beneficial. STUDY TYPE/METHODS: Retrospective and prospective. POPULATION/METHODS: The DL model was trained and validated on 26,129 prostate MRI studies. A retrospective cohort of 151 patients (mean age 65 ± 8) with ground-truth verification from biopsy, prostatectomy, or long-term follow-up and a prospective cohort of 142 treatment-naïve patients (mean age 65 ± 9) undergoing bpMRI were evaluated. FIELD STRENGTH/SEQUENCE/UNASSIGNED: 3 T, turbo-spin-echo T2-weighted imaging (T2WI) and single-shot EPI diffusion-weighted imaging (DWI). ASSESSMENT/RESULTS: The DL model, based on a 3D ResNet-50 architecture, classified csPCa using PI-RADS ≥ 3 and Gleason ≥ 7 as outcome measures. The model was evaluated on a prospective cohort labeled by consensus of three radiologists and a retrospective cohort with ground-truth verification based on biopsy or long-term follow-up. Real-time inference was tested in an automated MRI workflow, providing classification results directly at the scanner. STATISTICAL TESTS/METHODS: AUROC with 95% confidence intervals (CI) was used to evaluate model performance. RESULTS: In the prospective cohort, the model achieved an AUC of 0.83 (95% CI: 0.77-0.89) for PI-RADS ≥ 3 classification, with 93% sensitivity and 54% specificity. In the retrospective cohort, the model achieved an AUC of 0.86 (95% CI: 0.80-0.91) for Gleason ≥ 7 classification, with 93% sensitivity and 62% specificity. Real-time implementation demonstrated a processing latency of 14-16 s for protocol recommendations. DATA CONCLUSION/CONCLUSIONS: The proposed DL model identifies csPCa using bpMRI and can be integrated into clinical workflows for scan tailoring. EVIDENCE LEVEL/METHODS: 1. TECHNICAL EFFICACY/UNASSIGNED: Stage 2.
PMID: 40259798
ISSN: 1522-2586
CID: 5830062
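The statistical analysis in the entry above reports AUROC with 95% confidence intervals; a percentile bootstrap is one standard way to obtain such an interval. The sketch below, using synthetic labels and scores, is illustrative only, and the exact CI procedure used in the study is not specified here.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_with_bootstrap_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    """AUROC with a percentile-bootstrap confidence interval (illustrative)."""
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if len(np.unique(y_true[idx])) < 2:       # resample must contain both classes
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return roc_auc_score(y_true, y_score), (lo, hi)

# Example with synthetic csPCa labels and hypothetical model scores
rng = np.random.default_rng(1)
labels = rng.integers(0, 2, 142)
scores = labels * 0.6 + rng.random(142) * 0.8
print(auc_with_bootstrap_ci(labels, scores))
```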
Morphological Brain Analysis Using Ultra Low-Field MRI
Hsu, Peter; Marchetto, Elisa; Sodickson, Daniel K; Johnson, Patricia M; Veraart, Jelle
Ultra low-field (ULF) MRI is an accessible neuroimaging modality that can bridge healthcare disparities and advance population-level brain health research. However, the inherently low signal-to-noise ratio of ULF-MRI often necessitates reductions in spatial resolution and, combined with the field dependence of MRI contrast, challenges the accurate extraction of clinically relevant brain morphology. We evaluate the current state of ULF-MRI brain volumetry using techniques for enhancing spatial resolution and recent advances in brain segmentation. The evaluation is based on the agreement between ULF and corresponding high-field (HF) MRI brain volumes and on test-retest repeatability across multiple ULF scans. We find that accurate brain volumes can be measured from ULF-MRI when orthogonal imaging directions of T2-weighted images are combined to form a higher-resolution image volume. We also demonstrate that not all orthogonal imaging directions contribute equally to volumetric accuracy and provide a recommended scan protocol given the constraints of the current technology.
PMCID: 12207323
PMID: 40586128
ISSN: 1097-0193
CID: 5887542
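The key step reported above is combining orthogonal T2-weighted acquisitions into a higher-resolution volume. The sketch below shows the simplest possible version of that idea: resampling three co-registered, anisotropic volumes onto a common isotropic grid and averaging them. It is a simplistic stand-in for the study's resolution-enhancement pipeline, and the resolutions and array shapes are invented for illustration.

```python
import numpy as np
from scipy.ndimage import zoom

def combine_orthogonal(axial, coronal, sagittal, native_res, iso_res=1.5):
    """Naively fuse three orthogonal, anisotropic, co-registered acquisitions by
    resampling each onto a common isotropic grid (voxel size iso_res, in mm)
    and averaging. Illustrative only; not the study's actual pipeline."""
    vols = []
    for vol, res in zip((axial, coronal, sagittal), native_res):
        factors = [r / iso_res for r in res]          # voxel-size ratio per axis
        vols.append(zoom(vol.astype(np.float32), factors, order=1))
    shape = np.min([v.shape for v in vols], axis=0)   # crop to the common extent
    vols = [v[:shape[0], :shape[1], :shape[2]] for v in vols]
    return np.mean(vols, axis=0)

# e.g. 1.5 x 1.5 x 5 mm axial, coronal, and sagittal T2-weighted stacks
ax = np.random.rand(120, 120, 36)
co = np.random.rand(120, 36, 120)
sa = np.random.rand(36, 120, 120)
fused = combine_orthogonal(ax, co, sa,
                           native_res=[(1.5, 1.5, 5), (1.5, 5, 1.5), (5, 1.5, 1.5)])
print(fused.shape)                                    # isotropic (120, 120, 120) volume
```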
Leveraging Representation Learning for Bi-parametric Prostate MRI to Disambiguate PI-RADS 3 and Improve Biopsy Decision Strategies
Umapathy, Lavanya; Johnson, Patricia M; Dutt, Tarun; Tong, Angela; Chopra, Sumit; Sodickson, Daniel K; Chandarana, Hersh
OBJECTIVES/OBJECTIVE: Despite its high negative predictive value (NPV) for clinically significant prostate cancer (csPCa), MRI suffers from a substantial number of false positives, especially for intermediate-risk cases. In this work, we determine whether a deep learning model trained with PI-RADS-guided representation learning can disambiguate the PI-RADS 3 classification, detect csPCa from bi-parametric prostate MR images, and avoid unnecessary benign biopsies. MATERIALS AND METHODS/METHODS: This study included 28,263 MR examinations and radiology reports from 21,938 men imaged for known or suspected prostate cancer between 2015 and 2023 at our institution (21 imaging locations with 34 readers), with 6352 subsequent biopsies. We trained a deep learning model, a representation learner (RL), to learn how radiologists interpret conventionally acquired T2-weighted and diffusion-weighted MR images, using exams in which the radiologists were confident in their risk assessments (PI-RADS 1 and 2 for the absence of csPCa vs. PI-RADS 4 and 5 for the presence of csPCa, n=21,465). We then trained biopsy-decision models to detect csPCa (Gleason score ≥7) using these learned image representations and compared them with the performance of radiologists and of models trained on clinical variables (age, prostate volume, PSA, and PSA density) for treatment-naïve test cohorts consisting of only PI-RADS 3 (n=253, csPCa=103) and all PI-RADS (n=531, csPCa=300) cases. RESULTS: On the 2 test cohorts (PI-RADS-3-only, all-PI-RADS), RL-based biopsy-decision models consistently yielded higher AUCs in detecting csPCa (AUC=0.73 [0.66, 0.79], 0.88 [0.85, 0.91]) compared with radiologists (equivocal by definition in the PI-RADS-3-only cohort; AUC=0.79 [0.75, 0.83] in the all-PI-RADS cohort) and the clinical model (AUCs=0.69 [0.62, 0.75], 0.78 [0.74, 0.82]). In the PI-RADS-3-only cohort, all of whom would be biopsied under our institution's standard of care, the RL decision model avoided 41% (62/150) of benign biopsies compared with the clinical model (26%, P<0.001), and improved biopsy yield by 10% compared with the PI-RADS ≥3 decision strategy (0.50 vs. 0.40). Furthermore, in the all-PI-RADS cohort, the RL decision model avoided an additional 27% of benign biopsies compared with radiologists (138/231 [60%] vs. 33%, P<0.001) with comparable sensitivity (93% vs. 92%), higher NPV (0.87 vs. 0.77), and higher biopsy yield (0.75 vs. 0.64). The combination of clinical and RL decision models avoided still more benign biopsies (46% in PI-RADS-3-only and 62% in all-PI-RADS) while improving NPV (0.82, 0.88) and biopsy yields (0.52, 0.76) across the 2 test cohorts. CONCLUSIONS: Our PI-RADS-guided deep learning RL model learns summary representations from bi-parametric prostate MR images that provide additional information to disambiguate intermediate-risk PI-RADS 3 assessments. The resulting RL-based biopsy-decision models also outperformed radiologists in avoiding benign biopsies while maintaining comparable sensitivity to csPCa in the all-PI-RADS cohort. Such AI models can be integrated into clinical practice to supplement radiologists' reads and improve biopsy yield for equivocal decisions.
PMID: 40586610
ISSN: 1536-0210
CID: 5887552
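For readers unfamiliar with the decision metrics quoted above (benign biopsies avoided, NPV, biopsy yield), the sketch below computes them from a binary biopsy recommendation and ground-truth csPCa labels. The risk scores, threshold, and cohort are synthetic; only the metric definitions are meant to be informative.

```python
import numpy as np

def biopsy_decision_metrics(biopsy_recommended, cspca_present):
    """Summary metrics for a biopsy-decision rule: sensitivity to csPCa,
    fraction of benign cases spared a biopsy, NPV, and biopsy yield."""
    rec = np.asarray(biopsy_recommended, bool)
    pos = np.asarray(cspca_present, bool)
    sensitivity = (rec & pos).sum() / pos.sum()
    benign_avoided = (~rec & ~pos).sum() / (~pos).sum()    # benign cases not biopsied
    npv = (~rec & ~pos).sum() / max((~rec).sum(), 1)
    biopsy_yield = (rec & pos).sum() / max(rec.sum(), 1)   # csPCa found per biopsy
    return dict(sensitivity=sensitivity, benign_avoided=benign_avoided,
                npv=npv, biopsy_yield=biopsy_yield)

# Illustrative: threshold a hypothetical model risk score at 0.3
rng = np.random.default_rng(0)
cspca = rng.integers(0, 2, 253).astype(bool)
risk = 0.5 * cspca + 0.5 * rng.random(253)
print(biopsy_decision_metrics(risk >= 0.3, cspca))
```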
Multimodal generative AI for interpreting 3D medical images and videos
Lee, Jung-Oh; Zhou, Hong-Yu; Berzin, Tyler M; Sodickson, Daniel K; Rajpurkar, Pranav
This perspective proposes adapting video-text generative AI to 3D medical imaging (CT/MRI) and medical videos (endoscopy/laparoscopy) by treating 3D images as videos. The approach leverages modern video models to analyze multiple sequences simultaneously and provide real-time AI assistance during procedures. The paper examines medical imaging's unique characteristics (synergistic information, metadata, and world model), outlines applications in automated reporting, case retrieval, and education, and addresses challenges of limited datasets, benchmarks, and specialized training.
PMCID: 12075794
PMID: 40360694
ISSN: 2398-6352
CID: 5844212
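The central proposal in the entry above is to let a video-text model consume a 3D scan by treating the slice axis as time. A minimal sketch of that preprocessing step is shown below; the windowing values and array shapes are illustrative, and the downstream video-language model is left abstract because the perspective does not commit to a specific system.

```python
import numpy as np

def volume_to_video_frames(volume_hu, window=(-1000, 400)):
    """Turn a CT volume (slices, H, W) into a video-style frame stack
    (T, H, W, 3) by intensity windowing and channel replication, so that the
    slice axis plays the role of time for a video-text model. Purely
    illustrative preprocessing; real systems would add resizing, tokenization,
    and model-specific normalization."""
    lo, hi = window
    frames = np.clip((volume_hu - lo) / (hi - lo), 0, 1)    # window to [0, 1]
    frames = (frames * 255).astype(np.uint8)
    return np.repeat(frames[..., None], 3, axis=-1)         # grayscale -> RGB frames

ct = np.random.randint(-1000, 1000, size=(96, 256, 256)).astype(np.float32)
video = volume_to_video_frames(ct)
print(video.shape, video.dtype)                             # (96, 256, 256, 3) uint8
```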
Accelerating multi-coil MR image reconstruction using weak supervision
Atalık, Arda; Chopra, Sumit; Sodickson, Daniel K
Deep-learning-based MR image reconstruction in settings where collecting large, fully sampled datasets is infeasible requires methods that make effective use of both under-sampled and fully sampled data. This paper evaluates a weakly supervised, multi-coil, physics-guided approach to MR image reconstruction that leverages both dataset types to improve the quality and robustness of reconstruction. A physics-guided end-to-end variational network (VarNet) is pretrained in a self-supervised manner using a 4
PMID: 39382814
ISSN: 1352-8661
CID: 5730182
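As context for the physics-guided variational network mentioned above, the sketch below implements one generic unrolled cascade for multi-coil Cartesian MRI: a data-consistency gradient step built from coil sensitivities, the FFT, and the sampling mask, followed by a placeholder learned regularizer. It is not the authors' VarNet or their weak-supervision training scheme; the shapes, undersampling pattern, and identity "denoiser" are all illustrative.

```python
import torch

def data_consistency_grad(x, y, smaps, mask):
    """Gradient of 0.5 * ||M F S x - y||^2 for multi-coil Cartesian MRI.
    x: (H, W) complex image; y: (C, H, W) measured k-space; smaps: (C, H, W)
    coil sensitivities; mask: (H, W) sampling mask."""
    coil_imgs = smaps * x                                     # S x
    kspace = torch.fft.fft2(coil_imgs, norm="ortho")          # F S x
    resid = mask * kspace - y                                 # M F S x - y
    back = torch.fft.ifft2(mask * resid, norm="ortho")        # F^H M^H (...)
    return (smaps.conj() * back).sum(dim=0)                   # S^H F^H M^H (...)

def unrolled_step(x, y, smaps, mask, denoiser, step=1.0):
    """One VarNet-style cascade: a data-consistency gradient step followed by a
    learned regularizer (here an arbitrary placeholder)."""
    x = x - step * data_consistency_grad(x, y, smaps, mask)
    return x - denoiser(x)

H, W, C = 64, 64, 8
x = torch.zeros(H, W, dtype=torch.complex64)
smaps = torch.randn(C, H, W, dtype=torch.complex64)
mask = (torch.rand(H, W) > 0.75).float()                       # ~4x undersampling
y = mask * torch.fft.fft2(smaps * torch.randn(H, W, dtype=torch.complex64), norm="ortho")
identity_denoiser = lambda im: torch.zeros_like(im)            # stand-in for a CNN
print(unrolled_step(x, y, smaps, mask, identity_denoiser).shape)
```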
DeepEMC-T2 mapping: Deep learning-enabled T2 mapping based on echo modulation curve modeling
Pei, Haoyang; Shepherd, Timothy M; Wang, Yao; Liu, Fang; Sodickson, Daniel K; Ben-Eliezer, Noam; Feng, Li
PURPOSE/OBJECTIVE: To develop DeepEMC-T2 mapping, a deep learning approach based on echo modulation curve (EMC) modeling, for estimating T2 maps from fewer echoes. METHODS: DeepEMC-T2 mapping was evaluated in seven experiments. RESULTS: DeepEMC-T2 mapping enabled accurate T2 estimation from fewer echoes. CONCLUSIONS: T2 estimation from fewer echoes allows for increased volumetric coverage and/or higher slice resolution without prolonging total scan times.
PMCID: 11436299
PMID: 39129209
ISSN: 1522-2594
CID: 5706952
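To illustrate the underlying estimation problem, the sketch below fits T2 from a simulated multi-echo decay using a log-linear mono-exponential fit, first with the full echo train and then with only the first few echoes. This is a deliberately simplistic stand-in: the EMC approach referenced in the title models the full echo modulation curve, including stimulated-echo and B1 effects, and DeepEMC-T2 replaces the fitting step with a neural network, none of which is reproduced here.

```python
import numpy as np

def fit_t2_monoexp(echo_times_ms, signal):
    """Least-squares mono-exponential T2 fit via log-linearization:
    log S = log S0 - TE / T2. A simplistic stand-in for EMC-based fitting."""
    te = np.asarray(echo_times_ms, float)
    logs = np.log(np.clip(signal, 1e-9, None))
    slope, intercept = np.polyfit(te, logs, 1)
    return -1.0 / slope, np.exp(intercept)          # (T2 in ms, S0)

# Simulate a 10-echo train (10 ms echo spacing) for a tissue with T2 = 80 ms
te_full = np.arange(1, 11) * 10.0
signal = np.exp(-te_full / 80.0) + np.random.default_rng(0).normal(0, 0.005, 10)

t2_full, _ = fit_t2_monoexp(te_full, signal)         # all 10 echoes
t2_few, _ = fit_t2_monoexp(te_full[:4], signal[:4])  # first 4 echoes only
print(round(t2_full, 1), round(t2_few, 1))
```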
The perils and the promise of whole-body MRI: why we may be debating the wrong things
Sodickson, Daniel K
PMID: 39251175
ISSN: 1558-349X
CID: 5690082