Searched for: person:sodicd01 in-biosketch:yes
Total Results: 228


Morphological Brain Analysis Using Ultra Low-Field MRI

Hsu, Peter; Marchetto, Elisa; Sodickson, Daniel K; Johnson, Patricia M; Veraart, Jelle
Ultra low-field (ULF) MRI is an accessible neuroimaging modality that can bridge healthcare disparities and advance population-level brain health research. However, the inherently low signal-to-noise ratio of ULF-MRI often necessitates reductions in spatial resolution and, combined with the field dependency of MRI contrast, challenges the accurate extraction of clinically relevant brain morphology. We evaluate the current state of ULF-MRI brain volumetry, utilizing techniques for enhancing spatial resolution and leveraging recent advancements in brain segmentation. Our evaluation is based on the agreement between ULF and corresponding high-field (HF) MRI brain volumes and on test-retest repeatability across multiple ULF scans. In this study, we find that accurate brain volumes can be measured from ULF-MRI when orthogonal imaging directions of T2-weighted images are combined to form a higher-resolution image volume. We also demonstrate that not all orthogonal imaging directions contribute equally to volumetric accuracy and provide a recommended scan protocol given the constraints of the current technology.
PMCID: 12207323
PMID: 40586128
ISSN: 1097-0193
CID: 5887542
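
As a rough illustration of the orthogonal-view combination described in the entry above, the sketch below resamples three co-registered anisotropic T2-weighted acquisitions onto a common isotropic grid and averages them. The function names, voxel sizes, and the use of simple interpolation plus averaging are assumptions for illustration; the paper's actual resolution-enhancement and segmentation pipeline is not reproduced here.

```python
# Illustrative sketch (not the paper's pipeline): fuse three orthogonal,
# anisotropic T2-weighted ULF acquisitions into one near-isotropic volume
# by resampling each onto a common grid and averaging.
import numpy as np
from scipy.ndimage import zoom

def to_isotropic(vol: np.ndarray, voxel_size_mm: tuple, target_mm: float = 1.5) -> np.ndarray:
    """Resample a volume with anisotropic voxels onto an isotropic grid."""
    factors = [v / target_mm for v in voxel_size_mm]
    return zoom(vol, factors, order=3)  # cubic interpolation

def fuse_orthogonal(axial, coronal, sagittal, voxel_sizes_mm):
    """Average co-registered orthogonal acquisitions after resampling.
    Assumes the three volumes are already aligned in scanner coordinates;
    in practice a rigid registration step would precede the averaging."""
    iso = [to_isotropic(v, s) for v, s in zip((axial, coronal, sagittal), voxel_sizes_mm)]
    shape = np.min([v.shape for v in iso], axis=0)          # crop to common extent
    iso = [v[:shape[0], :shape[1], :shape[2]] for v in iso]
    return np.mean(iso, axis=0)
```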

Leveraging Representation Learning for Bi-parametric Prostate MRI to Disambiguate PI-RADS 3 and Improve Biopsy Decision Strategies

Umapathy, Lavanya; Johnson, Patricia M; Dutt, Tarun; Tong, Angela; Chopra, Sumit; Sodickson, Daniel K; Chandarana, Hersh
OBJECTIVES: Despite its high negative predictive value (NPV) for clinically significant prostate cancer (csPCa), MRI suffers from a substantial number of false positives, especially for intermediate-risk cases. In this work, we determine whether a deep learning model trained with PI-RADS-guided representation learning can disambiguate the PI-RADS 3 classification, detect csPCa from bi-parametric prostate MR images, and avoid unnecessary benign biopsies. MATERIALS AND METHODS: This study included 28,263 MR examinations and radiology reports from 21,938 men imaged for known or suspected prostate cancer between 2015 and 2023 at our institution (21 imaging locations with 34 readers), with 6352 subsequent biopsies. We trained a deep learning model, a representation learner (RL), to learn how radiologists interpret conventionally acquired T2-weighted and diffusion-weighted MR images, using exams in which the radiologists were confident in their risk assessments (PI-RADS 1 and 2 for the absence of csPCa vs. PI-RADS 4 and 5 for the presence of csPCa, n=21,465). We then trained biopsy-decision models to detect csPCa (Gleason score ≥7) using these learned image representations and compared their performance with that of radiologists and of models trained on other clinical variables (age, prostate volume, PSA, and PSA density) for treatment-naïve test cohorts consisting of only PI-RADS 3 (n=253, csPCa=103) and all PI-RADS (n=531, csPCa=300) cases. RESULTS: On the two test cohorts (PI-RADS-3-only, all-PI-RADS), RL-based biopsy-decision models consistently yielded higher AUCs in detecting csPCa (AUC=0.73 [0.66, 0.79], 0.88 [0.85, 0.91]) compared with radiologists (equivocal, AUC=0.79 [0.75, 0.83]) and the clinical model (AUCs=0.69 [0.62, 0.75], 0.78 [0.74, 0.82]). In the PI-RADS-3-only cohort, all of whom would be biopsied using our institution's standard of care, the RL decision model avoided 41% (62/150) of benign biopsies compared with the clinical model (26%, P<0.001), and improved biopsy yield by 10% compared with the PI-RADS ≥3 decision strategy (0.50 vs. 0.40). Furthermore, on the all-PI-RADS cohort, the RL decision model avoided an additional 27% of benign biopsies (138/231) compared with radiologists (33%, P<0.001), with comparable sensitivity (93% vs. 92%), higher NPV (0.87 vs. 0.77), and higher biopsy yield (0.75 vs. 0.64). The combination of clinical and RL decision models further avoided benign biopsies (46% in PI-RADS-3-only and 62% in all-PI-RADS) while improving NPV (0.82, 0.88) and biopsy yields (0.52, 0.76) across the two test cohorts. CONCLUSIONS: Our PI-RADS-guided deep learning RL model learns summary representations from bi-parametric prostate MR images that can provide additional information to disambiguate intermediate-risk PI-RADS 3 assessments. The resulting RL-based biopsy decision models also outperformed radiologists in avoiding benign biopsies while maintaining comparable sensitivity to csPCa for the all-PI-RADS cohort. Such AI models can easily be integrated into clinical practice to supplement radiologists' reads in general and improve biopsy yield for any equivocal decisions.
PMID: 40586610
ISSN: 1536-0210
CID: 5887552
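
The following sketch illustrates the general two-stage idea described in the entry above: pretrain an encoder to reproduce confident radiologist assessments, then fit a lightweight biopsy-decision model on the frozen representations. The toy encoder, its dimensions, and the logistic-regression decision model are placeholders, not the authors' architecture.

```python
# Illustrative two-stage sketch (not the authors' model): (1) pretrain an
# image encoder on confident PI-RADS labels (1-2 vs. 4-5), (2) freeze it
# and fit a simple biopsy-decision model on the learned representations
# using biopsy-verified csPCa labels.
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

class Encoder(nn.Module):
    """Toy 3D encoder standing in for the representation learner."""
    def __init__(self, in_ch=2, dim=128):          # e.g., T2W + DWI channels
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(32, dim),
        )
        self.head = nn.Linear(dim, 1)               # PI-RADS low vs. high

    def forward(self, x):
        z = self.features(x)
        return self.head(z), z

# Stage 1 (pretraining on confident exams) would minimize BCEWithLogitsLoss
# between self.head(z) and PI-RADS-derived labels.

# Stage 2: frozen embeddings -> lightweight csPCa decision model.
def fit_decision_model(encoder, volumes, cspca_labels):
    encoder.eval()
    with torch.no_grad():
        _, z = encoder(volumes)                     # (N, dim) embeddings
    clf = LogisticRegression(max_iter=1000)
    clf.fit(z.numpy(), cspca_labels)
    return clf
```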

Multimodal generative AI for interpreting 3D medical images and videos

Lee, Jung-Oh; Zhou, Hong-Yu; Berzin, Tyler M; Sodickson, Daniel K; Rajpurkar, Pranav
This perspective proposes adapting video-text generative AI to 3D medical imaging (CT/MRI) and medical videos (endoscopy/laparoscopy) by treating 3D images as videos. The approach leverages modern video models to analyze multiple sequences simultaneously and provide real-time AI assistance during procedures. The paper examines medical imaging's unique characteristics (synergistic information, metadata, and world model), outlines applications in automated reporting, case retrieval, and education, and addresses challenges of limited datasets, benchmarks, and specialized training.
PMCID: 12075794
PMID: 40360694
ISSN: 2398-6352
CID: 5844212
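
A minimal sketch of the "treat a 3D image as a video" framing proposed in the entry above, assuming a PyTorch/torchvision environment: the slice axis of a volume is mapped to the time axis of a clip and passed through an off-the-shelf video backbone. The choice of r3d_18 and the input sizes are illustrative stand-ins; the perspective does not prescribe a particular model.

```python
# Sketch: reuse a video backbone as the visual encoder for a 3D volume by
# treating the slice axis as the time axis of a clip.
import torch
from torchvision.models.video import r3d_18

def volume_to_clip(volume: torch.Tensor) -> torch.Tensor:
    """Map a (D, H, W) grayscale volume to a (1, 3, T, H, W) video clip."""
    clip = volume.unsqueeze(0).repeat(3, 1, 1, 1)   # fake RGB channels
    return clip.unsqueeze(0)                        # add batch dimension

backbone = r3d_18(weights=None)                     # generic video encoder
backbone.fc = torch.nn.Identity()                   # keep pooled features

ct = torch.randn(64, 112, 112)                      # toy 64-slice volume
features = backbone(volume_to_clip(ct))             # (1, 512) embedding
```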

Prostate Cancer Risk Stratification and Scan Tailoring Using Deep Learning on Abbreviated Prostate MRI

Johnson, Patricia M; Dutt, Tarun; Ginocchio, Luke A; Saimbhi, Amanpreet Singh; Umapathy, Lavanya; Block, Kai Tobias; Sodickson, Daniel K; Chopra, Sumit; Tong, Angela; Chandarana, Hersh
BACKGROUND: MRI plays a critical role in prostate cancer (PCa) detection and management. Bi-parametric MRI (bpMRI) offers a faster, contrast-free alternative to multi-parametric MRI (mpMRI). Routine use of mpMRI for all patients may not be necessary, and a tailored imaging approach (bpMRI or mpMRI) based on individual risk might optimize resource utilization. PURPOSE: To develop and evaluate a deep learning (DL) model for classifying clinically significant PCa (csPCa) using bpMRI and to assess its potential for optimizing MRI protocol selection by recommending the additional sequences of mpMRI only when beneficial. STUDY TYPE: Retrospective and prospective. POPULATION: The DL model was trained and validated on 26,129 prostate MRI studies. A retrospective cohort of 151 patients (mean age 65 ± 8) with ground-truth verification from biopsy, prostatectomy, or long-term follow-up, alongside a prospective cohort of 142 treatment-naïve patients (mean age 65 ± 9) undergoing bpMRI, was evaluated. FIELD STRENGTH/SEQUENCE: 3 T; turbo spin-echo T2-weighted imaging (T2WI) and single-shot EPI diffusion-weighted imaging (DWI). ASSESSMENT: The DL model, based on a 3D ResNet-50 architecture, classified csPCa using PI-RADS ≥ 3 and Gleason ≥ 7 as outcome measures. The model was evaluated on a prospective cohort labeled by consensus of three radiologists and a retrospective cohort with ground-truth verification based on biopsy or long-term follow-up. Real-time inference was tested on an automated MRI workflow, providing classification results directly at the scanner. STATISTICAL TESTS: AUROC with 95% confidence intervals (CI) was used to evaluate model performance. RESULTS: In the prospective cohort, the model achieved an AUC of 0.83 (95% CI: 0.77-0.89) for PI-RADS ≥ 3 classification, with 93% sensitivity and 54% specificity. In the retrospective cohort, the model achieved an AUC of 0.86 (95% CI: 0.80-0.91) for Gleason ≥ 7 classification, with 93% sensitivity and 62% specificity. Real-time implementation demonstrated a processing latency of 14-16 s for protocol recommendations. DATA CONCLUSION: The proposed DL model identifies csPCa from bpMRI and can be integrated into clinical workflows. EVIDENCE LEVEL: 1. TECHNICAL EFFICACY: Stage 2.
PMID: 40259798
ISSN: 1522-2586
CID: 5830062
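
The entry above reports AUROC with 95% confidence intervals as its primary statistic. Below is a generic percentile-bootstrap sketch of that computation using scikit-learn; the bootstrap settings are assumptions and may differ from the authors' analysis.

```python
# Sketch: AUROC with a percentile-bootstrap 95% confidence interval.
import numpy as np
from sklearn.metrics import roc_auc_score

def auroc_with_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    auc = roc_auc_score(y_true, y_score)
    boot = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))    # resample cases
        if len(np.unique(y_true[idx])) < 2:                # need both classes
            continue
        boot.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return auc, (lo, hi)
```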

T1 Relaxation-Enhanced Steady-State Acquisition With Radial k-Space Sampling: A Novel Family of Pulse Sequences for Motion-Robust Volumetric T1-Weighted MRI With Improved Lesion Conspicuity

Zi, Ruoxun; Edelman, Robert R; Maier, Christoph; Keerthivasan, Mahesh; Lattanzi, Riccardo; Sodickson, Daniel K; Block, Kai Tobias
OBJECTIVES: Magnetization-prepared rapid gradient-echo (MP-RAGE) sequences are routinely acquired for brain exams, providing high conspicuity for enhancing lesions. Vessels, however, also appear bright, which can complicate the detection of small lesions. T1RESS (T1 relaxation-enhanced steady-state) sequences have been proposed as an alternative to MP-RAGE, offering improved lesion conspicuity and suppression of blood vessels. This work aims to evaluate the performance of radial T1RESS variants for motion-robust contrast-enhanced brain MRI. MATERIALS AND METHODS: Radial stack-of-stars sampling was implemented for steady-state free-precession-based rapid T1RESS acquisition with saturation recovery preparation. Three variants were developed using a balanced steady-state free-precession readout (bT1RESS), an unbalanced fast imaging with steady precession (FISP) readout (uT1RESS-FISP), and an unbalanced reversed FISP readout (uT1RESS-PSIF). Image contrast was evaluated in numerical simulations and phantom experiments. The motion robustness of radial T1RESS was demonstrated with a motion phantom. Four patients and six healthy volunteers were scanned at 3 T and 0.55 T. Extensions were developed combining T1RESS with GRASP for dynamic imaging, with GRAPPA for accelerated scans, and with Dixon for fat/water separation. RESULTS: In simulations and phantom scans, uT1RESS-FISP provided higher signal intensity for regions with lower T1 values (<500 ms) compared with MP-RAGE. In motion experiments, radial uT1RESS-FISP showed fewer artifacts than MP-RAGE and Cartesian uT1RESS-FISP. In patients, both unbalanced uT1RESS variants provided higher lesion conspicuity than MP-RAGE. Blood vessels appeared bright with MP-RAGE, gray with uT1RESS-FISP, and dark with uT1RESS-PSIF. At 0.55 T, bT1RESS provided high signal-to-noise ratio T1-weighted images without banding artifacts. Lastly, dynamic T1RESS images with a temporal resolution of 10.14 seconds/frame were generated using the GRASP algorithm. CONCLUSIONS: Radial T1RESS sequences offer improved lesion conspicuity and motion robustness and enable dynamic imaging for contrast-enhanced brain MRI. Both uT1RESS variants showed higher tumor-to-brain contrast than MP-RAGE and may find application as alternative techniques for imaging uncooperative patients with small brain lesions.
PMID: 40184541
ISSN: 1536-0210
CID: 5819432
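
As context for the radial stack-of-stars sampling named in the entry above, the sketch below generates a golden-angle stack-of-stars k-space trajectory: radial spokes in-plane and Cartesian encoding along the partition direction. Spoke, readout, and partition counts are illustrative, not the study protocol.

```python
# Sketch: golden-angle radial stack-of-stars k-space coordinates.
import numpy as np

GOLDEN_ANGLE = np.pi * (np.sqrt(5.0) - 1.0) / 2.0   # ~111.25 deg increment

def stack_of_stars(n_spokes=400, n_readout=256, n_partitions=64):
    """Return (kx, ky, kz): radial in-plane spokes, Cartesian partitions."""
    r = np.linspace(-0.5, 0.5, n_readout)           # normalized readout
    angles = GOLDEN_ANGLE * np.arange(n_spokes)     # golden-angle ordering
    kx = np.outer(np.cos(angles), r)                # (spokes, readout)
    ky = np.outer(np.sin(angles), r)
    kz = np.linspace(-0.5, 0.5, n_partitions)       # Cartesian kz partitions
    return kx, ky, kz
```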

Accelerating multi-coil MR image reconstruction using weak supervision

Atalık, Arda; Chopra, Sumit; Sodickson, Daniel K
Deep-learning-based MR image reconstruction in settings where large fully sampled dataset collection is infeasible requires methods that effectively use both under-sampled and fully sampled datasets. This paper evaluates a weakly supervised, multi-coil, physics-guided approach to MR image reconstruction, leveraging both dataset types, to improve both the quality and robustness of reconstruction. A physics-guided end-to-end variational network (VarNet) is pretrained in a self-supervised manner using a 4 …
PMID: 39382814
ISSN: 1352-8661
CID: 5730182
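
The self-supervised pretraining mentioned in the entry above is sketched below in the spirit of k-space-splitting approaches (e.g., SSDU): the acquired samples are split into two disjoint subsets, the network reconstructs from one, and the loss is evaluated on the other. The recon_net interface and the per-coil FFT consistency check are assumptions, not the paper's exact formulation.

```python
# Sketch of a self-supervised k-space-splitting loss for reconstruction
# networks; `recon_net` is a placeholder for a physics-guided network
# (e.g., a VarNet-style unrolled model) assumed to return per-coil
# complex images.
import torch

def self_supervised_loss(recon_net, kspace, mask, split_frac=0.6, seed=0):
    """kspace: (coils, H, W) complex tensor; mask: (H, W) boolean sampling mask."""
    g = torch.Generator().manual_seed(seed)
    keep = (torch.rand(mask.shape, generator=g) < split_frac) & mask
    held_out = mask & ~keep                        # disjoint held-out samples
    pred_image = recon_net(kspace * keep, keep)    # reconstruct from subset
    pred_kspace = torch.fft.fft2(pred_image)       # back to k-space per coil
    diff = (pred_kspace - kspace) * held_out
    return diff.abs().pow(2).mean()                # loss only on held-out data
```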

DeepEMC-T2 mapping: Deep learning-enabled T2 mapping based on echo modulation curve modeling

Pei, Haoyang; Shepherd, Timothy M; Wang, Yao; Liu, Fang; Sodickson, Daniel K; Ben-Eliezer, Noam; Feng, Li
PURPOSE: … T2 maps from fewer echoes. METHODS: … DeepEMC-T2 mapping was evaluated in seven experiments. RESULTS: … T2 estimation. CONCLUSIONS: T2 estimation from fewer echoes allows for increased volumetric coverage and/or higher slice resolution without prolonging total scan times.
PMCID: 11436299
PMID: 39129209
ISSN: 1522-2594
CID: 5706952
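
Because the abstract above is only partially recoverable, the sketch below shows just the conventional baseline that echo-modulation-curve (EMC) methods improve upon: a log-linear mono-exponential T2 fit from multi-echo magnitudes. It is not the paper's DeepEMC-T2 method.

```python
# Baseline sketch: mono-exponential T2 fit from multi-echo signals.
import numpy as np

def monoexp_t2(echo_times_ms: np.ndarray, signals: np.ndarray) -> tuple:
    """Fit S(TE) = S0 * exp(-TE / T2) by linear regression on log(S)."""
    log_s = np.log(np.clip(signals, 1e-9, None))
    slope, intercept = np.polyfit(echo_times_ms, log_s, 1)
    return -1.0 / slope, np.exp(intercept)          # (T2 in ms, S0)

# Example: 10 echoes, TE = 10..100 ms, true T2 = 80 ms, S0 = 1000.
te = np.arange(10, 101, 10, dtype=float)
sig = 1000.0 * np.exp(-te / 80.0)
print(monoexp_t2(te, sig))                          # ~ (80.0, 1000.0)
```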

The perils and the promise of whole-body MRI: why we may be debating the wrong things

Sodickson, Daniel K
PMID: 39251175
ISSN: 1558-349X
CID: 5690082

An experimental system for detection and localization of hemorrhage using ultra-wideband microwaves with deep learning

Hedayati, Eisa; Safari, Fatemeh; Verghese, George; Ciancia, Vito R; Sodickson, Daniel K; Dehkharghani, Seena; Alon, Leeor
Stroke is a leading cause of mortality and disability. Emergent diagnosis and intervention are critical, and predicated upon initial brain imaging; however, existing clinical imaging modalities are generally costly, immobile, and demand highly specialized operation and interpretation. Low-energy microwaves have been explored as a low-cost, small-form-factor, fast, and safe probe for measuring tissue dielectric properties, with both imaging and diagnostic potential. Nevertheless, challenges inherent to microwave reconstruction have impeded progress, and practical microwave imaging remains an elusive scientific aim. Herein, we introduce a dedicated experimental framework comprising a robotic navigation system to translate blood-mimicking phantoms within a human head model. An 8-element ultra-wideband array of modified antipodal Vivaldi antennas was developed and driven by a two-port vector network analyzer spanning 0.6-9.0 GHz at an operating power of 1 mW. Complex scattering parameters were measured, and dielectric signatures of hemorrhage were learned using a dedicated deep neural network for prediction of hemorrhage classes and localization. Overall sensitivity and specificity for detection were both >0.99, with a Rayleigh mean localization error of 1.65 mm. The study establishes the feasibility of a robust experimental model and deep learning solution for ultra-wideband microwave stroke detection.
PMID: 39242634
ISSN: 2731-3395
CID: 5688452
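
A toy sketch of the learning setup described in the entry above: complex scattering parameters from the antenna array are flattened and mapped by a small network to a hemorrhage class and a 3D location. Array size, frequency-point count, class count, and the architecture are assumptions, not the authors' network.

```python
# Sketch: map complex S-parameters to a hemorrhage class and 3D location.
import torch
import torch.nn as nn

N_ANT, N_FREQ, N_CLASSES = 8, 256, 4                # assumed dimensions
IN_DIM = N_ANT * N_ANT * N_FREQ * 2                 # real + imaginary parts

class HemorrhageNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(IN_DIM, 512), nn.ReLU(),
            nn.Linear(512, 128), nn.ReLU(),
        )
        self.cls_head = nn.Linear(128, N_CLASSES)    # hemorrhage class logits
        self.loc_head = nn.Linear(128, 3)            # (x, y, z) location

    def forward(self, s_params: torch.Tensor):
        """s_params: (batch, N_ANT, N_ANT, N_FREQ) complex tensor."""
        x = torch.view_as_real(s_params).flatten(1)  # split re/im, flatten
        h = self.trunk(x)
        return self.cls_head(h), self.loc_head(h)

net = HemorrhageNet()
s = torch.randn(2, N_ANT, N_ANT, N_FREQ, dtype=torch.complex64)
logits, xyz = net(s)                                 # shapes (2, 4), (2, 3)
```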

Preliminary Experience with Three Alternative Motion Sensors for 0.55 Tesla MR Imaging

Tibrewala, Radhika; Brantner, Douglas; Brown, Ryan; Pancoast, Leanna; Keerthivasan, Mahesh; Bruno, Mary; Block, Kai Tobias; Madore, Bruno; Sodickson, Daniel K; Collins, Christopher M
Motivated by limitations in current motion-tracking technologies and by increasing interest in alternative sensors for motion tracking both inside and outside the MRI system, in this study we share our preliminary experience with three alternative sensors that use diverse technologies and interactions with tissue to monitor motion of the body surface, respiratory-related motion of major organs, and non-respiratory motion of deep-seated organs. These consist of (1) a Pilot-Tone RF transmitter combined with deep learning algorithms for tracking liver motion, (2) a single-channel ultrasound transducer with deep learning for monitoring bladder motion, and (3) a 3D Time-of-Flight camera for observing the motion of the anterior torso surface. Additionally, we demonstrate the capability of these sensors to simultaneously capture motion data outside the MRI environment, which is particularly relevant for procedures like radiation therapy, where motion status could be related to previously characterized cyclical anatomical data. Our findings indicate that the ultrasound sensor can track motion in deep-seated organs (bladder) as well as respiratory-related motion. The Time-of-Flight camera offers ease of interpretation and performs well in detecting surface motion (respiration). The Pilot-Tone demonstrates efficacy in tracking bulk respiratory motion and motion of major organs (liver). Simultaneous use of all three sensors could provide complementary motion information outside the MRI bore, offering potential value for motion tracking during position-sensitive treatments such as radiation therapy.
PMCID: 11207459
PMID: 38931494
ISSN: 1424-8220
CID: 5698062
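
One of the sensing ideas above, extracting a respiratory trace from time-of-flight depth frames, can be sketched as follows: average the depth over a chest ROI per frame and band-pass filter to the breathing band. ROI and filter settings are illustrative; the deep-learning processing used for the Pilot-Tone and ultrasound channels is not shown.

```python
# Sketch: respiratory waveform from time-of-flight depth frames.
import numpy as np
from scipy.signal import butter, filtfilt

def respiratory_trace(depth_frames: np.ndarray, roi, fps: float = 30.0):
    """depth_frames: (T, H, W) depth maps in mm; roi: (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = roi
    trace = depth_frames[:, y0:y1, x0:x1].mean(axis=(1, 2))   # mean chest depth
    b, a = butter(2, [0.1, 0.7], btype="band", fs=fps)        # ~6-42 breaths/min
    return filtfilt(b, a, trace - trace.mean())
```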