Self-Supervised Noise Adaptive MRI Denoising via Repetition to Repetition (Rep2Rep) Learning
Janjušević, Nikola; Chen, Jingjia; Ginocchio, Luke; Bruno, Mary; Huang, Yuhui; Wang, Yao; Chandarana, Hersh; Feng, Li
PURPOSE/OBJECTIVE: To develop a self-supervised, noise-adaptive denoising method for low-field MRI that leverages routinely acquired multi-repetition data without clean reference images. METHODS: Rep2Rep learning extends the Noise2Noise framework by training a neural network on two repeated MRI acquisitions, using one repetition as input and the other as target, without requiring ground-truth data. It incorporates noise-adaptive training, enabling denoising generalization across varying noise levels and flexible inference with any number of repetitions. Performance was evaluated on both synthetic noisy brain MRI and 0.55T prostate MRI data and compared against supervised learning and the Monte Carlo Stein's Unbiased Risk Estimator (MC-SURE). RESULTS: Rep2Rep learning outperformed MC-SURE on both the synthetic and 0.55T MRI datasets. On synthetic brain data, it achieved denoising quality comparable to supervised learning and surpassed MC-SURE, particularly in preserving structural details and reducing residual noise. On the 0.55T prostate MRI data, a reader study showed that Rep2Rep-denoised 2-average images outperformed 8-average noisy images. Rep2Rep demonstrated robustness to noise-level discrepancies between training and inference, supporting its practical implementation. CONCLUSION/CONCLUSIONS: Rep2Rep learning offers an effective self-supervised denoising approach for low-field MRI by leveraging routinely acquired multi-repetition data. Its noise adaptivity enables generalization to different SNR regimes without clean reference images, making Rep2Rep learning a promising tool for improving image quality and scan efficiency in low-field MRI.
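As an illustration of the repetition-to-repetition training idea summarized in the abstract above, the sketch below trains a toy denoiser with one repetition as input and the other as target, with a noise-level map concatenated to the input as a simple form of noise conditioning. The tiny CNN, the synthetic data, and this particular conditioning scheme are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of repetition-to-repetition (Noise2Noise-style) training.
# The tiny CNN, the synthetic data, and the noise-level input channel are
# illustrative assumptions, not the published Rep2Rep implementation.
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        # 2 input channels: noisy image + per-pixel noise-level map (noise adaptivity)
        self.net = nn.Sequential(
            nn.Conv2d(2, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, noisy, sigma):
        sigma_map = sigma.view(-1, 1, 1, 1).expand_as(noisy)
        return self.net(torch.cat([noisy, sigma_map], dim=1))

model = TinyDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    clean = torch.rand(4, 1, 64, 64)                # stand-in anatomy
    sigma = torch.rand(4) * 0.2                     # per-sample noise level
    rep1 = clean + sigma.view(-1, 1, 1, 1) * torch.randn_like(clean)
    rep2 = clean + sigma.view(-1, 1, 1, 1) * torch.randn_like(clean)
    # One repetition is the network input, the other is the target:
    # with zero-mean independent noise, the expected minimizer is the clean image.
    loss = loss_fn(model(rep1, sigma), rep2)
    opt.zero_grad(); loss.backward(); opt.step()
```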
PMID: 41208014
ISSN: 1522-2594
CID: 5966372
Pole-To-Pole 3D Radial Trajectory Designs Improve Image Quality and Quantitative Parametric Mapping in the Brain and Heart
Peper, Eva S; Bauman, Grzegorz; Tagliabue, Matteo; Açikgöz, Berk C; Plähn, Nils M J; Mackowiak, Adèle L C; Safarkhanlo, Yasaman; Woods, Joseph G; Piccini, Davide; Feng, Li; Roy, Christopher W; Bieri, Oliver; Bastiaansen, Jessica A M
PURPOSE/OBJECTIVE: To design 3D radial spiral phyllotaxis trajectories aimed at removing phase inconsistencies, improving image quality, and enhancing parametric mapping accuracy by acquiring nearly opposing spokes starting from both hemispheres in 3D radial k-space. METHODS: Two 3D radial trajectories, pole-to-pole and continuous spiral phyllotaxis, were developed and implemented on a 3T MRI scanner in a phase-cycled balanced steady-state free precession (bSSFP) and a spoiled gradient-echo (GRE) sequence. Image quality and k-space center phase variations were evaluated in a spherical phantom using the original and new radial phyllotaxis designs. T1 and T2 were quantified and compared using phase-cycled bSSFP data acquired with the new radial trajectory designs, as well as the original phyllotaxis trajectory and a Cartesian trajectory as references, in both an MRI system phantom and the brains of three healthy volunteers. ECG-triggered whole-heart GRE data were acquired using the original and pole-to-pole phyllotaxis trajectories in three healthy volunteers and compared for image quality improvement. RESULTS: All 3D radial trajectory designs showed variations in the k-space center phase depending on the orientation of the readout spokes. Image quality improved with the pole-to-pole and continuous phyllotaxis designs compared with the original trajectory. Scans using the original trajectory had higher T1/T2 estimation errors compared with the new trajectories and the Cartesian trajectory. The pole-to-pole and continuous trajectories improved T1/T2 maps of the brain and image quality for all cardiac images. CONCLUSION/CONCLUSIONS: Acquiring nearly opposing spokes in 3D radial trajectory designs compensates for phase inconsistencies without requiring additional corrections, which improves quantitative imaging and anatomical visualizations.
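To make the spoke geometry concrete, the snippet below generates a generic spiral-phyllotaxis-like distribution of 3D radial spoke endpoints and then flips every other spoke to the opposite hemisphere so that consecutive spokes point in nearly opposing directions. The formulas and the simple alternating flip are assumptions for illustration only; they are not the exact pole-to-pole or continuous designs evaluated in the paper.

```python
# Conceptual sketch of a spiral-phyllotaxis-like 3D radial spoke distribution and a
# pole-to-pole-style interleaving in which alternate spokes start from opposite
# hemispheres. These formulas are a generic approximation, not the published designs.
import numpy as np

def phyllotaxis_spokes(n_spokes):
    """Unit-vector endpoints of 3D radial spokes covering the upper hemisphere."""
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))          # ~137.5 degrees
    n = np.arange(n_spokes)
    polar = 0.5 * np.pi * np.sqrt(n / n_spokes)          # spiral from pole to equator
    azimuth = n * golden_angle
    return np.stack([np.sin(polar) * np.cos(azimuth),
                     np.sin(polar) * np.sin(azimuth),
                     np.cos(polar)], axis=1)

def pole_to_pole_interleave(endpoints):
    """Flip every other spoke to the opposite hemisphere so that consecutive
    spokes are acquired in nearly opposing directions."""
    out = endpoints.copy()
    out[1::2] *= -1.0
    return out

spokes = pole_to_pole_interleave(phyllotaxis_spokes(512))
print(spokes.shape, np.linalg.norm(spokes, axis=1).round(3)[:4])  # (512, 3), unit norm
```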
PMID: 41486091
ISSN: 1522-2594
CID: 5980512
Self-Supervised Joint Reconstruction and Denoising of T2-Weighted PROPELLER MRI of the Lung at 0.55T
Chen, Jingjia; Pei, Haoyang; Maier, Christoph; Bruno, Mary; Wen, Qiuting; Shin, Seon-Hi; Moore, William; Chandarana, Hersh; Feng, Li
PURPOSE/OBJECTIVE: To improve 0.55T T2-weighted PROPELLER lung MRI by developing a self-supervised framework for joint reconstruction and denoising. METHODS: T2-weighted 0.55T lung MRI datasets from 44 patients with prior COVID-19 infection were used. Each PROPELLER blade was split along the readout direction into two disjoint subsets: one subset for training an unrolled network and the other for loss calculation. Following the Noise2Noise paradigm, this framework split k-space into two subsets with independent, matched noise but identical underlying signal, enabling joint reconstruction and denoising without external training references. For comparison, coil-wise Marchenko-Pastur Principal Component Analysis (MPPCA) denoising followed by parallel imaging reconstruction was performed. The reconstructed images were evaluated by two experienced chest radiologists. RESULTS: The self-supervised model generated lung images with improved clarity and better delineation of parenchymal and airway structures, and maintained high fidelity in cases with available CT references. In addition, the proposed framework enabled further reduction of scan time by reconstructing images of adequate diagnostic quality from only half the number of blades. The reader study confirmed that the proposed method outperformed MPPCA across all categories (Wilcoxon signed-rank test, p < 0.001), with moderate inter-reader agreement (weighted Cohen's kappa = 0.55; percentage of exact and within ±1 point agreement = 91%). CONCLUSION/CONCLUSIONS: By leveraging the intrinsic data redundancy in PROPELLER sampling and extending the Noise2Noise concept, the proposed self-supervised framework enabled simultaneous reconstruction and denoising of lung images at 0.55T, addressing the low-SNR challenge at low field strengths. It holds great potential for broad use in other low-field MRI applications.
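The core data-splitting step described above can be sketched as follows: each blade's k-space samples are divided along the readout direction into two disjoint subsets that share the underlying signal but carry independent noise, one feeding the network and the other supervising the loss. The even/odd readout split, zero-filling, and array layout below are illustrative assumptions, not the paper's implementation details.

```python
# Minimal sketch of the blade-splitting idea: each PROPELLER blade's k-space is divided
# along the readout direction into two readout-disjoint subsets, one used as network
# input and the other to compute the Noise2Noise-style loss.
import numpy as np

def split_blade(blade_kspace):
    """blade_kspace: complex array of shape (n_phase_encodes, n_readout).
    Returns two readout-disjoint copies with missing samples zero-filled,
    plus the corresponding sampling masks."""
    n_ro = blade_kspace.shape[-1]
    mask_a = np.zeros(n_ro, dtype=bool); mask_a[0::2] = True   # even readout samples
    mask_b = ~mask_a                                            # odd readout samples
    sub_a = np.where(mask_a, blade_kspace, 0)
    sub_b = np.where(mask_b, blade_kspace, 0)
    return (sub_a, mask_a), (sub_b, mask_b)

blade = (np.random.randn(32, 256) + 1j * np.random.randn(32, 256)).astype(np.complex64)
(input_k, input_mask), (target_k, target_mask) = split_blade(blade)
# input_k would feed the unrolled reconstruction network; the loss would compare the
# network output, re-sampled onto target_mask, against target_k.
```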
PMID: 41387224
ISSN: 1522-2594
CID: 5978122
Multisession Longitudinal Dynamic MRI Incorporating Patient-Specific Prior Image Information Across Time
Chen, Jingjia; Chandarana, Hersh; Sodickson, Daniel K; Feng, Li
Serial Magnetic Resonance Imaging (MRI) exams are often performed in clinical practice, offering shared anatomical and motion information across imaging sessions. However, existing reconstruction methods process each session independently without leveraging this valuable longitudinal information. In this work, we propose a novel concept of longitudinal dynamic MRI, which incorporates patient-specific prior images to exploit temporal correlations across sessions. This framework enables progressive acceleration of data acquisition and reduction of scan time as more imaging sessions become available. The concept is demonstrated using 4D Golden-angle RAdial Sparse Parallel (GRASP) MRI, a state-of-the-art dynamic imaging technique. Longitudinal reconstruction is performed by concatenating multi-session time-resolved 4D GRASP datasets into an extended dynamic series, followed by a low-rank subspace-based reconstruction algorithm. A series of experiments were conducted to evaluate the feasibility and performance of the proposed method. Results show that longitudinal 4D GRASP reconstruction consistently outperforms standard single-session reconstruction in image quality, while preserving inter-session variations. The approach demonstrated robustness to changes in anatomy, imaging intervals, and body contour, highlighting its potential for improving imaging efficiency and consistency in longitudinal MRI applications. More generally, this work suggests a new context-aware imaging paradigm in which the more we see a patient, the faster we can image.
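The subspace idea at the heart of the longitudinal reconstruction can be illustrated schematically: time frames from multiple sessions are stacked into one extended Casorati matrix, a shared low-rank temporal basis is estimated, and each session is represented by a small number of subspace coefficients. The sketch below uses random arrays and an SVD in the image domain as stand-ins; the actual GRASP pipeline (radial sampling, NUFFT, coil sensitivities, iterative reconstruction) is omitted, and the shapes and rank are illustrative assumptions.

```python
# Schematic of the longitudinal low-rank subspace idea: concatenate sessions into one
# extended dynamic series, estimate a shared temporal basis by SVD, and represent each
# session with a few subspace coefficient maps. Not the published GRASP reconstruction.
import numpy as np

n_voxels, frames_per_session, rank = 4096, 30, 6
session_1 = np.random.randn(n_voxels, frames_per_session)   # prior exam (placeholder)
session_2 = np.random.randn(n_voxels, frames_per_session)   # current exam (placeholder)

casorati = np.concatenate([session_1, session_2], axis=1)    # voxels x (all time frames)
_, _, vt = np.linalg.svd(casorati, full_matrices=False)
temporal_basis = vt[:rank]                                   # shared low-rank subspace

# Each session is then described by rank-k coefficient maps instead of full frames,
# which is what allows more aggressive undersampling of later sessions.
coeffs_2 = session_2 @ temporal_basis[:, frames_per_session:].T   # (voxels x rank)
recon_2 = coeffs_2 @ temporal_basis[:, frames_per_session:]       # low-rank approximation
```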
PMCID: 12310133
PMID: 40740507
ISSN: 2331-8422
CID: 5981862
Deep learning-based generation of DSC MRI parameter maps using DCE MRI data
Pei, Haoyang; Lyu, Yixuan; Lambrecht, Sebastian; Lin, Doris; Feng, Li; Liu, Fang; Nyquist, Paul; van Zijl, Peter; Knutsson, Linda; Xu, Xiang
BACKGROUND AND PURPOSE/OBJECTIVE: Perfusion and perfusion-related parameter maps obtained using dynamic susceptibility contrast (DSC) MRI and dynamic contrast-enhanced (DCE) MRI are both useful for clinical diagnosis and research. However, using both DSC and DCE MRI in the same scan session requires two doses of gadolinium contrast agent. The objective was to develop deep learning-based methods to synthesize DSC-derived parameter maps from DCE MRI data. MATERIALS AND METHODS/METHODS: Independent analysis of data collected in previous studies was performed. The database contained sixty-four participants, including patients with and without brain tumors. The reference parameter maps were measured from DSC MRI performed following DCE MRI. A conditional generative adversarial network (cGAN) was designed and trained to generate synthetic DSC-derived maps from DCE MRI data. The median parameter values and distributions between synthetic and real maps were compared using linear regression and Bland-Altman plots. RESULTS: Using cGAN, realistic DSC parameter maps could be synthesized from DCE MRI data. For controls without brain tumors, the synthesized parameters had distributions similar to the ground truth values. For patients with brain tumors, the synthesized parameters in the tumor region correlated linearly with the ground truth values. In addition, areas not visible due to susceptibility artifacts in real DSC maps could be visualized using DCE-derived DSC maps. CONCLUSIONS: DSC-derived parameter maps could be synthesized using DCE MRI data, including in susceptibility-artifact-prone regions. This shows the potential to obtain both DSC and DCE parameter maps from DCE MRI using a single dose of contrast agent.
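The following is a minimal conditional-GAN training step in the spirit of the image-to-image synthesis described above, with the generator conditioned on a DCE-derived input and the discriminator judging (input, parameter map) pairs. The tiny networks, single-channel tensors, L1 weighting, and random data are placeholders, not the architecture used in the study.

```python
# Minimal conditional-GAN (cGAN) training step for image-to-image map synthesis.
# Networks, losses, and data below are illustrative placeholders only.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 1, 3, padding=1))                  # DCE input -> synthetic DSC map
D = nn.Sequential(nn.Conv2d(2, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                  nn.Conv2d(32, 1, 3, stride=2, padding=1))        # judges (condition, map) pairs
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

dce_input = torch.rand(4, 1, 64, 64)        # placeholder DCE-derived input
real_dsc = torch.rand(4, 1, 64, 64)         # placeholder reference DSC parameter map

# Discriminator step: real pairs vs. generated pairs.
fake_dsc = G(dce_input).detach()
d_real = D(torch.cat([dce_input, real_dsc], dim=1))
d_fake = D(torch.cat([dce_input, fake_dsc], dim=1))
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the discriminator while staying close to the reference map.
fake_dsc = G(dce_input)
d_fake = D(torch.cat([dce_input, fake_dsc], dim=1))
loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake_dsc, real_dsc)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```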
PMID: 40194853
ISSN: 1936-959x
CID: 5823672
Accelerated Abdominal MRI: A Review of Current Methods and Applications
Feng, Li; Chandarana, Hersh
MRI is widely used for the diagnosis and management of various abdominal diseases involving organs such as the liver, pancreas, and kidneys. However, one major limitation of MRI is its relatively slow imaging speed compared to other modalities. In addition, respiratory motion poses a significant challenge in abdominal MRI, often requiring patients to hold their breath multiple times during an exam. This requirement can be particularly challenging for sick, elderly, and pediatric patients, who may have reduced breath-holding capacity. As a result, rapid imaging plays an important role in routine clinical abdominal MRI exams. Accelerated data acquisition not only reduces overall exam time but also shortens breath-hold durations, thereby improving patient comfort and compliance. Over the past decade, significant advancements in rapid MRI have led to the development of various accelerated imaging techniques for routine clinical use. These methods improve abdominal MRI by enhancing imaging speed, motion compensation, and overall image quality. Integrating these techniques into clinical practice also enables new applications that were previously challenging. This paper provides a concise yet comprehensive overview of rapid imaging techniques applicable to abdominal MRI and discusses their advantages, limitations, and potential clinical applications. By the end of this review, readers are expected to learn the latest advances in accelerated abdominal MRI and explore new frontiers in this evolving field. Evidence Level: N/A Technical Efficacy: Stage 5.
PMID: 40103292
ISSN: 1522-2586
CID: 5813342
Visual-language artificial intelligence system for knee radiograph diagnosis and interpretation: a collaborative system with humans
He, Xingxin; Stewart, Zachary E; Crasta, Nikitha; Nukala, Varun; Jang, Albert; Zhou, Zhaoye; Kijowski, Richard; Feng, Li; Peng, Wei; van der Heijden, Rianne A; Lee, Kenneth S; Li, Shasha; Tanaka, Miho J; Liu, Fang
BACKGROUND/UNASSIGNED: Large language models (LLMs) have shown promising abilities in text-based clinical tasks, but they do not inherently interpret medical images such as knee radiographs. PURPOSE/UNASSIGNED: To develop a human-artificial intelligence interactive diagnostic approach, named radiology generative pretrained transformer (RadGPT), aimed at assisting and synergizing with human users for the interpretation of knee radiographs. MATERIALS AND METHODS/UNASSIGNED: A total of 22 512 knee radiographs and associated reports were retrieved from Massachusetts General Hospital; 80% of these were used for model training, and 10% each were used for validation and testing. Fifteen diagnostic imaging features (eg, osteoarthritis, effusion, joint space narrowing, osteophyte) were selected to label images based on their high frequency and clinical relevance in the retrieved official reports. Area under the curve (AUC) scores were calculated for each feature to assess diagnostic performance. To evaluate the quality of the generated medical text, historical clinical reports were used as the reference text, and several metrics for text generation were applied, including BiLingual Evaluation Understudy (BLEU), Recall-Oriented Understudy for Gisting Evaluation (ROUGE), Metric for Evaluation of Translation with Explicit Ordering (METEOR), and Semantic Propositional Image Caption Evaluation (SPICE). RESULTS/UNASSIGNED: RadGPT, in collaboration with human users, achieved AUC scores ranging from 0.76 for osteonecrosis to 0.91 for arthroplasty across 15 diagnostic categories for knee conditions. Compared with the baseline LLM method, RadGPT achieved significantly higher scores of 0.18 in BLEU, 0.30 in ROUGE-L, 0.10 in METEOR, and 0.15 in SPICE, demonstrating good linguistic overlap and clinical consistency with the reference reports. CONCLUSION/UNASSIGNED: RadGPT achieved strong performance in knee radiograph feature recognition, illustrating the potential of LLMs in medical image interpretation. The study establishes a training protocol for developing artificial intelligence-assisted tools specifically focused on the diagnosis and interpretation of knee radiological images.
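Below is a sketch of how report-level text metrics and per-feature AUC of the sort reported above can be computed with common open-source tools (nltk for BLEU, the rouge-score package for ROUGE-L, and scikit-learn for AUC). The report strings, labels, and probabilities are toy placeholders, and METEOR and SPICE are omitted for brevity; this is not the study's evaluation code.

```python
# Toy evaluation sketch: text-overlap metrics for a generated report against a reference
# report, plus AUC for one binary diagnostic feature.
# Requires: nltk, rouge-score, scikit-learn.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer
from sklearn.metrics import roc_auc_score

reference = "moderate tricompartmental osteoarthritis with joint space narrowing"
generated = "moderate osteoarthritis with joint space narrowing"

bleu = sentence_bleu([reference.split()], generated.split(),
                     smoothing_function=SmoothingFunction().method1)
rouge_l = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True) \
    .score(reference, generated)["rougeL"].fmeasure

# Per-feature diagnostic performance: true binary labels vs. predicted probabilities.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_prob = [0.9, 0.2, 0.7, 0.6, 0.3, 0.4, 0.8, 0.1]
auc = roc_auc_score(y_true, y_prob)

print(f"BLEU={bleu:.2f}  ROUGE-L={rouge_l:.2f}  AUC={auc:.2f}")
```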
PMCID: 12483153
PMID: 41058736
ISSN: 2976-9337
CID: 5951872
Association between allostatic load and accelerated white matter brain aging: findings from the UK biobank
Feng, Li; Ye, Zhenyao; Du, Zewen; Pan, Yezhi; Canida, Travis; Ke, Hongjie; Liu, Song; Chen, Shuo; Hong, L Elliot; Kochunov, Peter; Chen, Jie; Lei, David K Y; Shenassa, Edmond; Ma, Tianzhou
White matter (WM) brain age, a neuroimaging-derived biomarker indicating WM microstructural changes, helps predict dementia and neurodegenerative disorder risks. The cumulative effect of chronic stress on WM brain aging remains unknown. In this study, we assessed cumulative stress using a multi-system composite allostatic load (AL) index based on inflammatory, anthropometric, respiratory, lipidemia, and glucose metabolism measures, and investigated its association with the WM brain age gap (BAG), computed from diffusion tensor imaging data using a machine learning model, among 22 951 participants of European ancestry aged 40 to 69 years (51.40% women) from the UK Biobank. Linear regression and Mendelian randomization, along with inverse probability weighting and doubly robust methods, were used to evaluate the impact of AL on WM BAG, adjusting for age, sex, socioeconomic status, and lifestyle behaviors. We found that each one-unit increase in the AL score was significantly associated with a 0.29-year increase in WM BAG in the association analysis and a 0.33-year increase in the Mendelian randomization analysis. The age- and sex-stratified analysis showed consistent results among participants 45-54 and 55-64 years old, with no significant sex difference. This study demonstrated that higher chronic stress was significantly associated with accelerated brain aging, highlighting the importance of stress management in reducing dementia and neurodegenerative disease risks.
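As a schematic of the covariate-adjusted association analysis described above, the snippet below regresses a simulated brain age gap on an allostatic load index with age, sex, and a deprivation proxy as covariates using statsmodels. The simulated data, covariate set, and effect size are assumptions for illustration; the Mendelian randomization, inverse probability weighting, and doubly robust analyses are not shown.

```python
# Illustrative covariate-adjusted regression of WM brain age gap (BAG) on an
# allostatic load (AL) index. All data below are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "age": rng.uniform(40, 69, n),
    "sex": rng.integers(0, 2, n),
    "townsend": rng.normal(0, 1, n),          # socioeconomic deprivation proxy
    "al_score": rng.integers(0, 9, n),        # composite allostatic load index
})
# Simulate a BAG that increases ~0.3 years per AL unit, as a stand-in outcome.
df["wm_bag"] = 0.3 * df["al_score"] + 0.02 * (df["age"] - 55) + rng.normal(0, 3, n)

model = smf.ols("wm_bag ~ al_score + age + C(sex) + townsend", data=df).fit()
print(model.params["al_score"], model.conf_int().loc["al_score"].tolist())
```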
PMID: 39393834
ISSN: 1476-6256
CID: 5751592
Spatiotemporal Implicit Neural Representation for Unsupervised Dynamic MRI Reconstruction
Feng, Jie; Feng, Ruimin; Wu, Qing; Shen, Xin; Chen, Lixuan; Li, Xin; Feng, Li; Chen, Jingjia; Zhang, Zhiyong; Liu, Chunlei; Zhang, Yuyao; Wei, Hongjiang
Supervised Deep-Learning (DL)-based reconstruction algorithms have shown state-of-the-art results for highly undersampled dynamic Magnetic Resonance Imaging (MRI) reconstruction. However, the requirement for large amounts of high-quality ground-truth data and the associated generalization problem hinder their application. Recently, Implicit Neural Representation (INR) has emerged as a powerful DL-based tool for solving the inverse problem by characterizing the attributes of a signal as a continuous function of corresponding coordinates in an unsupervised manner. In this work, we proposed an INR-based method to improve dynamic MRI reconstruction from highly undersampled k-space data, which only takes spatiotemporal coordinates as inputs and does not require any training on external datasets or transfer learning from prior images. Specifically, the proposed method encodes the dynamic MRI images into neural networks as an implicit function, and the weights of the network are learned only from the sparsely acquired (k, t)-space data itself. Benefiting from the strong implicit continuity regularization of INR, together with explicit regularization for low-rankness and sparsity, our proposed method outperforms the compared state-of-the-art methods at various acceleration factors. For example, experiments on retrospective cardiac cine datasets show an improvement of 0.6-2.0 dB in PSNR at high acceleration factors (up to 40.8×). The high quality and inherent continuity of the images provided by INR show great potential to further improve the spatiotemporal resolution of dynamic MRI. The code is available at: https://github.com/AMRI-Lab/INR_for_DynamicMRI.
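The coordinate-based representation described above can be sketched with a small MLP that maps continuous (x, y, t) coordinates, lifted with Fourier features, to signal intensity, with its weights fit to samples of a single dynamic series. For brevity the toy example below fits in the image domain; the published method instead enforces consistency with the acquired (k, t)-space data through the MRI forward model, and the network size and encoding here are illustrative assumptions. The repository linked in the abstract contains the authors' actual implementation.

```python
# Toy spatiotemporal INR: an MLP maps (x, y, t) coordinates to intensity and is fit to
# measured samples of one dynamic series. Forward-model data consistency is omitted.
import torch
import torch.nn as nn

class CoordMLP(nn.Module):
    def __init__(self, hidden=128, n_freq=8):
        super().__init__()
        self.freqs = 2.0 ** torch.arange(n_freq) * torch.pi   # Fourier feature frequencies
        self.mlp = nn.Sequential(nn.Linear(3 * 2 * n_freq, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, coords):                                  # coords: (N, 3) in [-1, 1]
        proj = coords.unsqueeze(-1) * self.freqs                # (N, 3, n_freq)
        feats = torch.cat([proj.sin(), proj.cos()], dim=-1).flatten(1)
        return self.mlp(feats)

model = CoordMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

coords = torch.rand(2048, 3) * 2 - 1                            # sampled (x, y, t) locations
targets = torch.sin(3 * coords[:, :1]) * torch.cos(2 * coords[:, 2:])  # toy dynamic signal
for _ in range(200):
    loss = nn.functional.mse_loss(model(coords), targets)
    opt.zero_grad(); loss.backward(); opt.step()
```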
PMID: 40030861
ISSN: 1558-254x
CID: 5981842
Dynamic MRI with Locally Low-Rank Subspace Constraint: Towards 1-Second Temporal Resolution Aided by Deep Learning
Solomon, Eddy; Bae, Jonghyun; Moy, Linda; Heacock, Laura; Feng, Li; Kim, Sungheon Gene
MRI is the most effective method for screening high-risk breast cancer patients. While current exams primarily rely on the qualitative evaluation of morphological features before and after contrast administration and less on contrast kinetic information, the latest developments in acquisition protocols aim to combine both. However, balancing spatial and temporal resolution poses a significant challenge in dynamic MRI. Here, we propose a radial MRI reconstruction framework for Dynamic Contrast-Enhanced (DCE) imaging, which offers a joint solution to existing spatial and temporal MRI limitations. It leverages a locally low-rank (LLR) subspace model to represent spatially localized dynamics based on tissue information. Our framework demonstrated substantial improvements in CNR and noise reduction and, aided by a neural network, enables flexible temporal resolution ranging from a few seconds down to 1 second, resulting in images with reduced undersampling penalties. Finally, our reconstruction framework also shows potential benefits for head and neck and brain MRI applications, making it a viable alternative for a range of DCE-MRI exams.
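The locally low-rank constraint at the core of this framework can be illustrated by the patch-wise singular value thresholding step below: each local spatial patch of the dynamic series is reshaped into a Casorati matrix and its singular values are soft-thresholded, enforcing low rank only within the patch. The patch size, threshold, and random test data are illustrative assumptions, and the data-consistency step against the radial k-space data that the full reconstruction alternates with is omitted.

```python
# Patch-wise singular value soft-thresholding, the basic locally low-rank (LLR)
# operation. Patch size, threshold, and data are placeholders for illustration.
import numpy as np

def llr_threshold(dynamic, patch=8, lam=0.5):
    """dynamic: (nx, ny, nt) image series. Returns the patch-wise low-rank projection."""
    nx, ny, nt = dynamic.shape
    out = np.zeros_like(dynamic)
    for x in range(0, nx, patch):
        for y in range(0, ny, patch):
            block = dynamic[x:x + patch, y:y + patch, :]
            casorati = block.reshape(-1, nt)                  # (patch*patch, nt)
            u, s, vt = np.linalg.svd(casorati, full_matrices=False)
            s = np.maximum(s - lam, 0.0)                      # soft-threshold singular values
            out[x:x + patch, y:y + patch, :] = ((u * s) @ vt).reshape(block.shape)
    return out

series = np.random.rand(64, 64, 20)
lowrank_series = llr_threshold(series)
```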
PMCID: 11888544
PMID: 40060040
ISSN: 2693-5015
CID: 5981852