Browsing by Subject "Generative adversarial network (GAN)"
Now showing 1 - 2 of 2
Item Open Access
Generalizable deep MRI reconstruction with cross-site data synthesis (IEEE, 2024-06-23)
Nezhad, Valiyeh Ansarian; Elmas, Gökberk; Arslan, Fuat; Kabas, Bilal; Çukur, Tolga
Deep learning techniques have enabled leaps in MRI reconstruction from undersampled acquisitions. While they yield high performance when tested on data from the sites where the training data originates, they suffer performance losses when tested on data from other sites. In this work, we propose a novel learning technique to improve generalization in deep MRI reconstruction. The proposed method employs cross-site data synthesis to benefit from multi-site data without introducing patient privacy risks. First, MRI priors are captured via generative adversarial models trained independently at each site. These priors are shared across sites and then used to synthesize data from multiple sites. Afterwards, MRI reconstruction models are trained on these synthetic data. Experiments indicate that the proposed method attains higher generalization than single-site models, and higher site-specific performance than site-average models.

Item Open Access
Prior-guided image reconstruction for accelerated multi-contrast MRI via generative adversarial networks (IEEE, 2020)
Dar, Salman U. H.; Yurt, Mahmut; Shahdloo, Mohammad; Ildız, Muhammed Emrullah; Tınaz, Berk; Çukur, Tolga
Multi-contrast MRI acquisitions of an anatomy enrich the information available for diagnosis. Yet, the excessive scan times associated with additional contrasts can be a limiting factor. Two mainstream frameworks for enhanced scan efficiency are reconstruction of undersampled acquisitions and synthesis of missing acquisitions. Recently, deep learning methods have enabled significant performance improvements in both frameworks. Yet, reconstruction performance decreases towards higher acceleration factors with diminished sampling density at high spatial frequencies, whereas synthesis can manifest artefactual sensitivity or insensitivity to image features due to the absence of data samples from the target contrast. In this article, we propose a new approach for synergistic recovery of undersampled multi-contrast acquisitions based on conditional generative adversarial networks. The proposed method mitigates the limitations of pure learning-based reconstruction or synthesis by utilizing three priors: a shared high-frequency prior available in the source contrast to preserve high-spatial-frequency details, a low-frequency prior available in the undersampled target contrast to prevent feature leakage/loss, and a perceptual prior to improve recovery of high-level features. Demonstrations on brain MRI datasets from healthy subjects and patients indicate the superior performance of the proposed method compared to pure reconstruction and synthesis methods. The proposed method can help improve the quality and scan efficiency of multi-contrast MRI exams.
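The first item above outlines a pipeline in which generative priors are trained independently at each site, shared across sites, used to synthesize a pooled multi-site dataset, and a reconstruction model is then trained on the synthetic data. The following is a minimal, hypothetical PyTorch sketch of that pipeline, not the authors' implementation; the module names (SitePriorGAN, ReconNet), the toy Cartesian undersampling mask, and all training details are illustrative assumptions.

```python
# Hypothetical sketch of cross-site synthesis: per-site GAN priors -> shared
# generators -> pooled synthetic data -> reconstruction model training.
import torch
import torch.nn as nn

class SitePriorGAN(nn.Module):
    """Toy unconditional generator capturing one site's MRI prior (illustrative only)."""
    def __init__(self, latent_dim=64, img_size=64):
        super().__init__()
        self.latent_dim, self.img_size = latent_dim, img_size
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, img_size * img_size), nn.Tanh(),
        )

    def sample(self, n):
        z = torch.randn(n, self.latent_dim)
        return self.net(z).view(n, 1, self.img_size, self.img_size)

class ReconNet(nn.Module):
    """Toy image-domain reconstruction network (undersampled -> dealiased)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def undersample(img, acceleration=4):
    """Keep every `acceleration`-th k-space line (toy Cartesian mask)."""
    k = torch.fft.fftshift(torch.fft.fft2(img))
    mask = torch.zeros_like(k.real)
    mask[..., ::acceleration, :] = 1.0
    return torch.fft.ifft2(torch.fft.ifftshift(k * mask.to(k.dtype))).abs()

# Steps 1-2: priors trained independently at each site, then shared (GAN training omitted).
site_priors = [SitePriorGAN() for _ in range(3)]

# Step 3: synthesize a pooled multi-site training set from the shared priors.
with torch.no_grad():
    synthetic = torch.cat([g.sample(8) for g in site_priors], dim=0)

# Step 4: train a reconstruction model on (undersampled, reference) pairs of synthetic data.
recon = ReconNet()
opt = torch.optim.Adam(recon.parameters(), lr=1e-4)
for _ in range(10):  # toy training loop
    inputs = undersample(synthetic)
    loss = nn.functional.mse_loss(recon(inputs), synthetic)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Since only the trained generators (and the data synthesized from them) leave each site, no patient images are exchanged, which is the privacy argument made in the abstract.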
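The second item conditions a generative model on a fully-sampled source contrast (high-frequency prior) and a zero-filled undersampled target contrast (low-frequency prior), and adds a perceptual term to the objective. Below is a minimal, hypothetical PyTorch sketch of how such a three-prior conditional GAN objective could be composed; the architectures, the stand-in feature extractor PerceptualFeatures, and the loss weights lambda_pix and lambda_perc are assumptions, not the published design.

```python
# Hypothetical sketch: conditional generator on [source, zero-filled target],
# trained with adversarial + pixel-wise + perceptual losses.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Maps [source contrast, zero-filled target] -> recovered target contrast."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, source, zerofilled_target):
        return self.net(torch.cat([source, zerofilled_target], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class PerceptualFeatures(nn.Module):
    """Stand-in feature extractor for the perceptual prior (e.g. a pretrained CNN)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())

    def forward(self, x):
        return self.net(x)

def generator_loss(G, D, P, source, zerofilled_target, target,
                   lambda_pix=100.0, lambda_perc=10.0):
    """Composite objective: adversarial + pixel-wise fidelity + perceptual terms."""
    fake = G(source, zerofilled_target)
    logits = D(fake)
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    pix = F.l1_loss(fake, target)        # anchors content to the acquired target data
    perc = F.l1_loss(P(fake), P(target)) # perceptual prior on high-level features
    return adv + lambda_pix * pix + lambda_perc * perc
```

In this reading, the adversarial and source-conditioning terms favor sharp high-spatial-frequency detail, while the pixel-wise term ties the output to the low-frequency content actually acquired in the undersampled target, which is how the abstract motivates avoiding feature leakage or loss.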