
dc.contributor.author: Dar, Salman U.H.
dc.contributor.author: Yurt, Mahmut
dc.contributor.author: Shahdloo, Mohammad
dc.contributor.author: Ildız, Muhammed Emrullah
dc.contributor.author: Tınaz, Berk
dc.contributor.author: Çukur, Tolga
dc.date.accessioned: 2021-02-18T07:59:04Z
dc.date.available: 2021-02-18T07:59:04Z
dc.date.issued: 2020
dc.identifier.issn: 1932-4553
dc.identifier.uri: http://hdl.handle.net/11693/75427
dc.description.abstract: Multi-contrast MRI acquisitions of an anatomy enrich the magnitude of information available for diagnosis. Yet, excessive scan times associated with additional contrasts may be a limiting factor. Two mainstream frameworks for enhanced scan efficiency are reconstruction of undersampled acquisitions and synthesis of missing acquisitions. Recently, deep learning methods have enabled significant performance improvements in both frameworks. Yet, reconstruction performance decreases towards higher acceleration factors with diminished sampling density at high spatial frequencies, whereas synthesis can manifest artefactual sensitivity or insensitivity to image features due to the absence of data samples from the target contrast. In this article, we propose a new approach for synergistic recovery of undersampled multi-contrast acquisitions based on conditional generative adversarial networks. The proposed method mitigates the limitations of pure learning-based reconstruction or synthesis by utilizing three priors: a shared high-frequency prior available in the source contrast to preserve high-spatial-frequency details, a low-frequency prior available in the undersampled target contrast to prevent feature leakage/loss, and a perceptual prior to improve recovery of high-level features. Demonstrations on brain MRI datasets from healthy subjects and patients indicate the superior performance of the proposed method compared to pure reconstruction and synthesis methods. The proposed method can help improve the quality and scan efficiency of multi-contrast MRI exams.
dc.description.sponsorship: This work was supported in part by a European Molecular Biology Organization Installation Grant (IG 3028), in part by TUBITAK 1001 Grant 118E256, in part by a TUBA GEBIP fellowship, in part by a BAGEP fellowship awarded to T. Çukur, and in part by Marie Curie Actions Career Integration Grant PCIG13-GA-2013-618101.
dc.language.iso: English
dc.source.title: IEEE Journal on Selected Topics in Signal Processing
dc.relation.isversionof: https://dx.doi.org/10.1109/JSTSP.2020.3001737
dc.subject: Generative adversarial network (GAN)
dc.subject: Synthesis
dc.subject: Reconstruction
dc.subject: Multi contrast
dc.subject: Magnetic resonance imaging (MRI)
dc.subject: Prior
dc.title: Prior-Guided image reconstruction for accelerated multi-contrast MRI via generative adversarial networks
dc.type: Article
dc.department: Department of Electrical and Electronics Engineering
dc.department: National Magnetic Resonance Research Center (UMRAM)
dc.citation.spage: 1072
dc.citation.epage: 1087
dc.citation.volumeNumber: 14
dc.citation.issueNumber: 6
dc.identifier.doi: 10.1109/JSTSP.2020.3001737
dc.publisher: IEEE
dc.contributor.bilkentauthor: Dar, Salman U.H.
dc.contributor.bilkentauthor: Yurt, Mahmut
dc.contributor.bilkentauthor: Shahdloo, Mohammad
dc.contributor.bilkentauthor: Ildız, Muhammed Emrullah
dc.contributor.bilkentauthor: Tınaz, Berk
dc.contributor.bilkentauthor: Çukur, Tolga

