BUIR Home › Scholarly Publications › Faculty of Engineering › Department of Electrical and Electronics Engineering

      Bottleneck sharing generative adversarial networks for unified multi-contrast MR image synthesis

Author(s): Dalmaz, Onat; Sağlam, Baturay; Gönç, Kaan; Dar, Salman Uh.; Çukur, Tolga
Date: 2022-08-29
Source Title: Signal Processing and Communications Applications Conference (SIU)
Print ISSN: 2165-0608
Publisher: IEEE
Pages: [1]-[4]
Language: English
Type: Conference Paper
      Abstract
      Magnetic Resonance Imaging (MRI) is the favored modality in multi-modal medical imaging due to its safety and its ability to acquire various contrasts of the anatomy. The availability of multiple contrasts accumulates diagnostic information and can therefore improve radiological observations. In some scenarios, acquiring all contrasts may be impractical due to patient reluctance and the increased cost of additional scans. In such cases, synthesizing missing MRI pulse sequences from the acquired ones can prove useful for further analyses. Recently introduced Generative Adversarial Network (GAN) models offer state-of-the-art performance in learning MRI synthesis. However, these generative approaches learn a distinct model for each conditional contrast-to-contrast mapping. Learning a separate synthesis model for each individual task increases time and memory demands due to the larger number of parameters and longer training. To mitigate this issue, we propose a novel unified synthesis model, the bottleneck-sharing GAN (bsGAN), to consolidate the learning of synthesis tasks in multi-contrast MRI. bsGAN comprises distinct convolutional encoders and decoders for each contrast to increase synthesis performance. A central information bottleneck is employed to distill hidden representations. The bottleneck, based on residual convolutional layers, is shared across contrasts to avoid introducing many learnable parameters. Qualitative and quantitative comparisons on a multi-contrast brain MRI dataset show the effectiveness of the proposed method against existing unified synthesis methods.
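      The parameter savings claimed in the abstract can be illustrated with a rough count: per-contrast encoders and decoders are cheap relative to the deep residual bottleneck, so sharing the bottleneck across contrasts avoids duplicating its parameters for every synthesis task. The sketch below is a minimal back-of-the-envelope comparison; the layer widths, kernel size, and number of residual blocks are illustrative assumptions, not the paper's actual configuration.

```python
# Hypothetical parameter-count sketch for a bsGAN-style unified generator
# versus one dedicated GAN per contrast-to-contrast mapping.
# All layer sizes are assumed for illustration only.

def conv_params(c_in, c_out, k=3):
    """Weights + biases of one k x k convolution."""
    return c_in * c_out * k * k + c_out

def encoder_params():
    # assumed downsampling path: 1 -> 64 -> 128 channels
    return conv_params(1, 64) + conv_params(64, 128)

def decoder_params():
    # assumed mirror path: 128 -> 64 -> 1 channels
    return conv_params(128, 64) + conv_params(64, 1)

def bottleneck_params(n_res=9, c=128):
    # residual blocks: two channel-preserving convs each
    return n_res * 2 * conv_params(c, c)

def unified_bsgan(n_contrasts):
    # per-contrast encoders/decoders + ONE shared residual bottleneck
    return n_contrasts * (encoder_params() + decoder_params()) + bottleneck_params()

def per_task_gans(n_contrasts):
    # one full encoder-bottleneck-decoder model per source->target pair
    n_pairs = n_contrasts * (n_contrasts - 1)
    return n_pairs * (encoder_params() + bottleneck_params() + decoder_params())

for n in (2, 3, 4):
    print(f"{n} contrasts: unified={unified_bsgan(n):,}  per-task={per_task_gans(n):,}")
```

Under these assumptions the gap widens quickly with the number of contrasts, since the per-task approach replicates the bottleneck for every ordered source-target pair while the unified model keeps a single copy.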
      Keywords: Unified; MRI synthesis; Bottleneck; Parameter-sharing; Generative adversarial networks
      Permalink: http://hdl.handle.net/11693/111302
      Published Version (Please cite this version): https://www.doi.org/10.1109/SIU55565.2022.9864880
      Collections
      • Department of Computer Engineering
      • Department of Electrical and Electronics Engineering
      • National Magnetic Resonance Research Center (UMRAM)
      © Bilkent University - Library IT