Browsing by Subject "Generative"
Now showing 1 - 11 of 11
Item Open Access
Adaptive diffusion priors for accelerated MRI reconstruction (Elsevier B.V., 2023-07-20)
Güngör, Alper; Dar, Salman Ul Hassan; Öztürk, Şaban; Korkmaz, Yılmaz; Bedel, Hasan Atakan; Elmas, Gökberk; Özbey, Muzaffer; Çukur, Tolga
Deep MRI reconstruction is commonly performed with conditional models that de-alias undersampled acquisitions to recover images consistent with fully-sampled data. Since conditional models are trained with knowledge of the imaging operator, they can show poor generalization across variable operators. Unconditional models instead learn generative image priors decoupled from the operator to improve reliability against domain shifts related to the imaging operator. Recent diffusion models are particularly promising given their high sample fidelity. Nevertheless, inference with a static image prior can perform suboptimally. Here we propose the first adaptive diffusion prior for MRI reconstruction, AdaDiff, to improve performance and reliability against domain shifts. AdaDiff leverages an efficient diffusion prior trained via adversarial mapping over large reverse diffusion steps. A two-phase reconstruction is executed following training: a rapid-diffusion phase that produces an initial reconstruction with the trained prior, and an adaptation phase that further refines the result by updating the prior to minimize data-consistency loss. Demonstrations on multi-contrast brain MRI clearly indicate that AdaDiff outperforms competing conditional and unconditional methods under domain shifts, and achieves superior or on par within-domain performance. © 2023 Elsevier B.V.

Item Open Access
Deep MRI reconstruction with generative vision transformer (Springer, 2021)
Korkmaz, Yılmaz; Yurt, Mahmut; Dar, Salman Ul Hassan; Özbey, Muzaffer; Çukur, Tolga
Supervised training of deep network models for MRI reconstruction requires access to large databases of fully-sampled MRI acquisitions.
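The two-phase scheme described in the AdaDiff abstract above, a prior-driven refinement followed by enforcement of consistency with the measured k-space, can be illustrated with a toy 1-D sketch. Here `prior_step` is a hypothetical smoothing stand-in for the learned diffusion prior, not the actual trained network:

```python
import numpy as np

def data_consistency(x, y, mask):
    """Overwrite sampled k-space locations of the estimate with measured data."""
    k = np.fft.fft(x)
    k[mask] = y[mask]
    return np.fft.ifft(k)

def prior_step(x):
    """Hypothetical stand-in for the learned prior: a mild smoothing denoiser."""
    return 0.5 * x + 0.25 * (np.roll(x, 1) + np.roll(x, -1))

rng = np.random.default_rng(0)
n = 64
x_true = np.sin(np.linspace(0, 4 * np.pi, n))
mask = rng.random(n) < 0.5           # ~2x undersampling pattern
y = np.fft.fft(x_true)               # k-space; only y[mask] is "measured"

x = np.fft.ifft(np.where(mask, y, 0))    # zero-filled initialization
for _ in range(20):                      # alternate prior and consistency steps
    x = data_consistency(prior_step(x), y, mask)
```

Whatever the prior step produces, the final data-consistency projection guarantees that the sampled k-space entries of the estimate match the measurements exactly.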
To alleviate dependency on costly databases, unsupervised learning strategies have received interest. A powerful framework that eliminates the need for training data altogether is the deep image prior (DIP). To this end, DIP inverts randomly-initialized models to infer the network parameters most consistent with the undersampled test data. However, existing DIP methods leverage convolutional backbones, suffering from limited sensitivity to long-range spatial dependencies and thereby poor model invertibility. To address these limitations, here we propose an unsupervised MRI reconstruction method based on a novel generative vision transformer (GVTrans). GVTrans progressively maps low-dimensional noise and latent variables onto MR images via cascaded blocks of cross-attention vision transformers. The cross-attention mechanism between latents and image features serves to enhance representational learning of local and global context. Meanwhile, latent and noise injections at each network layer permit fine control of generated image features, improving model invertibility. Demonstrations are performed for scan-specific reconstruction of brain MRI data at multiple contrasts and acceleration factors. GVTrans yields superior performance to state-of-the-art generative models based on convolutional neural networks (CNNs).

Item Open Access
Deep unsupervised learning for accelerated MRI reconstruction (Bilkent University, 2022-07)
Korkmaz, Yılmaz
Supervised reconstruction models are characteristically trained on matched pairs of undersampled and fully-sampled data to capture an MRI prior, along with supervision regarding the imaging operator to enforce data consistency. To reduce supervision requirements, the recent deep image prior framework instead conjoins untrained MRI priors with the imaging operator during inference.
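The deep image prior idea mentioned above, inverting a randomly-initialized generator so that its output agrees with the undersampled test data, reduces in a deliberately linear toy form to gradient descent on a data-consistency loss. All names and sizes below are illustrative, not taken from the papers:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, d = 32, 16, 8                       # image size, measurements, latent dim
x_true = rng.standard_normal(n)
A = np.eye(n)[rng.choice(n, m, replace=False)]  # toy undersampling operator
y = A @ x_true                                  # undersampled measurements

z = rng.standard_normal(d)
z /= np.linalg.norm(z)                    # fixed random latent code
W = 0.01 * rng.standard_normal((n, d))    # randomly-initialized "generator"

def dc_loss(W):
    r = A @ (W @ z) - y
    return float(r @ r)

loss_before = dc_loss(W)
lr = 0.3
for _ in range(200):                      # invert the model on test data alone
    r = A @ (W @ z) - y                   # data-consistency residual
    W -= lr * 2.0 * np.outer(A.T @ r, z)  # gradient of ||A(Wz) - y||^2 in W
loss_after = dc_loss(W)
```

No training pairs are used: the only supervision is the undersampled measurement `y` and the known imaging operator `A`, which mirrors the scan-specific setting the abstracts describe.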
Yet, canonical convolutional architectures are suboptimal in capturing long-range relationships, and priors based on randomly initialized networks may yield suboptimal performance. To address these limitations, this thesis introduces a novel unsupervised MRI reconstruction method based on zero-Shot Learned Adversarial TransformERs (SLATER). SLATER embodies a deep adversarial network with cross-attention transformers to map noise and latent variables onto coil-combined MR images. During pre-training, this unconditional network learns a high-quality MRI prior in an unsupervised generative modeling task. During inference, a zero-shot reconstruction is then performed by incorporating the imaging operator and optimizing the prior to maximize consistency with the undersampled data. Comprehensive experiments on brain MRI datasets clearly demonstrate the superior performance of SLATER against state-of-the-art unsupervised methods.

Item Open Access
edaGAN: Encoder-Decoder Attention Generative Adversarial Networks for multi-contrast MR image synthesis (Institute of Electrical and Electronics Engineers, 2022-05-16)
Dalmaz, Onat; Sağlam, Baturay; Gönç, Kaan; Çukur, Tolga
Magnetic resonance imaging (MRI) is the preferred modality among radiologists in the clinic due to its superior depiction of tissue contrast. Its ability to capture different contrasts within an exam session allows it to collect additional diagnostic information. However, multi-contrast MRI exams take a long time to scan, so often only a portion of the required contrasts can be acquired. Consequently, synthetic multi-contrast MRI can improve subsequent radiological observations and image analysis tasks like segmentation and detection. Because of this significant potential, multi-contrast MRI synthesis approaches are gaining popularity.
Recently, generative adversarial networks (GAN) have become the de facto choice for synthesis tasks in medical imaging due to their sensitivity to realism and high-frequency structures. In this study, we present a novel generative adversarial approach for multi-contrast MRI synthesis that combines the learning of deep residual convolutional networks with the spatial modulation introduced by an attention gating mechanism to synthesize high-quality MR images. We show the superiority of the proposed approach against various synthesis models on multi-contrast MRI datasets.

Item Open Access
Federated learning of generative image priors for MRI reconstruction (Institute of Electrical and Electronics Engineers Inc., 2022-11-09)
Elmas, Gökberk; Dar, Salman UH.; Korkmaz, Yilmaz; Ceyani, E.; Susam, Burak; Ozbey, Muzaffer; Avestimehr, S.; Çukur, Tolga
Multi-institutional efforts can facilitate training of deep MRI reconstruction models, albeit privacy risks arise during cross-site sharing of imaging data. Federated learning (FL) has recently been introduced to address privacy concerns by enabling distributed training without transfer of imaging data. Existing FL methods employ conditional reconstruction models to map from undersampled to fully-sampled acquisitions via explicit knowledge of the accelerated imaging operator. Since conditional models generalize poorly across different acceleration rates or sampling densities, imaging operators must be fixed between training and testing, and they are typically matched across sites. To improve patient privacy, performance and flexibility in multi-site collaborations, here we introduce Federated learning of Generative IMage Priors (FedGIMP) for MRI reconstruction. FedGIMP leverages a two-stage approach: cross-site learning of a generative MRI prior, and prior adaptation following injection of the imaging operator.
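The cross-site learning stage described above rests on federated aggregation: each site trains the generative prior locally, and only model weights cross institutional boundaries. A minimal sketch of the standard FedAvg aggregation rule; the per-site weight vectors and dataset sizes below are made up for illustration:

```python
import numpy as np

def fed_avg(site_weights, site_sizes):
    """Average per-site model weights, weighted by local dataset size.
    Only weights are exchanged; imaging data never leaves a site."""
    total = float(sum(site_sizes))
    return sum(w * (s / total) for w, s in zip(site_weights, site_sizes))

# Hypothetical flattened generator weights from three sites.
w_sites = [np.full(4, 1.0), np.full(4, 2.0), np.full(4, 4.0)]
sizes = [100, 100, 200]
w_global = fed_avg(w_sites, sizes)   # each round, broadcast back to all sites
```

In a full FL round this aggregate would be sent back to every site for the next round of local training, repeating until the global prior converges.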
The global MRI prior is learned via an unconditional adversarial model that synthesizes high-quality MR images based on latent variables. A novel mapper subnetwork produces site-specific latents to maintain specificity in the prior. During inference, the prior is first combined with subject-specific imaging operators to enable reconstruction, and it is then adapted to individual cross-sections by minimizing a data-consistency loss. Comprehensive experiments on multi-institutional datasets clearly demonstrate enhanced performance of FedGIMP against both centralized and FL methods based on conditional models.

Item Open Access
Federated MRI reconstruction with deep generative models (2023-07)
Elmas, Gökberk
Multi-institutional efforts can facilitate training of deep MRI reconstruction models, albeit privacy risks arise during cross-site sharing of imaging data. Federated learning (FL) has recently been introduced to address privacy concerns by enabling distributed training without transfer of imaging data. Existing FL methods employ conditional reconstruction models to map from undersampled to fully-sampled acquisitions via explicit knowledge of the accelerated imaging operator. Since conditional models generalize poorly across different acceleration rates or sampling densities, imaging operators must be fixed between training and testing, and they are typically matched across sites. To improve patient privacy, performance and flexibility in multi-site collaborations, here we introduce Federated learning of Generative IMage Priors (FedGIMP) for MRI reconstruction. FedGIMP leverages a two-stage approach: cross-site learning of a generative MRI prior, and prior adaptation following injection of the imaging operator. The global MRI prior is learned via an unconditional adversarial model that synthesizes high-quality MR images based on latent variables. A novel mapper subnetwork produces site-specific latents to maintain specificity in the prior.
During inference, the prior is first combined with subject-specific imaging operators to enable reconstruction, and it is then adapted to individual cross-sections by minimizing a data-consistency loss. Comprehensive experiments on multi-institutional datasets clearly demonstrate enhanced performance of FedGIMP against both centralized and FL methods based on conditional models.

Item Open Access
MRI reconstruction with conditional adversarial transformers (Springer Cham, 2022-09-22)
Korkmaz, Yılmaz; Özbey, Muzaffer; Çukur, Tolga; Haq, Nandinee; Johnson, Patricia; Maier, Andreas; Qin, Chen; Würfl, Tobias; Yoo, Jaejun
Deep learning has been successfully adopted for accelerated MRI reconstruction given its exceptional performance in inverse problems. Deep reconstruction models are commonly based on convolutional neural network (CNN) architectures that use compact input-invariant filters to capture static local features in data. While this inductive bias allows efficient model training on relatively small datasets, it also limits sensitivity to long-range context and compromises generalization performance. Transformers are a promising alternative that use broad-scale and input-adaptive filtering to improve contextual sensitivity and generalization. Yet, existing transformer architectures induce quadratic complexity and they often neglect the physical signal model. Here, we introduce a model-based transformer architecture (MoTran) for high-performance MRI reconstruction. MoTran is an adversarial architecture that unrolls transformer and data-consistency blocks in its generator. Cross-attention transformers are leveraged to maintain linear complexity in terms of the feature map size.
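The linear-complexity claim above follows from the shape of cross-attention: a small, fixed set of k latent queries attends over N image-feature tokens, so the score matrix costs O(N·k) rather than the O(N²) of self-attention. A simplified sketch with learned projection matrices omitted and purely illustrative shapes:

```python
import numpy as np

def cross_attention(latents, feats):
    """k latent queries attend over N feature tokens: an O(N*k) score matrix,
    linear in the feature-map size N for a fixed number of latents k."""
    d = latents.shape[1]
    scores = latents @ feats.T / np.sqrt(d)        # (k, N) attention scores
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ feats                         # (k, d) attended features

rng = np.random.default_rng(2)
latents = rng.standard_normal((4, 16))   # k = 4 latent queries
feats = rng.standard_normal((256, 16))   # N = 256 image-feature tokens
out = cross_attention(latents, feats)    # shape (4, 16)
```

Doubling the feature-map size N doubles the work here, whereas token-to-token self-attention over the same features would quadruple it.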
Comprehensive experiments on MRI reconstruction tasks show that the proposed model improves the image quality over state-of-the-art CNN models.

Item Open Access
ResViT: residual vision transformers for multimodal medical image synthesis (Institute of Electrical and Electronics Engineers Inc., 2022-04-18)
Dalmaz, Onat; Yurt, Mahmut; Çukur, Tolga
Generative adversarial models with convolutional neural network (CNN) backbones have recently been established as state-of-the-art in numerous medical image synthesis tasks. However, CNNs are designed to perform local processing with compact filters, and this inductive bias compromises learning of contextual features. Here, we propose a novel generative adversarial approach for medical image synthesis, ResViT, that leverages the contextual sensitivity of vision transformers along with the precision of convolution operators and realism of adversarial learning. ResViT’s generator employs a central bottleneck comprising novel aggregated residual transformer (ART) blocks that synergistically combine residual convolutional and transformer modules. Residual connections in ART blocks promote diversity in captured representations, while a channel compression module distills task-relevant information. A weight sharing strategy is introduced among ART blocks to mitigate computational burden. A unified implementation is introduced to avoid the need to rebuild separate synthesis models for varying source-target modality configurations. Comprehensive demonstrations are performed for synthesizing missing sequences in multi-contrast MRI, and CT images from MRI.
Our results indicate superiority of ResViT against competing CNN- and transformer-based methods in terms of qualitative observations and quantitative metrics.

Item Open Access
Super-resolution diffusion model for accelerated MRI reconstruction (IEEE - Institute of Electrical and Electronics Engineers, 2023-08-28)
Mirza, Muhammad Usama; Çukur, Tolga
MRI reconstruction is the process of generating high-quality images from the raw data obtained during magnetic resonance imaging. Diffusion models, a class of generative models, have become a popular method for MRI reconstruction due to their ability to generate high-quality images. Diffusion models work by adding Gaussian noise to the original image and training a network to remove the noise. Diffusion models can continue to generate high-quality images even when a different type of noise is added to the original image. In this study, we combine a resolution-decreasing operator with the noise scheduling used by standard diffusion models, yielding ResDiff, to perform MRI reconstruction. One of the biggest drawbacks of diffusion models is the time taken to generate images. Down-sampling images to a lower resolution requires fewer diffusion steps, allowing ResDiff to achieve competitive results in far less time.

Item Open Access
Unsupervised medical image translation with adversarial diffusion models (Institute of Electrical and Electronics Engineers, 2023-11-30)
Özbey, Muzaffer; Dalmaz, Onat; Dar, Salman Ul Hassan; Bedel, Hasan Atakan; Özturk, Şaban; Güngör, Alper; Çukur, Tolga
Imputation of missing images via source-to-target modality translation can improve diversity in medical imaging protocols. A pervasive approach for synthesizing target images involves one-shot mapping through generative adversarial networks (GAN). Yet, GAN models that implicitly characterize the image distribution can suffer from limited sample fidelity.
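The mechanism the diffusion abstracts above rely on, corrupting an image with scheduled Gaussian noise and training a network to undo it, has a closed-form forward step. A minimal sketch of sampling x_t from x_0 under a standard linear β-schedule; the denoiser itself is only indicated in a comment:

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(a_bar_t) x0, (1 - a_bar_t) I)."""
    a_bar = np.cumprod(1.0 - betas)[t]
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(a_bar) * x0 + np.sqrt(1.0 - a_bar) * eps, eps

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)   # common linear variance schedule
x0 = np.ones(8)                         # stand-in for a clean image
x_t, eps = forward_diffuse(x0, 500, betas, rng)
# Training would regress eps from (x_t, t), e.g. minimizing ||eps_hat - eps||^2;
# sampling then runs the learned denoiser in reverse, starting from pure noise.
```

Because each reverse step invokes the denoising network, the number of steps dominates sampling time, which is why the approaches above take larger (adversarially trained) steps or operate at reduced resolution.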
Here, we propose a novel method based on adversarial diffusion modeling, SynDiff, for improved performance in medical image translation. To capture a direct correlate of the image distribution, SynDiff leverages a conditional diffusion process that progressively maps noise and source images onto the target image. For fast and accurate image sampling during inference, large diffusion steps are taken with adversarial projections in the reverse diffusion direction. To enable training on unpaired datasets, a cycle-consistent architecture is devised with coupled diffusive and non-diffusive modules that bilaterally translate between two modalities. Extensive assessments are reported on the utility of SynDiff against competing GAN and diffusion models in multi-contrast MRI and MRI-CT translation. Our demonstrations indicate that SynDiff offers quantitatively and qualitatively superior performance against competing baselines.

Item Open Access
Unsupervised MRI reconstruction via zero-shot learned adversarial transformers (Institute of Electrical and Electronics Engineers Inc., 2022-01-27)
Korkmaz, Yilmaz; Dar, Salman U.H.; Yurt, Mahmut; Özbey, Muzaffer; Çukur, Tolga
Supervised reconstruction models are characteristically trained on matched pairs of undersampled and fully-sampled data to capture an MRI prior, along with supervision regarding the imaging operator to enforce data consistency. To reduce supervision requirements, the recent deep image prior framework instead conjoins untrained MRI priors with the imaging operator during inference. Yet, canonical convolutional architectures are suboptimal in capturing long-range relationships, and priors based on randomly initialized networks may yield suboptimal performance. To address these limitations, here we introduce a novel unsupervised MRI reconstruction method based on zero-Shot Learned Adversarial TransformERs (SLATER).
SLATER embodies a deep adversarial network with cross-attention transformers to map noise and latent variables onto coil-combined MR images. During pre-training, this unconditional network learns a high-quality MRI prior in an unsupervised generative modeling task. During inference, a zero-shot reconstruction is then performed by incorporating the imaging operator and optimizing the prior to maximize consistency with the undersampled data. Comprehensive experiments on brain MRI datasets clearly demonstrate the superior performance of SLATER against state-of-the-art unsupervised methods.