Browsing by Author "Özbey, Muzaffer"
Now showing 1 - 12 of 12
Item Open Access
Adaptive diffusion priors for accelerated MRI reconstruction (Elsevier B.V., 2023-07-20)
Güngör, Alper; Dar, Salman Ul Hassan; Öztürk, Şaban; Korkmaz, Yılmaz; Bedel, Hasan Atakan; Elmas, Gökberk; Özbey, Muzaffer; Çukur, Tolga
Deep MRI reconstruction is commonly performed with conditional models that de-alias undersampled acquisitions to recover images consistent with fully-sampled data. Since conditional models are trained with knowledge of the imaging operator, they can show poor generalization across variable operators. Unconditional models instead learn generative image priors decoupled from the operator to improve reliability against domain shifts related to the imaging operator. Recent diffusion models are particularly promising given their high sample fidelity. Nevertheless, inference with a static image prior can perform suboptimally. Here we propose the first adaptive diffusion prior for MRI reconstruction, AdaDiff, to improve performance and reliability against domain shifts. AdaDiff leverages an efficient diffusion prior trained via adversarial mapping over large reverse diffusion steps. A two-phase reconstruction is executed following training: a rapid-diffusion phase that produces an initial reconstruction with the trained prior, and an adaptation phase that further refines the result by updating the prior to minimize data-consistency loss. Demonstrations on multi-contrast brain MRI clearly indicate that AdaDiff outperforms competing conditional and unconditional methods under domain shifts, and achieves superior or on-par within-domain performance. © 2023 Elsevier B.V.

Item Open Access
Çoklu kontrast MRG'de çoklu görüntü geriçatımı [Multi-image reconstruction in multi-contrast MRI] (IEEE, 2021-07-19)
Özbey, Muzaffer; Çukur, Tolga
Acquisition of multi-contrast magnetic resonance images (MRI) plays an important role in clinical diagnosis by increasing the available diagnostic information. However, the long examination times during which the patient must remain still limit multi-contrast MRI acquisition. Scan times can be shortened by collecting undersampled acquisitions and reconstructing the images. Common methods generate a fully-sampled MR image of a given contrast from undersampled MR images of that same contrast. However, the limited information in a single-contrast input image restricts reconstruction performance. Reconstruction performance can therefore be improved by using multi-contrast MRI input data. In this work, a multi-contrast MRI reconstruction method is proposed that simultaneously generates fully-sampled images from undersampled images of multiple contrasts. The proposed method is implemented with generative adversarial networks, which produce highly realistic images by better predicting high-frequency values. The proposed method was tested on a dataset of multi-contrast brain MR images, and quantitative and visual evaluations demonstrated superior performance over an alternative single-contrast reconstruction method.

Item Open Access
Deep MRI reconstruction with generative vision transformer (Springer, 2021)
Korkmaz, Yılmaz; Yurt, Mahmut; Dar, Salman Ul Hassan; Özbey, Muzaffer; Çukur, Tolga
Supervised training of deep network models for MRI reconstruction requires access to large databases of fully-sampled MRI acquisitions. To alleviate dependency on costly databases, unsupervised learning strategies have received interest. A powerful framework that eliminates the need for training data altogether is the deep image prior (DIP). To this end, DIP inverts randomly-initialized models to infer the network parameters most consistent with the undersampled test data. However, existing DIP methods leverage convolutional backbones, suffering from limited sensitivity to long-range spatial dependencies and thereby poor model invertibility. To address these limitations, here we propose an unsupervised MRI reconstruction method based on a novel generative vision transformer (GVTrans).
GVTrans progressively maps low-dimensional noise and latent variables onto MR images via cascaded blocks of cross-attention vision transformers. The cross-attention mechanism between latents and image features serves to enhance representational learning of local and global context. Meanwhile, latent and noise injections at each network layer permit fine control of generated image features, improving model invertibility. Demonstrations are performed for scan-specific reconstruction of brain MRI data at multiple contrasts and acceleration factors. GVTrans yields superior performance to state-of-the-art generative models based on convolutional neural networks (CNNs).

Item Open Access
MRI reconstruction with conditional adversarial transformers (Springer Cham, 2022-09-22)
Korkmaz, Yılmaz; Özbey, Muzaffer; Çukur, Tolga; Haq, Nandinee; Johnson, Patricia; Maier, Andreas; Qin, Chen; Würfl, Tobias; Yoo, Jaejun
Deep learning has been successfully adopted for accelerated MRI reconstruction given its exceptional performance in inverse problems. Deep reconstruction models are commonly based on convolutional neural network (CNN) architectures that use compact input-invariant filters to capture static local features in data. While this inductive bias allows efficient model training on relatively small datasets, it also limits sensitivity to long-range context and compromises generalization performance. Transformers are a promising alternative that use broad-scale and input-adaptive filtering to improve contextual sensitivity and generalization. Yet, existing transformer architectures induce quadratic complexity, and they often neglect the physical signal model. Here, we introduce a model-based transformer architecture (MoTran) for high-performance MRI reconstruction. MoTran is an adversarial architecture that unrolls transformer and data-consistency blocks in its generator. Cross-attention transformers are leveraged to maintain linear complexity in terms of the feature map size.
Comprehensive experiments on MRI reconstruction tasks show that the proposed model improves image quality over state-of-the-art CNN models.

Item Open Access
Parallel-stream fusion of scan-specific and scan-general priors for learning deep MRI reconstruction in low-data regimes (Elsevier, 2023-12)
Dar, Salman Ul Hassan; Öztürk, Şaban; Özbey, Muzaffer; Oğuz, Kader Karlı; Çukur, Tolga
Magnetic resonance imaging (MRI) is an essential diagnostic tool that suffers from prolonged scan times. Reconstruction methods can alleviate this limitation by recovering clinically usable images from accelerated acquisitions. In particular, learning-based methods promise performance leaps by employing deep neural networks as data-driven priors. A powerful approach uses scan-specific (SS) priors that leverage information regarding the underlying physical signal model for reconstruction. SS priors are learned on each individual test scan without the need for a training dataset, albeit they suffer from computationally burdensome inference with nonlinear networks. An alternative approach uses scan-general (SG) priors that instead leverage information regarding the latent features of MRI images for reconstruction. SG priors are frozen at test time for efficiency, albeit they require learning from a large training dataset. Here, we introduce a novel parallel-stream fusion model (PSFNet) that synergistically fuses SS and SG priors for performant MRI reconstruction in low-data regimes, while maintaining inference times competitive with SG methods. PSFNet implements its SG prior based on a nonlinear network, yet it forms its SS prior based on a linear network to maintain efficiency. A pervasive framework for combining multiple priors in MRI reconstruction is algorithmic unrolling, which uses serially alternated projections, causing error propagation under low-data regimes.
To alleviate error propagation, PSFNet combines its SS and SG priors via a novel parallel-stream architecture with learnable fusion parameters. Demonstrations are performed on multi-coil brain MRI for varying amounts of training data. PSFNet outperforms SG methods in low-data regimes, and surpasses SS methods with a few tens of training samples. On average across tasks, PSFNet achieves 3.1 dB higher PSNR, 2.8% higher SSIM, and 0.3× lower RMSE than baselines. Furthermore, in both supervised and unsupervised setups, PSFNet requires an order of magnitude fewer samples than SG methods, and enables an order of magnitude faster inference than SS methods. Thus, the proposed model improves deep MRI reconstruction with elevated learning and computational efficiency.

Item Open Access
Progressively volumetrized deep generative models for data-efficient contextual learning of MR image recovery (Elsevier BV, 2022-05)
Yurt, Mahmut; Özbey, Muzaffer; Dar, Salman U.H.; Tınaz, Berk; Oğuz, Kader K.; Çukur, Tolga
Magnetic resonance imaging (MRI) offers the flexibility to image a given anatomic volume under a multitude of tissue contrasts. Yet, scan time considerations put stringent limits on the quality and diversity of MRI data. The gold-standard approach to alleviate this limitation is to recover high-quality images from data undersampled across various dimensions, most commonly the Fourier domain or contrast sets. A primary distinction among recovery methods is whether the anatomy is processed per volume or per cross-section. Volumetric models offer enhanced capture of global contextual information, but they can suffer from suboptimal learning due to elevated model complexity. Cross-sectional models with lower complexity offer improved learning behavior, yet they ignore contextual information across the longitudinal dimension of the volume.
Here, we introduce a novel progressive volumetrization strategy for generative models (ProvoGAN) that serially decomposes complex volumetric image recovery tasks into successive cross-sectional mappings task-optimally ordered across individual rectilinear dimensions. ProvoGAN effectively captures global context and recovers fine-structural details across all dimensions, while maintaining low model complexity and improved learning behavior. Comprehensive demonstrations on mainstream MRI reconstruction and synthesis tasks show that ProvoGAN yields superior performance to state-of-the-art volumetric and cross-sectional models.

Item Open Access
Semi-supervised learning of MRI synthesis without fully-sampled ground truths (IEEE, 2022-08-16)
Yurt, Mahmut; Dalmaz, Onat; Dar, Salman; Özbey, Muzaffer; Tınaz, Berk; Oğuz, Kader; Çukur, Tolga
Learning-based translation between MRI contrasts involves supervised deep models trained using high-quality source- and target-contrast images derived from fully-sampled acquisitions, which might be difficult to collect under limitations on scan costs or time. To facilitate curation of training sets, here we introduce the first semi-supervised model for MRI contrast translation (ssGAN) that can be trained directly using undersampled k-space data. To enable semi-supervised learning on undersampled data, ssGAN introduces novel multi-coil losses in image, k-space, and adversarial domains. Unlike traditional losses in single-coil synthesis models, the multi-coil losses are selectively enforced on acquired k-space samples. Comprehensive experiments on retrospectively undersampled multi-contrast brain MRI datasets are provided. Our results demonstrate that ssGAN yields on-par performance to a supervised model, while outperforming single-coil models trained on coil-combined magnitude images.
It also outperforms cascaded reconstruction-synthesis models in which a supervised synthesis model is trained following self-supervised reconstruction of undersampled data. Thus, ssGAN holds great promise for improving the feasibility of learning-based multi-contrast MRI synthesis.

Item Open Access
A specificity-preserving generative model for federated MRI translation (Springer Cham, 2022-10-07)
Dalmaz, Onat; Mirza, Usama; Elmas, Gökberk; Özbey, Muzaffer; Dar, Salman U. H.; Çukur, Tolga; Albarqouni, Shadi; Bakas, Spyridon; Bano, Sophia; Cardoso, M. Jorge; Khanal, Bishesh; Landman, Bennett; Li, Xiaoxiao
MRI translation models learn a mapping from an acquired source contrast to an unavailable target contrast. Collaboration between institutes is essential to train translation models that can generalize across diverse datasets. That said, aggregating all imaging data and training a centralized model poses privacy problems. Recently, federated learning (FL) has emerged as a collaboration framework that enables decentralized training to avoid sharing of imaging data. However, FL-trained translation models can deteriorate due to the inherent heterogeneity in the distribution of MRI data. To improve reliability against domain shifts, here we introduce a novel specificity-preserving FL method for MRI contrast translation. The proposed approach is based on an adversarial model that adaptively normalizes the feature maps across the generator based on site-specific latent variables. Comprehensive FL experiments were conducted on multi-site datasets to show the effectiveness of the proposed approach against prior federated methods in MRI contrast translation.

Item Open Access
A transfer-learning approach for accelerated MRI using deep neural networks (Wiley, 2020)
Dar, Salman Ul Hassan; Özbey, Muzaffer; Çatlı, Ahmet Burak; Çukur, Tolga
Purpose: Neural networks have received recent interest for reconstruction of undersampled MR acquisitions.
Ideally, network performance should be optimized by drawing the training and testing data from the same domain. In practice, however, large datasets comprising hundreds of subjects scanned under a common protocol are rare. The goal of this study is to introduce a transfer-learning approach to address the problem of data scarcity in training deep networks for accelerated MRI. Methods: Neural networks were trained on thousands of samples (up to 4,000) from public datasets of either natural images or brain MR images. The networks were then fine-tuned using only tens of brain MR images in a distinct testing domain. Domain-transferred networks were compared to networks trained directly in the testing domain. Network performance was evaluated for varying acceleration factors (4-10), numbers of training samples (0.5-4k), and numbers of fine-tuning samples (0-100). Results: The proposed approach achieves successful domain transfer between MR images acquired with different contrasts (T1- and T2-weighted images) and between natural and MR images (ImageNet and T1- or T2-weighted images). Networks obtained via transfer learning using only tens of images in the testing domain achieve nearly identical performance to networks trained directly in the testing domain using thousands of images (up to 4,000).
Conclusion: The proposed approach might facilitate the use of neural networks for MRI reconstruction without the need for collection of extensive imaging datasets.

Item Restricted
TÜBİTAK Matematik Olimpiyatı tarihi ve Türk eğitim sistemindeki yeri [The history of the TÜBİTAK Mathematics Olympiad and its place in the Turkish education system] (Bilkent University, 2015)
Özütemiz, Hasan Hüseyin; Karahan, İbrahim Ethem; Akça, Muhammed Enes; Özbey, Muzaffer; Şimşek, Yasin

Item Open Access
Unsupervised medical image translation with adversarial diffusion models (Institute of Electrical and Electronics Engineers, 2023-11-30)
Özbey, Muzaffer; Dalmaz, Onat; Dar, Salman Ul Hassan; Bedel, Hasan Atakan; Öztürk, Şaban; Güngör, Alper; Çukur, Tolga
Imputation of missing images via source-to-target modality translation can improve diversity in medical imaging protocols. A pervasive approach for synthesizing target images involves one-shot mapping through generative adversarial networks (GANs). Yet, GAN models that implicitly characterize the image distribution can suffer from limited sample fidelity. Here, we propose a novel method based on adversarial diffusion modeling, SynDiff, for improved performance in medical image translation. To capture a direct correlate of the image distribution, SynDiff leverages a conditional diffusion process that progressively maps noise and source images onto the target image. For fast and accurate image sampling during inference, large diffusion steps are taken with adversarial projections in the reverse diffusion direction. To enable training on unpaired datasets, a cycle-consistent architecture is devised with coupled diffusive and non-diffusive modules that bilaterally translate between two modalities. Extensive assessments are reported on the utility of SynDiff against competing GAN and diffusion models in multi-contrast MRI and MRI-CT translation.
Our demonstrations indicate that SynDiff offers quantitatively and qualitatively superior performance over competing baselines.

Item Open Access
Unsupervised MRI reconstruction via zero-shot learned adversarial transformers (Institute of Electrical and Electronics Engineers Inc., 2022-01-27)
Korkmaz, Yilmaz; Dar, Salman U.H.; Yurt, Mahmut; Özbey, Muzaffer; Çukur, Tolga
Supervised reconstruction models are characteristically trained on matched pairs of undersampled and fully-sampled data to capture an MRI prior, along with supervision regarding the imaging operator to enforce data consistency. To reduce supervision requirements, the recent deep image prior framework instead conjoins untrained MRI priors with the imaging operator during inference. Yet, canonical convolutional architectures are suboptimal in capturing long-range relationships, and priors based on randomly initialized networks may yield suboptimal performance. To address these limitations, here we introduce a novel unsupervised MRI reconstruction method based on zero-Shot Learned Adversarial TransformERs (SLATER). SLATER embodies a deep adversarial network with cross-attention transformers to map noise and latent variables onto coil-combined MR images. During pre-training, this unconditional network learns a high-quality MRI prior in an unsupervised generative modeling task. During inference, a zero-shot reconstruction is then performed by incorporating the imaging operator and optimizing the prior to maximize consistency to undersampled data. Comprehensive experiments on brain MRI datasets clearly demonstrate the superior performance of SLATER against state-of-the-art unsupervised methods.
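Several of the records above (AdaDiff, MoTran, SLATER) rely on enforcing consistency with the acquired undersampled k-space data. As a rough illustration of that shared building block only, the following is a minimal single-coil NumPy sketch of a k-space data-consistency projection; the function name, toy 2D FFT signal model, and sampling mask are illustrative assumptions, not the implementations described in these papers.

```python
import numpy as np

def data_consistency(x, y, mask):
    """Project an image estimate x onto the set of images whose k-space
    matches the acquired measurements y at the sampled locations.

    x    : current image estimate (2D complex or real array)
    y    : undersampled k-space data (2D array, zero where not acquired)
    mask : binary sampling mask (1 = acquired sample)
    """
    k = np.fft.fft2(x)                  # predicted k-space of the estimate
    k_dc = mask * y + (1 - mask) * k    # keep acquired samples, fill the rest
    return np.fft.ifft2(k_dc)           # back to image domain

# Toy example: undersample a random "image", then enforce consistency
rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
mask = (rng.random((8, 8)) < 0.5).astype(float)  # ~2x acceleration
y = mask * np.fft.fft2(img)                      # retrospective measurements

x1 = data_consistency(np.zeros((8, 8)), y, mask)
# Acquired k-space locations of x1 now match the measurements exactly
assert np.allclose(mask * np.fft.fft2(x1), y)
```

In the methods listed above this projection is interleaved with a learned prior (a diffusion step, an unrolled transformer block, or gradient updates on a generative network) rather than applied to a fixed estimate as in this sketch.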