Browsing by Subject "Unsupervised"
Now showing 1 - 6 of 6
Item Open Access
Deep MRI reconstruction with generative vision transformer (Springer, 2021)
Korkmaz, Yılmaz; Yurt, Mahmut; Dar, Salman Ul Hassan; Özbey, Muzaffer; Çukur, Tolga

Supervised training of deep network models for MRI reconstruction requires access to large databases of fully-sampled MRI acquisitions. To alleviate dependency on costly databases, unsupervised learning strategies have received interest. A powerful framework that eliminates the need for training data altogether is the deep image prior (DIP). To this end, DIP inverts randomly initialized models to infer the network parameters most consistent with the undersampled test data. However, existing DIP methods leverage convolutional backbones, which suffer from limited sensitivity to long-range spatial dependencies and thereby poor model invertibility. To address these limitations, here we propose an unsupervised MRI reconstruction method based on a novel generative vision transformer (GVTrans). GVTrans progressively maps low-dimensional noise and latent variables onto MR images via cascaded blocks of cross-attention vision transformers. The cross-attention mechanism between latents and image features serves to enhance representational learning of local and global context. Meanwhile, latent and noise injections at each network layer permit fine control of generated image features, improving model invertibility. Demonstrations are performed for scan-specific reconstruction of brain MRI data at multiple contrasts and acceleration factors. GVTrans yields superior performance to state-of-the-art generative models based on convolutional neural networks (CNNs).

Item Open Access
Deep unsupervised learning for accelerated MRI reconstruction (2022-07)
Korkmaz, Yılmaz

Supervised reconstruction models are characteristically trained on matched pairs of undersampled and fully-sampled data to capture an MRI prior, along with supervision regarding the imaging operator to enforce data consistency.
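The zero-shot, data-consistency-driven inference that the deep image prior framework above relies on can be sketched in a few lines. This is a minimal single-coil illustration that takes gradient steps on pixel values rather than network parameters and omits the learned prior entirely; it is not the authors' implementation:

```python
import numpy as np

def zero_shot_reconstruct(y, mask, n_iters=200, lr=1.0):
    """Recover an image from undersampled k-space y by gradient descent
    on the data-consistency loss ||M F x - y||^2, in the spirit of
    deep-image-prior inference (a real method would optimize network
    parameters, and a prior term would regularize unsampled k-space)."""
    x = np.zeros(mask.shape, dtype=complex)  # image estimate
    for _ in range(n_iters):
        k = np.fft.fft2(x, norm="ortho")                # forward operator F
        resid = mask * (k - y)                          # consistency residual
        x = x - lr * np.fft.ifft2(resid, norm="ortho")  # gradient step (F is unitary)
    return x
```

Because the FFT here is unitary (`norm="ortho"`), the gradient of the loss is simply the inverse transform of the masked residual, so no autodiff framework is needed for this sketch.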
To reduce supervision requirements, the recent deep image prior framework instead conjoins untrained MRI priors with the imaging operator during inference. Yet, canonical convolutional architectures are suboptimal in capturing long-range relationships, and priors based on randomly initialized networks may yield suboptimal performance. To address these limitations, this thesis introduces a novel unsupervised MRI reconstruction method based on zero-Shot Learned Adversarial TransformERs (SLATER). SLATER embodies a deep adversarial network with cross-attention transformers to map noise and latent variables onto coil-combined MR images. During pre-training, this unconditional network learns a high-quality MRI prior in an unsupervised generative modeling task. During inference, a zero-shot reconstruction is then performed by incorporating the imaging operator and optimizing the prior to maximize consistency with the undersampled data. Comprehensive experiments on brain MRI datasets clearly demonstrate the superior performance of SLATER against state-of-the-art unsupervised methods.

Item Open Access
Parallel-stream fusion of scan-specific and scan-general priors for learning deep MRI reconstruction in low-data regimes (Elsevier, 2023-12)
Dar, Salman Ul Hassan; Öztürk, Şaban; Özbey, Muzaffer; Oğuz, Kader Karlı; Çukur, Tolga

Magnetic resonance imaging (MRI) is an essential diagnostic tool that suffers from prolonged scan times. Reconstruction methods can alleviate this limitation by recovering clinically usable images from accelerated acquisitions. In particular, learning-based methods promise performance leaps by employing deep neural networks as data-driven priors. A powerful approach uses scan-specific (SS) priors that leverage information regarding the underlying physical signal model for reconstruction. SS priors are learned on each individual test scan without the need for a training dataset, though they suffer from computationally burdensome inference with nonlinear networks.
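The cross-attention between latents and image features that GVTrans and SLATER build on can be sketched as single-head attention in NumPy. The weight matrices here are illustrative stand-ins for learned parameters, not the networks described in the abstracts:

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(image_feats, latents, Wq, Wk, Wv):
    """Single-head cross-attention: image features form the queries,
    latent variables form the keys and values, so every spatial
    position can draw on global latent context."""
    Q = image_feats @ Wq                 # (n_pix, d)
    K = latents @ Wk                     # (n_lat, d)
    V = latents @ Wv                     # (n_lat, d)
    attn = softmax(Q @ K.T / np.sqrt(Q.shape[-1]), axis=-1)
    return attn @ V                      # (n_pix, d)
```

Each output row is a convex combination of the latent value vectors, which is what lets a single layer mix information across arbitrarily distant image positions.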
An alternative approach uses scan-general (SG) priors that instead leverage information regarding the latent features of MRI images for reconstruction. SG priors are frozen at test time for efficiency, though they require learning from a large training dataset. Here, we introduce a novel parallel-stream fusion model (PSFNet) that synergistically fuses SS and SG priors for performant MRI reconstruction in low-data regimes, while maintaining inference times competitive with SG methods. PSFNet implements its SG prior with a nonlinear network, yet forms its SS prior with a linear network to maintain efficiency. A pervasive framework for combining multiple priors in MRI reconstruction is algorithmic unrolling, which uses serially alternated projections and can cause error propagation in low-data regimes. To alleviate error propagation, PSFNet combines its SS and SG priors via a novel parallel-stream architecture with learnable fusion parameters. Demonstrations are performed on multi-coil brain MRI for varying amounts of training data. PSFNet outperforms SG methods in low-data regimes, and surpasses SS methods with a few tens of training samples. On average across tasks, PSFNet achieves 3.1 dB higher PSNR, 2.8% higher SSIM, and 0.3× lower RMSE than baselines. Furthermore, in both supervised and unsupervised setups, PSFNet requires an order of magnitude fewer training samples than SG methods, and enables an order of magnitude faster inference than SS methods. Thus, the proposed model improves deep MRI reconstruction with elevated learning and computational efficiency.

Item Open Access
Unsupervised anomaly detection via deep metric learning with end-to-end optimization (2021-07)
Yılmaz, Selim Fırat

We investigate unsupervised anomaly detection for high-dimensional data and introduce a deep metric learning (DML) based framework. In particular, we learn a distance metric through a deep neural network.
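The parallel-stream fusion with learnable weights described for PSFNet, as opposed to serially alternated projections, might be sketched as follows. The fusion parameter `alpha` would be learned end-to-end; here it is passed in for illustration, and the two stream outputs are plain arrays rather than network activations:

```python
import numpy as np

def fuse_streams(x_ss, x_sg, alpha):
    """Parallel-stream fusion: blend a scan-specific (SS) and a
    scan-general (SG) reconstruction with a learnable scalar weight,
    so neither stream's errors propagate through the other."""
    w = 1.0 / (1.0 + np.exp(-alpha))   # sigmoid keeps the weight in (0, 1)
    return w * x_ss + (1.0 - w) * x_sg
```

Running both streams in parallel and mixing their outputs is what avoids the error accumulation that serial unrolled projections can suffer when training data is scarce.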
Through this metric, we project the data into a metric space that better separates the anomalies from the normal data and reduces the effect of the curse of dimensionality for high-dimensional data. We present a novel data distillation method through self-supervision to remedy the conventional practice of assuming all data as normal. We also employ the hard mining technique from the DML literature, and show that these components improve the performance of our model. Through an extensive set of experiments on 14 real-world datasets, our method demonstrates significant performance gains over state-of-the-art unsupervised anomaly detection methods, e.g., an absolute improvement between 4.44% and 11.74% on average over the 14 datasets. Furthermore, we share the source code of our method on GitHub to facilitate further research.

Item Open Access
Unsupervised medical image translation with adversarial diffusion models (Institute of Electrical and Electronics Engineers, 2023-11-30)
Özbey, Muzaffer; Dalmaz, Onat; Dar, Salman Ul Hassan; Bedel, Hasan Atakan; Özturk, Şaban; Güngör, Alper; Çukur, Tolga

Imputation of missing images via source-to-target modality translation can improve diversity in medical imaging protocols. A pervasive approach for synthesizing target images involves one-shot mapping through generative adversarial networks (GAN). Yet, GAN models that implicitly characterize the image distribution can suffer from limited sample fidelity. Here, we propose a novel method based on adversarial diffusion modeling, SynDiff, for improved performance in medical image translation. To capture a direct correlate of the image distribution, SynDiff leverages a conditional diffusion process that progressively maps noise and source images onto the target image. For fast and accurate image sampling during inference, large diffusion steps are taken with adversarial projections in the reverse diffusion direction.
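The metric-space anomaly scoring that the DML framework above describes can be illustrated with a fixed linear projection standing in for the learned deep metric network; scoring by distance to the nearest normal training point is one common choice, not necessarily the thesis's exact criterion:

```python
import numpy as np

def anomaly_scores(train_normal, test, W):
    """Project both sets through a (stand-in) learned metric W, then
    score each test point by its distance to the nearest point assumed
    normal. Larger scores indicate more anomalous points."""
    zn = train_normal @ W                # normal data in metric space
    zt = test @ W                        # test data in metric space
    d = np.linalg.norm(zt[:, None, :] - zn[None, :, :], axis=-1)
    return d.min(axis=1)                 # nearest-neighbor distance
```

In the full method, `W` would be replaced by a deep network trained with hard mining and self-supervised data distillation so that anomalies land far from the normal cluster.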
To enable training on unpaired datasets, a cycle-consistent architecture is devised with coupled diffusive and non-diffusive modules that bilaterally translate between two modalities. Extensive assessments are reported on the utility of SynDiff against competing GAN and diffusion models in multi-contrast MRI and MRI-CT translation. Our demonstrations indicate that SynDiff offers quantitatively and qualitatively superior performance over competing baselines.

Item Open Access
Unsupervised MRI reconstruction via zero-shot learned adversarial transformers (Institute of Electrical and Electronics Engineers Inc., 2022-01-27)
Korkmaz, Yilmaz; Dar, Salman U.H.; Yurt, Mahmut; Özbey, Muzaffer; Çukur, Tolga

Supervised reconstruction models are characteristically trained on matched pairs of undersampled and fully-sampled data to capture an MRI prior, along with supervision regarding the imaging operator to enforce data consistency. To reduce supervision requirements, the recent deep image prior framework instead conjoins untrained MRI priors with the imaging operator during inference. Yet, canonical convolutional architectures are suboptimal in capturing long-range relationships, and priors based on randomly initialized networks may yield suboptimal performance. To address these limitations, here we introduce a novel unsupervised MRI reconstruction method based on zero-Shot Learned Adversarial TransformERs (SLATER). SLATER embodies a deep adversarial network with cross-attention transformers to map noise and latent variables onto coil-combined MR images. During pre-training, this unconditional network learns a high-quality MRI prior in an unsupervised generative modeling task. During inference, a zero-shot reconstruction is then performed by incorporating the imaging operator and optimizing the prior to maximize consistency with the undersampled data.
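The cycle-consistency objective that SynDiff's unpaired training relies on can be sketched generically. The translator functions below are placeholders for the coupled diffusive/non-diffusive modules, not SynDiff's actual architecture:

```python
import numpy as np

def cycle_consistency_loss(x_a, x_b, G_ab, G_ba):
    """Unpaired-translation cycle loss: translating A->B->A (and
    B->A->B) should return the input, which supervises the two
    translators without any paired examples. G_ab and G_ba are
    stand-in translator functions."""
    loss_a = np.mean(np.abs(G_ba(G_ab(x_a)) - x_a))   # A -> B -> A
    loss_b = np.mean(np.abs(G_ab(G_ba(x_b)) - x_b))   # B -> A -> B
    return loss_a + loss_b
```

This is the same principle popularized by CycleGAN: the round trip through both modalities substitutes for the missing paired supervision.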
Comprehensive experiments on brain MRI datasets clearly demonstrate the superior performance of SLATER against state-of-the-art unsupervised methods.
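The data-consistency step at the heart of the zero-shot inference described for SLATER can be illustrated as a hard projection in single-coil k-space; this is a simplification of the multi-coil imaging operator the papers use:

```python
import numpy as np

def data_consistency(x_gen, y, mask):
    """Hard data-consistency projection: transform the generated image
    to k-space and replace samples with the acquired measurements y
    wherever the sampling mask indicates data was collected."""
    k = np.fft.fft2(x_gen, norm="ortho")
    k = np.where(mask.astype(bool), y, k)       # keep measured samples
    return np.fft.ifft2(k, norm="ortho")
```

Alternating such projections with updates to the generative prior is one standard way to reconcile a learned image prior with the physics of the undersampled acquisition.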