Browsing by Author "Dar, Salman Ul Hassan"
Now showing 1 - 10 of 10
Item (Open Access): Adaptive diffusion priors for accelerated MRI reconstruction (Elsevier B.V., 2023-07-20)
Authors: Güngör, Alper; Dar, Salman Ul Hassan; Öztürk, Şaban; Korkmaz, Yılmaz; Bedel, Hasan Atakan; Elmas, Gökberk; Özbey, Muzaffer; Çukur, Tolga
Deep MRI reconstruction is commonly performed with conditional models that de-alias undersampled acquisitions to recover images consistent with fully-sampled data. Since conditional models are trained with knowledge of the imaging operator, they can show poor generalization across variable operators. Unconditional models instead learn generative image priors decoupled from the operator to improve reliability against domain shifts related to the imaging operator. Recent diffusion models are particularly promising given their high sample fidelity. Nevertheless, inference with a static image prior can perform suboptimally. Here we propose the first adaptive diffusion prior for MRI reconstruction, AdaDiff, to improve performance and reliability against domain shifts. AdaDiff leverages an efficient diffusion prior trained via adversarial mapping over large reverse diffusion steps. A two-phase reconstruction is executed following training: a rapid-diffusion phase that produces an initial reconstruction with the trained prior, and an adaptation phase that further refines the result by updating the prior to minimize data-consistency loss. Demonstrations on multi-contrast brain MRI clearly indicate that AdaDiff outperforms competing conditional and unconditional methods under domain shifts, and achieves superior or on-par within-domain performance. © 2023 Elsevier B.V.

Item (Open Access): Category-selective top-down modulation in the fusiform face area of the human brain during visual search (IEEE, 2017)
Authors: Dar, Salman Ul Hassan; Çukur, Tolga
Several regions in the ventral-temporal cortex of the human brain are thought to have representations of specific categories of objects.
Furthermore, a distributed network of frontal and parietal brain regions is implicated in attentional control. It is assumed that during visual search, attention-control regions send top-down signals to the target category-selective areas to bias processing in favour of the attended object category. However, little is known about such causal interactions during naturalistic visual search. Here we assess the influence of attention-control brain regions on a well-known face-selective area, the fusiform face area (FFA), during natural visual search using Granger causality analysis. Our results indicate that attending to humans enhances the influence of attention-control regions on the fusiform face area.

Item (Open Access): Deep learning for accelerated MR imaging (Bilkent University, 2021-02)
Author: Dar, Salman Ul Hassan
Magnetic resonance imaging is a non-invasive imaging modality that enables multi-contrast acquisition of an underlying anatomy, thereby providing a wealth of information for diagnosis. However, prolonged scan duration may prohibit its practical use. Two mainstream frameworks for accelerating MR image acquisitions are reconstruction and synthesis. In reconstruction, acquisitions are accelerated by undersampling in k-space, followed by reconstruction algorithms. Lately, deep neural networks have offered significant improvements over traditional methods in MR image reconstruction. However, deep neural networks rely heavily on the availability of large datasets, which might not be readily available for some applications. Furthermore, a general caveat of the reconstruction framework is that performance naturally starts degrading towards higher acceleration factors, where fewer data samples are acquired. In the alternative synthesis framework, acquisitions are accelerated by acquiring a subset of desired contrasts, and recovering the missing ones from the acquired ones.
Current synthesis methods are primarily based on deep neural networks trained to minimize mean square or mean absolute loss functions. This can bring about loss of intermediate-to-high spatial frequency content in the recovered images. Furthermore, synthesis performance generally relies on similarity in relaxation parameters between source and target contrasts, and large dissimilarities can lead to artifactual synthesis or loss of features. Here, we tackle issues associated with both reconstruction and synthesis approaches. In reconstruction, the data scarcity issue is addressed by pre-training a network on large, readily available datasets, and fine-tuning on just a few samples from target datasets. In synthesis, the loss of intermediate-to-high spatial frequency content is catered for by adding adversarial and high-level perceptual losses on top of the traditional mean absolute error. Finally, a joint reconstruction and synthesis approach is proposed to mitigate the issues associated with both frameworks in general. Demonstrations on brain MRI datasets of healthy subjects and patients indicate superior performance of the proposed techniques over the current state-of-the-art ones.

Item (Open Access): Deep MRI reconstruction with generative vision transformer (Springer, 2021)
Authors: Korkmaz, Yılmaz; Yurt, Mahmut; Dar, Salman Ul Hassan; Özbey, Muzaffer; Çukur, Tolga
Supervised training of deep network models for MRI reconstruction requires access to large databases of fully-sampled MRI acquisitions. To alleviate dependency on costly databases, unsupervised learning strategies have received interest. A powerful framework that eliminates the need for training data altogether is the deep image prior (DIP). To do so, DIP inverts randomly-initialized models to infer the network parameters most consistent with the undersampled test data.
However, existing DIP methods leverage convolutional backbones that suffer from limited sensitivity to long-range spatial dependencies, and thereby poor model invertibility. To address these limitations, here we propose an unsupervised MRI reconstruction method based on a novel generative vision transformer (GVTrans). GVTrans progressively maps low-dimensional noise and latent variables onto MR images via cascaded blocks of cross-attention vision transformers. The cross-attention mechanism between latents and image features serves to enhance representational learning of local and global context. Meanwhile, latent and noise injections at each network layer permit fine control of generated image features, improving model invertibility. Demonstrations are performed for scan-specific reconstruction of brain MRI data at multiple contrasts and acceleration factors. GVTrans yields superior performance to state-of-the-art generative models based on convolutional neural networks (CNNs).

Item (Open Access): Factorized sensitivity estimation for artifact suppression in phase-cycled bSSFP MRI (Wiley, 2020)
Authors: Bıyık, Erdem; Keskin, Kübra; Dar, Salman Ul Hassan; Koç, Aykut; Çukur, Tolga
Objective: Balanced steady-state free precession (bSSFP) imaging suffers from banding artifacts in the presence of magnetic field inhomogeneity. The purpose of this study is to identify an efficient strategy to reconstruct banding-free bSSFP images from multi-coil multi-acquisition datasets. Method: Previous techniques either assume that a naïve coil combination is performed a priori, resulting in suboptimal artifact suppression, or that artifact suppression is performed for each coil separately at the expense of significant computational burden. Here we propose a tailored method that factorizes the estimation of coil and bSSFP sensitivity profiles for improved accuracy and/or speed.
Results: In vivo experiments show that the proposed method outperforms naïve coil combination and coil-by-coil processing in terms of both reconstruction quality and time. Conclusion: The proposed method enables computationally efficient artifact suppression for phase-cycled bSSFP imaging with modern coil arrays. Rapid imaging applications can efficiently benefit from the improved robustness of bSSFP imaging against field inhomogeneity.

Item (Open Access): Multi-contrast MRI synthesis with channel-exchanging-network (IEEE, 2022-08-29)
Authors: Dalmaz, Onat; Aytekin, İdil; Dar, Salman Ul Hassan; Erdem, Aykut; Erdem, Erkut; Çukur, Tolga
Magnetic resonance imaging (MRI) is used in many diagnostic applications as it offers high soft-tissue contrast and is a non-invasive medical imaging method. MR signal levels differ according to the parameters T1, T2 and PD, which vary with the chemical structure of the tissues. However, long scan times may limit the acquisition of multiple contrasts, or the acquired multi-contrast images may be noisy. To overcome this limitation of MRI, multi-contrast synthesis can be utilized. In this paper, we propose a deep learning method based on Channel-Exchanging-Network (CEN) for multi-contrast image synthesis. Demonstrations are provided on the IXI dataset. The proposed CEN-based model is compared against alternative methods based on CNNs and GANs. Our results show that the proposed model achieves superior performance to the competing methods.

Item (Open Access): Parallel-stream fusion of scan-specific and scan-general priors for learning deep MRI reconstruction in low-data regimes (Elsevier, 2023-12)
Authors: Dar, Salman Ul Hassan; Öztürk, Şaban; Özbey, Muzaffer; Oğuz, Kader Karlı; Çukur, Tolga
Magnetic resonance imaging (MRI) is an essential diagnostic tool that suffers from prolonged scan times. Reconstruction methods can alleviate this limitation by recovering clinically usable images from accelerated acquisitions.
In particular, learning-based methods promise performance leaps by employing deep neural networks as data-driven priors. A powerful approach uses scan-specific (SS) priors that leverage information regarding the underlying physical signal model for reconstruction. SS priors are learned on each individual test scan without the need for a training dataset, albeit they suffer from computationally burdensome inference with nonlinear networks. An alternative approach uses scan-general (SG) priors that instead leverage information regarding the latent features of MRI images for reconstruction. SG priors are frozen at test time for efficiency, albeit they require learning from a large training dataset. Here, we introduce a novel parallel-stream fusion model (PSFNet) that synergistically fuses SS and SG priors for performant MRI reconstruction in low-data regimes, while maintaining inference times competitive with SG methods. PSFNet implements its SG prior based on a nonlinear network, yet it forms its SS prior based on a linear network to maintain efficiency. A pervasive framework for combining multiple priors in MRI reconstruction is algorithmic unrolling, which uses serially alternated projections and thus causes error propagation under low-data regimes. To alleviate error propagation, PSFNet combines its SS and SG priors via a novel parallel-stream architecture with learnable fusion parameters. Demonstrations are performed on multi-coil brain MRI for varying amounts of training data. PSFNet outperforms SG methods in low-data regimes, and surpasses SS methods with a few tens of training samples. On average across tasks, PSFNet achieves 3.1 dB higher PSNR, 2.8% higher SSIM, and 0.3× lower RMSE than baselines. Furthermore, in both supervised and unsupervised setups, PSFNet requires an order of magnitude fewer samples than SG methods, and enables an order of magnitude faster inference than SS methods.
Thus, the proposed model improves deep MRI reconstruction with elevated learning and computational efficiency.

Item (Open Access): Spatially informed voxelwise modeling for naturalistic fMRI experiments (Elsevier, 2019)
Authors: Çelik, Emin; Dar, Salman Ul Hassan; Yılmaz, Özgür; Keleş, Ümit; Çukur, Tolga
Voxelwise modeling (VM) is a powerful framework to predict single-voxel responses evoked by a rich set of stimulus features present in complex natural stimuli. However, because VM disregards correlations across neighboring voxels, its sensitivity in detecting functional selectivity can be diminished in the presence of high levels of measurement noise. Here, we introduce spatially-informed voxelwise modeling (SPIN-VM) to take advantage of response correlations in spatial neighborhoods of voxels. To optimally utilize shared information, SPIN-VM performs regularization across spatial neighborhoods in addition to model features, while still generating single-voxel response predictions. We demonstrated the performance of SPIN-VM on a rich dataset from a natural vision experiment. Compared to VM, SPIN-VM yields higher prediction accuracies and better captures locally congruent information representations across cortex. These results suggest that SPIN-VM offers improved performance in predicting single-voxel responses and recovering coherent information representations.

Item (Open Access): A transfer-learning approach for accelerated MRI using deep neural networks (Wiley, 2020)
Authors: Dar, Salman Ul Hassan; Özbey, Muzaffer; Çatlı, Ahmet Burak; Çukur, Tolga
Purpose: Neural networks have received recent interest for reconstruction of undersampled MR acquisitions. Ideally, network performance should be optimized by drawing the training and testing data from the same domain. In practice, however, large datasets comprising hundreds of subjects scanned under a common protocol are rare.
The goal of this study is to introduce a transfer-learning approach to address the problem of data scarcity in training deep networks for accelerated MRI. Methods: Neural networks were trained on thousands (up to 4,000) of samples from public datasets of either natural images or brain MR images. The networks were then fine-tuned using only tens of brain MR images in a distinct testing domain. Domain-transferred networks were compared to networks trained directly in the testing domain. Network performance was evaluated for varying acceleration factors (4-10), number of training samples (0.5-4k), and number of fine-tuning samples (0-100). Results: The proposed approach achieves successful domain transfer between MR images acquired with different contrasts (T1- and T2-weighted images) and between natural and MR images (ImageNet and T1- or T2-weighted images). Networks obtained via transfer learning using only tens of images in the testing domain achieve nearly identical performance to networks trained directly in the testing domain using thousands (up to 4,000) of images. Conclusion: The proposed approach might facilitate the use of neural networks for MRI reconstruction without the need for collection of extensive imaging datasets.

Item (Open Access): Unsupervised medical image translation with adversarial diffusion models (Institute of Electrical and Electronics Engineers, 2023-11-30)
Authors: Özbey, Muzaffer; Dalmaz, Onat; Dar, Salman Ul Hassan; Bedel, Hasan Atakan; Öztürk, Şaban; Güngör, Alper; Çukur, Tolga
Imputation of missing images via source-to-target modality translation can improve diversity in medical imaging protocols. A pervasive approach for synthesizing target images involves one-shot mapping through generative adversarial networks (GAN). Yet, GAN models that implicitly characterize the image distribution can suffer from limited sample fidelity.
Here, we propose a novel method based on adversarial diffusion modeling, SynDiff, for improved performance in medical image translation. To capture a direct correlate of the image distribution, SynDiff leverages a conditional diffusion process that progressively maps noise and source images onto the target image. For fast and accurate image sampling during inference, large diffusion steps are taken with adversarial projections in the reverse diffusion direction. To enable training on unpaired datasets, a cycle-consistent architecture is devised with coupled diffusive and non-diffusive modules that bilaterally translate between two modalities. Extensive assessments are reported on the utility of SynDiff against competing GAN and diffusion models in multi-contrast MRI and MRI-CT translation. Our demonstrations indicate that SynDiff offers quantitatively and qualitatively superior performance against competing baselines.
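Several of the reconstruction works listed above (AdaDiff's adaptation phase, PSFNet, and the transfer-learning approach) enforce consistency between the reconstructed image and the acquired undersampled k-space data. As a rough illustration of that shared idea only, here is a minimal sketch of a data-consistency projection under a simplified single-coil Cartesian sampling model; the function and variable names are illustrative and not taken from any of the papers.

```python
import numpy as np

def data_consistency(image, kspace_acquired, mask):
    """Project an image estimate onto the set of images whose k-space
    agrees with the acquired samples (toy single-coil Cartesian model).

    image           : current image estimate (2D complex array)
    kspace_acquired : undersampled k-space measurements
    mask            : boolean array, True where k-space was sampled
    """
    kspace = np.fft.fft2(image)
    # Overwrite estimated k-space values with measured ones at sampled locations.
    kspace[mask] = kspace_acquired[mask]
    return np.fft.ifft2(kspace)
```

With a fully sampled mask the projection simply returns the measured image; with a partial mask it keeps the estimate's k-space at unsampled locations while restoring fidelity to the measurements elsewhere. Practical reconstruction pipelines alternate such projections with a learned prior update.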