Browsing by Author "Dalmaz, Onat"
Now showing 1 - 19 of 19
Item Open Access
BolT: Fused window transformers for fMRI time series analysis (Elsevier B.V., 2023-05-18)
Bedel, Hasan Atakan; Şıvgın, Irmak; Dalmaz, Onat; Ul Hassan Dar, Salman; Çukur, Tolga
Deep-learning models have enabled performance leaps in analysis of high-dimensional functional MRI (fMRI) data. Yet, many previous methods are suboptimally sensitive to contextual representations across diverse time scales. Here, we present BolT, a blood-oxygen-level-dependent transformer model, for analyzing multi-variate fMRI time series. BolT leverages a cascade of transformer encoders equipped with a novel fused window attention mechanism. Encoding is performed on temporally-overlapped windows within the time series to capture local representations. To integrate information temporally, cross-window attention is computed between base tokens in each window and fringe tokens from neighboring windows. To gradually transition from local to global representations, the extent of window overlap and thereby the number of fringe tokens are progressively increased across the cascade. Finally, a novel cross-window regularization is employed to align high-level classification features across the time series. Comprehensive experiments on large-scale public datasets demonstrate the superior performance of BolT against state-of-the-art methods. Furthermore, explanatory analyses to identify landmark time points and regions that contribute most significantly to model decisions corroborate prominent neuroscientific findings in the literature.

Item Open Access
Bottleneck sharing generative adversarial networks for unified multi-contrast MR image synthesis (IEEE, 2022-08-29)
Dalmaz, Onat; Sağlam, Baturay; Gönç, Kaan; Dar, Salman Uh.; Çukur, Tolga
Magnetic Resonance Imaging (MRI) is the favored modality in multi-modal medical imaging due to its safety and ability to acquire various different contrasts of the anatomy. Availability of multiple contrasts accumulates diagnostic information and can therefore improve radiological observations. In some scenarios, acquiring all contrasts might be challenging due to reluctant patients and the increased costs associated with additional scans. That said, synthetically obtaining missing MRI pulse sequences from the acquired sequences might prove useful for further analyses. Recently introduced Generative Adversarial Network (GAN) models offer state-of-the-art performance in learning MRI synthesis. However, the proposed generative approaches learn a distinct model for each conditional contrast-to-contrast mapping. Learning a distinct synthesis model for each individual task increases the time and memory demands due to the increased number of parameters and training time. To mitigate this issue, we propose a novel unified synthesis model, bottleneck sharing GAN (bsGAN), to consolidate learning of synthesis tasks in multi-contrast MRI. bsGAN comprises distinct convolutional encoders and decoders for each contrast to increase synthesis performance. A central information bottleneck is employed to distill hidden representations. The bottleneck, based on residual convolutional layers, is shared across contrasts to avoid introducing many learnable parameters. Qualitative and quantitative comparisons on a multi-contrast brain MRI dataset show the effectiveness of the proposed method against existing unified synthesis methods.
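A minimal PyTorch sketch of the shared-bottleneck design described in the bsGAN abstract above: contrast-specific encoders and decoders surround a single residual bottleneck reused across all synthesis directions. Channel counts, layer depths, and the contrast names are illustrative assumptions rather than the published bsGAN configuration.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch))
    def forward(self, x):
        return x + self.body(x)

class SharedBottleneckGenerator(nn.Module):
    """Contrast-specific encoders/decoders around one shared residual bottleneck."""
    def __init__(self, contrasts=("T1", "T2", "PD"), ch=64, n_res=6):
        super().__init__()
        self.encoders = nn.ModuleDict({c: nn.Sequential(
            nn.Conv2d(1, ch, 7, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True))
            for c in contrasts})
        # Single bottleneck shared by every source->target mapping to limit parameters.
        self.bottleneck = nn.Sequential(*[ResBlock(ch * 2) for _ in range(n_res)])
        self.decoders = nn.ModuleDict({c: nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(inplace=True), nn.Conv2d(ch, 1, 7, padding=3), nn.Tanh())
            for c in contrasts})

    def forward(self, x, source, target):
        return self.decoders[target](self.bottleneck(self.encoders[source](x)))

gen = SharedBottleneckGenerator()
t2 = gen(torch.randn(1, 1, 128, 128), source="T1", target="T2")  # T1 -> T2 synthesis
print(t2.shape)  # torch.Size([1, 1, 128, 128])
```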
Item Open Access
COVID-19 Detection from respiratory sounds with hierarchical spectrogram transformers (Institute of Electrical and Electronics Engineers, 2023-12-05)
Aytekin, Ayçe İdil; Dalmaz, Onat; Gönç, Kaan; Ankishan, H.; Sarıtaş, Emine Ülkü; Bağcı, U.; Çelik, H.; Çukur, Tolga
Monitoring of prevalent airborne diseases such as COVID-19 characteristically involves respiratory assessments. While auscultation is a mainstream method for preliminary screening of disease symptoms, its utility is hampered by the need for dedicated hospital visits. Remote monitoring based on recordings of respiratory sounds on portable devices is a promising alternative, which can assist in early assessment of COVID-19 that primarily affects the lower respiratory tract. In this study, we introduce a novel deep learning approach to distinguish patients with COVID-19 from healthy controls given audio recordings of cough or breathing sounds. The proposed approach leverages a novel hierarchical spectrogram transformer (HST) on spectrogram representations of respiratory sounds. HST embodies self-attention mechanisms over local windows in spectrograms, and window size is progressively grown over model stages to capture local to global context. HST is compared against state-of-the-art conventional and deep-learning baselines. Demonstrations on crowd-sourced multi-national datasets indicate that HST outperforms competing methods, achieving over 90% area under the receiver operating characteristic curve (AUC) in detecting COVID-19 cases.

Item Open Access
Denoising diffusion adversarial models for unconditional medical image generation (IEEE - Institute of Electrical and Electronics Engineers, 2023-08-28)
Dalmaz, Onat; Sağlam, Baturay; Elmas, Gökberk; Mirza, Muhammad Usama; Çukur, Tolga
Unconditional medical image synthesis is the task of generating realistic and diverse medical images from random noise without any prior information or constraints. Synthesizing realistic medical images can enrich the quality and diversity of medical imaging datasets, which in turn enhances the performance and generalization of deep learning models for medical imaging. Prevalent approaches for synthesizing medical images involve generative adversarial networks (GAN) or denoising diffusion probabilistic models (DDPM). However, GAN models that implicitly learn the image distribution are prone to limited sample fidelity and diversity. On the other hand, diffusion models suffer from slow sampling speed due to small diffusion steps. In this paper, we propose a novel diffusion-based method for unconditional medical image synthesis, Diff-Med-Synth, that generates realistic and diverse medical images from random noise. Diff-Med-Synth combines the advantages of denoising diffusion probabilistic models and GANs to achieve fast and efficient image sampling. We evaluate our method on two multi-contrast MRI datasets and show that it outperforms state-of-the-art methods in terms of quality, diversity, and fidelity of the synthesized images.

Item Open Access
Detecting COVID-19 from respiratory sound recordings with transformers (SPIE - International Society for Optical Engineering, 2022-04-04)
Aytekin, İdil; Dalmaz, Onat; Ankishan, Haydar; Sarıtaş, Emine Ü.; Bağcı, Ulaş; Çukur, Tolga; Çelik, Haydar
Auscultation is an established technique in clinical assessment of symptoms for respiratory disorders. Auscultation is safe and inexpensive, but requires expertise to diagnose a disease using a stethoscope during hospital or office visits. However, some clinical scenarios require continuous monitoring and automated analysis of respiratory sounds to pre-screen and monitor diseases, such as the rapidly spreading COVID-19. Recent studies suggest that audio recordings of bodily sounds captured by mobile devices might carry features helpful to distinguish patients with COVID-19 from healthy controls. Here, we propose a novel deep learning technique to automatically detect COVID-19 patients based on brief audio recordings of their cough and breathing sounds. The proposed technique first extracts spectrogram features of respiratory recordings, and then classifies disease state via a hierarchical vision transformer architecture. Demonstrations are provided on a crowdsourced database of respiratory sounds from COVID-19 patients and healthy controls. The proposed transformer model is compared against alternative methods based on state-of-the-art convolutional and transformer architectures, as well as traditional machine-learning classifiers. Our results indicate that the proposed model achieves on par or superior performance to competing methods. In particular, the proposed technique can distinguish COVID-19 patients from healthy subjects with over 94% AUC.
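To make the spectrogram front end mentioned in the abstract above concrete, here is a minimal sketch of turning a respiratory recording into a normalized log-mel spectrogram that a transformer classifier could consume. The sampling rate, FFT size, hop length, and mel-band count are illustrative assumptions, not the settings used in the paper.

```python
import torch
import torchaudio

SAMPLE_RATE = 16000  # assumed; the actual recordings may use a different rate

# Log-mel spectrogram front end (parameters are illustrative).
mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=SAMPLE_RATE, n_fft=1024, hop_length=256, n_mels=64)
to_db = torchaudio.transforms.AmplitudeToDB()

waveform = torch.randn(1, 5 * SAMPLE_RATE)         # stand-in for a 5-second cough/breath clip
spec = to_db(mel(waveform))                         # (1, 64, time_frames)
spec = (spec - spec.mean()) / (spec.std() + 1e-6)   # per-clip normalization before the classifier
print(spec.shape)
```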
Item Open Access
edaGAN: Encoder-Decoder Attention Generative Adversarial Networks for multi-contrast MR image synthesis (Institute of Electrical and Electronics Engineers, 2022-05-16)
Dalmaz, Onat; Sağlam, Baturay; Gönç, Kaan; Çukur, Tolga
Magnetic resonance imaging (MRI) is the preferred modality among radiologists in the clinic due to its superior depiction of tissue contrast. Its ability to capture different contrasts within an exam session allows it to collect additional diagnostic information. However, such multi-contrast MRI exams take a long time to scan, so often only a portion of the required contrasts is acquired. Consequently, synthetic multi-contrast MRI can improve subsequent radiological observations and image analysis tasks like segmentation and detection. Because of this significant potential, multi-contrast MRI synthesis approaches are gaining popularity. Recently, generative adversarial networks (GAN) have become the de facto choice for synthesis tasks in medical imaging due to their sensitivity to realism and high-frequency structures. In this study, we present a novel generative adversarial approach for multi-contrast MRI synthesis that combines the learning of deep residual convolutional networks with spatial modulation introduced by an attention gating mechanism to synthesize high-quality MR images. We show the superiority of the proposed approach against various synthesis models on multi-contrast MRI datasets.
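The attention-gating idea referenced in the edaGAN abstract can be sketched as a small module that spatially re-weights encoder features before they are merged into the decoder. The sketch below follows the generic additive attention-gate formulation; channel sizes and the gating-signal handling are assumptions and not the exact edaGAN design.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate: gating signal g spatially re-weights features x."""
    def __init__(self, x_ch, g_ch, inter_ch):
        super().__init__()
        self.theta_x = nn.Conv2d(x_ch, inter_ch, 1)
        self.phi_g = nn.Conv2d(g_ch, inter_ch, 1)
        self.psi = nn.Conv2d(inter_ch, 1, 1)

    def forward(self, x, g):
        # g is assumed to already be at the same spatial size as x (upsample beforehand otherwise).
        attn = torch.sigmoid(self.psi(torch.relu(self.theta_x(x) + self.phi_g(g))))
        return x * attn  # spatially modulated features

gate = AttentionGate(x_ch=64, g_ch=128, inter_ch=32)
x = torch.randn(1, 64, 64, 64)   # encoder features
g = torch.randn(1, 128, 64, 64)  # coarser gating features, already upsampled
print(gate(x, g).shape)
```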
Item Open Access
Improving image synthesis quality in multi-contrast MRI using transfer learning via autoencoders (IEEE, 2022-08-29)
Selçuk, Şahan Yoruç; Dalmaz, Onat; Ul Hassan Dar, Salman; Çukur, Tolga
The capacity of magnetic resonance imaging (MRI) to capture several contrasts within a session enables it to obtain increased diagnostic information. However, such multi-contrast MRI exams take a long time to scan, resulting in the acquisition of only part of the essential contrasts. Synthetic multi-contrast MRI has the potential to improve radiological observations and consequent image analysis activities. Because of their ability to generate realistic results, generative adversarial networks (GAN) have recently been the most popular choice for medical image synthesis. This paper proposes a novel generative adversarial framework to improve the image synthesis quality in multi-contrast MRI. Our method uses transfer learning to adapt pre-trained autoencoder networks to the synthesis task and enhances the image synthesis quality by initializing the training process with more optimal network parameters. We demonstrate that the proposed method outperforms competing synthesis models by 0.95 dB on average on a well-known multi-contrast MRI dataset.
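A hedged sketch of the initialization strategy described above: pre-train an autoencoder with a reconstruction loss, then copy its encoder and decoder weights into the GAN generator before adversarial training begins. The module names and layer layouts here are placeholders, not the paper's exact architecture.

```python
import torch.nn as nn

def conv_stack():
    return nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True))

class AutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = conv_stack()
        self.decoder = nn.Conv2d(32, 1, 3, padding=1)
    def forward(self, x):
        return self.decoder(self.encoder(x))

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = conv_stack()           # same layout so weights can be transferred
        self.decoder = nn.Conv2d(32, 1, 3, padding=1)
    def forward(self, x):
        return self.decoder(self.encoder(x))

# Step 1 (not shown): train the autoencoder with a reconstruction loss on the target contrast.
ae = AutoEncoder()
# Step 2: warm-start the generator from the pre-trained autoencoder weights.
gen = Generator()
gen.encoder.load_state_dict(ae.encoder.state_dict())
gen.decoder.load_state_dict(ae.decoder.state_dict())
# Step 3: continue with standard adversarial training of gen against a discriminator.
```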
Item Open Access
Improving the performance of Batch-Constrained reinforcement learning in continuous action domains via generative adversarial networks (IEEE, 2022-08-29)
Sağlam, Baturay; Dalmaz, Onat; Gönç, Kaan; Kozat, Süleyman S.
The Batch-Constrained Q-learning (BCQ) algorithm is shown to overcome the extrapolation error and enable deep reinforcement learning agents to learn from a previously collected fixed batch of transitions. However, due to the conditional Variational Autoencoder (VAE) used in the data generation module, the BCQ algorithm optimizes a lower variational bound and hence is not generalizable to environments with large state and action spaces. In this paper, we show that the performance of the BCQ algorithm can be further improved by employing one of the recent advances in deep learning, Generative Adversarial Networks. Our extensive set of experiments shows that the introduced approach significantly improves BCQ in all of the control tasks tested. Moreover, the introduced approach demonstrates robust generalizability to environments with large state and action spaces in the OpenAI Gym control suite.

Item Open Access
An intrinsic motivation based artificial goal generation in on-policy continuous control (IEEE, 2022-08-29)
Sağlam, Baturay; Mutlu, Furkan B.; Gönç, Kaan; Dalmaz, Onat; Kozat, Süleyman S.
This work adapts the existing theories on animal motivational systems into the reinforcement learning (RL) paradigm to constitute a directed exploration strategy in on-policy continuous control. We introduce a novel and scalable artificial bonus reward rule that encourages agents to visit useful state spaces. By unifying the intrinsic incentives in the reinforcement learning paradigm under the introduced deterministic reward rule, our method forces the value function to learn the values of unseen or less-known states and prevents premature behavior before the environment is sufficiently learned. The simulation results show that the proposed algorithm considerably improves upon state-of-the-art on-policy methods and strengthens the inherent entropy-based exploration.

Item Restricted
Körfez Savaşı Türkiye dış politikası [Turkish foreign policy during the Gulf War] (Bilkent University, 2018)
Demirok, Hüdaverdi Alperen; Bulut, Osman; Girginkaya, Raşit Emre; Erdem, Doğa; Dalmaz, Onat
Iraq under Saddam's rule attacked Kuwait on 2 August 1990 and quickly established control over the country. Nearly all global and regional powers, led by the United States, reacted harshly to the occupation for various strategic and economic reasons and, through the United Nations, put first economic and then military sanctions against Iraq into effect. Turkey took part in the anti-Iraq coalition from the outset, swiftly enforced all economic sanctions, and actively participated in the US-led military operation that began on 17 January 1991 with the aim of driving Iraq out of Kuwait. The foreign policy Turkey pursued in this period had serious consequences in both the short and the long term. Turkey, one of Iraq's largest trading partners, came under a heavy economic burden as a result of the embargo decisions; a negative growth rate was recorded in 1991, and the economic stagnation continued throughout the 1990s. The power vacuum that emerged in Iraq as a result of the war and the Kurdish autonomous region constitute another contested issue. In this study, Turkey's Gulf War foreign policy is examined through the existing literature on the Gulf War as well as interviews conducted with various diplomats at the Ministry of Foreign Affairs of the Republic of Turkey.

Item Open Access
Multi-contrast MRI synthesis with channel-exchanging-network (IEEE, 2022-08-29)
Dalmaz, Onat; Aytekin, İdil; Dar, Salman Ul Hassan; Erdem, Aykut; Erdem, Erkut; Çukur, Tolga
Magnetic resonance imaging (MRI) is used in many diagnostic applications as it offers high soft-tissue contrast and is a non-invasive medical imaging method. MR signal levels differ according to the parameters T1, T2, and PD, which change with respect to the chemical structure of the tissues. However, long scan times might limit acquiring images from multiple contrasts, or, if multi-contrast images are acquired, the contrasts may be noisy. To overcome this limitation of MRI, multi-contrast synthesis can be utilized. In this paper, we propose a deep learning method based on the Channel-Exchanging-Network (CEN) for multi-contrast image synthesis. Demonstrations are provided on the IXI dataset. The proposed model based on CEN is compared against alternative methods based on CNNs and GANs. Our results show that the proposed model achieves superior performance to the competing methods.
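As a loose illustration of channel exchanging, the snippet below follows the general CEN recipe rather than this paper's exact configuration: feature channels are swapped between two contrast-specific streams wherever a stream's batch-norm scaling factor falls below a threshold, letting the other stream's information flow in. The threshold and the assumption that a sparsity penalty drives some scaling factors toward zero are taken from the generic CEN formulation, not from this abstract.

```python
import torch
import torch.nn as nn

def exchange_channels(feat_a, feat_b, bn_a, bn_b, threshold=1e-2):
    """Swap channels whose BN scale (gamma) is near zero with the other stream's channels.

    feat_a, feat_b: (N, C, H, W) features from two contrast-specific sub-networks.
    bn_a, bn_b:     the BatchNorm2d layers that produced them (their .weight is gamma).
    """
    mask_a = bn_a.weight.abs() < threshold   # channels of stream A deemed uninformative
    mask_b = bn_b.weight.abs() < threshold
    out_a, out_b = feat_a.clone(), feat_b.clone()
    out_a[:, mask_a] = feat_b[:, mask_a]     # replace weak A-channels with B's channels
    out_b[:, mask_b] = feat_a[:, mask_b]
    return out_a, out_b

# In practice an L1 penalty on the BN scales pushes some gammas toward zero so that
# exchanges actually occur; with freshly initialized layers below, no channels swap.
bn_a, bn_b = nn.BatchNorm2d(32), nn.BatchNorm2d(32)
fa, fb = bn_a(torch.randn(2, 32, 64, 64)), bn_b(torch.randn(2, 32, 64, 64))
fa, fb = exchange_channels(fa, fb, bn_a, bn_b)
```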
Item Open Access
Novel deep learning algorithms for multi-modal medical image synthesis (2023-08)
Dalmaz, Onat
Multi-modal medical imaging is a powerful tool for diagnosis and treatment of various diseases, as it provides complementary information about tissue morphology and function. However, acquiring multiple images from different modalities or contrasts is often impractical or impossible due to various factors such as scan time, cost, and patient comfort. Medical image translation has emerged as a promising solution to synthesize target-modality images given source-modality images. The ability to synthesize unavailable images enhances the ubiquity and utility of multi-modal protocols while decreasing examination costs and exposure to toxicity such as ionizing radiation and contrast agents. Existing medical image translation methods prominently rely on generative adversarial networks (GANs) with convolutional neural network (CNN) backbones. CNNs are designed to perform local processing with compact filters, and this inductive bias is prone to limited contextual sensitivity. Meanwhile, GANs suffer from limited sample fidelity and diversity due to one-shot sampling and implicit characterization of the image distribution. To overcome the challenges with CNN-based GAN models, this thesis first introduces ResViT, which leverages novel aggregated residual transformer (ART) blocks that synergistically fuse representations from convolutional and transformer modules. It then introduces SynDiff, a conditional diffusion model that progressively maps noise and source images onto the target image via large diffusion steps and adversarial projections, capturing a direct correlate of the image distribution and improving sample quality and speed. ResViT provides a unified implementation to avoid the need to rebuild separate synthesis models for varying source-target modality configurations, whereas SynDiff enables unsupervised training on unpaired datasets via a cycle-consistent architecture. ResViT and SynDiff were demonstrated on synthesizing missing sequences in multi-contrast MRI and synthesizing CT images from MRI, and their state-of-the-art performance in medical image translation was shown.

Item Open Access
ResViT: residual vision transformers for multimodal medical image synthesis (Institute of Electrical and Electronics Engineers Inc., 2022-04-18)
Dalmaz, Onat; Yurt, Mahmut; Çukur, Tolga
Generative adversarial models with convolutional neural network (CNN) backbones have recently been established as state-of-the-art in numerous medical image synthesis tasks. However, CNNs are designed to perform local processing with compact filters, and this inductive bias compromises learning of contextual features. Here, we propose a novel generative adversarial approach for medical image synthesis, ResViT, that leverages the contextual sensitivity of vision transformers along with the precision of convolution operators and the realism of adversarial learning. ResViT's generator employs a central bottleneck comprising novel aggregated residual transformer (ART) blocks that synergistically combine residual convolutional and transformer modules. Residual connections in ART blocks promote diversity in captured representations, while a channel compression module distills task-relevant information. A weight sharing strategy is introduced among ART blocks to mitigate computational burden. A unified implementation is introduced to avoid the need to rebuild separate synthesis models for varying source-target modality configurations. Comprehensive demonstrations are performed for synthesizing missing sequences in multi-contrast MRI, and CT images from MRI. Our results indicate the superiority of ResViT against competing CNN- and transformer-based methods in terms of qualitative observations and quantitative metrics.
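A rough sketch of an ART-style block in the spirit of the ResViT abstract above: a residual convolutional path runs in parallel with a transformer operating on flattened spatial tokens, and a 1x1 convolution compresses the concatenated features back to the input width. Dimensions, the use of a stock transformer layer, and the fusion details are assumptions for illustration, not the published block.

```python
import torch
import torch.nn as nn

class ARTBlock(nn.Module):
    """Residual convolution + transformer branch, fused by a channel-compression conv."""
    def __init__(self, ch=64, heads=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))
        self.transformer = nn.TransformerEncoderLayer(
            d_model=ch, nhead=heads, dim_feedforward=2 * ch, batch_first=True)
        self.compress = nn.Conv2d(2 * ch, ch, 1)   # distill the concatenated features

    def forward(self, x):
        n, c, h, w = x.shape
        conv_out = x + self.conv(x)                       # residual convolutional path
        tokens = x.flatten(2).transpose(1, 2)             # (N, H*W, C) spatial tokens
        ctx = self.transformer(tokens).transpose(1, 2).reshape(n, c, h, w)
        return self.compress(torch.cat([conv_out, ctx], dim=1))

block = ARTBlock()
print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```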
Item Open Access
Semi-supervised learning of MRI synthesis without fully-sampled ground truths (IEEE, 2022-08-16)
Yurt, Mahmut; Dalmaz, Onat; Dar, Salman; Özbey, Muzaffer; Tınaz, Berk; Oğuz, Kader; Çukur, Tolga
Learning-based translation between MRI contrasts involves supervised deep models trained using high-quality source- and target-contrast images derived from fully-sampled acquisitions, which might be difficult to collect under limitations on scan costs or time. To facilitate curation of training sets, here we introduce the first semi-supervised model for MRI contrast translation (ssGAN) that can be trained directly using undersampled k-space data. To enable semi-supervised learning on undersampled data, ssGAN introduces novel multi-coil losses in image, k-space, and adversarial domains. The multi-coil losses are selectively enforced on acquired k-space samples, unlike traditional losses in single-coil synthesis models. Comprehensive experiments on retrospectively undersampled multi-contrast brain MRI datasets are provided. Our results demonstrate that ssGAN yields on par performance to a supervised model, while outperforming single-coil models trained on coil-combined magnitude images. It also outperforms cascaded reconstruction-synthesis models where a supervised synthesis model is trained following self-supervised reconstruction of undersampled data. Thus, ssGAN holds great promise to improve the feasibility of learning-based multi-contrast MRI synthesis.
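The key idea of enforcing losses only on acquired k-space samples can be sketched as below: the synthesized image is projected onto coil sensitivities, transformed to k-space, and penalized only at sampled locations. The loss form, normalization, and tensor layout are illustrative assumptions rather than the model's actual loss terms.

```python
import torch

def masked_kspace_loss(synth_image, coil_sens, acquired_kspace, mask):
    """L1 penalty between synthesized and acquired k-space, restricted to sampled locations.

    synth_image:     (N, H, W) complex synthesized target-contrast image
    coil_sens:       (N, C, H, W) complex coil sensitivity maps
    acquired_kspace: (N, C, H, W) complex undersampled measurements
    mask:            (N, 1, H, W) binary sampling mask (1 = acquired)
    """
    coil_images = coil_sens * synth_image.unsqueeze(1)          # project onto each coil
    synth_kspace = torch.fft.fft2(coil_images, norm="ortho")    # go to k-space
    diff = (synth_kspace - acquired_kspace) * mask              # keep only acquired samples
    return diff.abs().sum() / mask.sum().clamp(min=1)

n, c, h, w = 1, 8, 64, 64
img = torch.randn(n, h, w, dtype=torch.complex64)
sens = torch.randn(n, c, h, w, dtype=torch.complex64)
ksp = torch.randn(n, c, h, w, dtype=torch.complex64)
mask = (torch.rand(n, 1, h, w) < 0.5).float()
print(masked_kspace_loss(img, sens, ksp, mask))
```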
Item Open Access
Skip connections for medical image synthesis with generative adversarial networks (IEEE, 2022-08-29)
Mirza, Muhammad Usama; Dalmaz, Onat; Çukur, Tolga
Magnetic Resonance Imaging (MRI) is an imaging technique used to produce detailed anatomical images. Acquiring multiple contrast MRI images requires long scan times, forcing the patient to remain still. Scan times can be reduced by synthesising unacquired contrasts from acquired contrasts. In recent years, deep generative adversarial networks have been used to synthesise contrasts using one-to-one mapping. Deeper networks can solve more complex functions; however, their performance can decline due to problems such as overfitting and vanishing gradients. In this study, we propose adding skip connections to generative models to overcome the decline in performance with increasing complexity. This allows the network to bypass unnecessary parameters in the model. Our results show an increase in performance in one-to-one image synthesis by integrating skip connections.

Item Open Access
A specificity-preserving generative model for federated MRI translation (Springer Cham, 2022-10-07)
Dalmaz, Onat; Mirza, Usama; Elmas, Gökberk; Özbey, Muzaffer; Dar, Salman U. H.; Çukur, Tolga; Albarqouni, Shadi; Bakas, Spyridon; Bano, Sophia; Cardoso, M. Jorge; Khanal, Bishesh; Landman, Bennett; Li, Xiaoxiao
MRI translation models learn a mapping from an acquired source contrast to an unavailable target contrast. Collaboration between institutes is essential to train translation models that can generalize across diverse datasets. That said, aggregating all imaging data and training a centralized model poses privacy problems. Recently, federated learning (FL) has emerged as a collaboration framework that enables decentralized training to avoid sharing of imaging data. However, FL-trained translation models can deteriorate due to the inherent heterogeneity in the distribution of MRI data. To improve reliability against domain shifts, here we introduce a novel specificity-preserving FL method for MRI contrast translation. The proposed approach is based on an adversarial model that adaptively normalizes the feature maps across the generator based on site-specific latent variables. Comprehensive FL experiments were conducted on multi-site datasets to show the effectiveness of the proposed approach against prior federated methods in MRI contrast translation.
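A hedged sketch of the specificity-preserving mechanism described above: generator feature maps are normalized and then re-scaled and shifted by parameters predicted from a site-specific latent vector, so a federated model shared across sites can still adapt to each site's distribution. The layer sizes, the choice of instance normalization, and the latent dimensionality are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SiteAdaptiveNorm(nn.Module):
    """Instance-normalize features, then modulate them with site-specific scale and shift."""
    def __init__(self, num_features, latent_dim=16):
        super().__init__()
        self.norm = nn.InstanceNorm2d(num_features, affine=False)
        self.to_scale = nn.Linear(latent_dim, num_features)
        self.to_shift = nn.Linear(latent_dim, num_features)

    def forward(self, feat, site_latent):
        # site_latent: (N, latent_dim) latent vector associated with the acquiring site
        scale = self.to_scale(site_latent).unsqueeze(-1).unsqueeze(-1)
        shift = self.to_shift(site_latent).unsqueeze(-1).unsqueeze(-1)
        return self.norm(feat) * (1 + scale) + shift

layer = SiteAdaptiveNorm(num_features=64)
feat = torch.randn(2, 64, 32, 32)
site_z = torch.randn(2, 16)
print(layer(feat, site_z).shape)
```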
Item Open Access
Unified intrinsically motivated exploration for off-policy learning in continuous action spaces (IEEE, 2022-08-29)
Sağlam, Baturay; Mutlu, Furkan B.; Dalmaz, Onat; Kozat, Süleyman S.
Exploration is typically maintained in continuous control using undirected methods, in which random noise perturbs the network parameters or selected actions. Intrinsically driven exploration is a good alternative to undirected techniques, but it has been studied only for discrete action domains. In this study, the intrinsic incentives in the existing reinforcement learning literature are unified under a deterministic artificial goal generation rule for off-policy learning. Through this rule, the agent gains an additional reward if it chooses actions that lead it to useful state spaces. An extensive set of experiments indicates that the introduced artificial reward rule significantly improves the performance of the off-policy baseline algorithms.

Item Open Access
Unsupervised medical image translation with adversarial diffusion models (Institute of Electrical and Electronics Engineers, 2023-11-30)
Özbey, Muzaffer; Dalmaz, Onat; Dar, Salman Ul Hassan; Bedel, Hasan Atakan; Özturk, Şaban; Güngör, Alper; Çukur, Tolga
Imputation of missing images via source-to-target modality translation can improve diversity in medical imaging protocols. A pervasive approach for synthesizing target images involves one-shot mapping through generative adversarial networks (GAN). Yet, GAN models that implicitly characterize the image distribution can suffer from limited sample fidelity. Here, we propose a novel method based on adversarial diffusion modeling, SynDiff, for improved performance in medical image translation. To capture a direct correlate of the image distribution, SynDiff leverages a conditional diffusion process that progressively maps noise and source images onto the target image. For fast and accurate image sampling during inference, large diffusion steps are taken with adversarial projections in the reverse diffusion direction. To enable training on unpaired datasets, a cycle-consistent architecture is devised with coupled diffusive and non-diffusive modules that bilaterally translate between two modalities. Extensive assessments are reported on the utility of SynDiff against competing GAN and diffusion models in multi-contrast MRI and MRI-CT translation. Our demonstrations indicate that SynDiff offers quantitatively and qualitatively superior performance against competing baselines.

Item Open Access
User feedback-based online learning for intent classification (Association for Computing Machinery, 2023-10-09)
Gönç, Kaan; Sağlam, Baturay; Dalmaz, Onat; Çukur, Tolga; Kozat, Serdar; Dibeklioğlu, Hamdi
Intent classification is a key task in natural language processing (NLP) that aims to infer the goal or intention behind a user's query. Most existing intent classification methods rely on supervised deep models trained on large annotated datasets of text-intent pairs. However, obtaining such datasets is often expensive and impractical in real-world settings. Furthermore, supervised models may overfit or face distributional shifts when new intents, utterances, or data distributions emerge over time, requiring frequent retraining. Online learning methods based on user feedback can overcome this limitation, as they do not need access to intents while collecting data and can adapt the model continuously. In this paper, we propose a novel multi-armed contextual bandit framework that leverages a text encoder based on a large language model (LLM) to extract the latent features of a given utterance and jointly learn multimodal representations of encoded text features and intents. Our framework consists of two stages: offline pretraining and online fine-tuning. In the offline stage, we train the policy on a small labeled dataset using a contextual bandit approach. In the online stage, we fine-tune the policy parameters using the REINFORCE algorithm with a user feedback-based objective, without relying on the true intents. We further introduce a sliding window strategy for simulating the retrieval of data samples during online training. This novel two-phase approach enables our method to efficiently adapt to dynamic user preferences and data distributions with improved performance. An extensive set of empirical studies indicates that our method significantly outperforms policies that omit either offline pretraining or online fine-tuning, while achieving competitive performance relative to a supervised benchmark trained on an order-of-magnitude larger labeled dataset.
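To make the online fine-tuning stage concrete, here is a minimal REINFORCE-style update in which a policy head over intents is adjusted from binary user feedback used as the reward, with no access to true intent labels. The encoder features, reward definition, baseline, and optimizer settings are placeholder assumptions rather than the published configuration.

```python
import torch
import torch.nn as nn

num_intents, feat_dim = 10, 768
policy = nn.Linear(feat_dim, num_intents)        # policy head on top of a frozen text encoder
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

def reinforce_step(utterance_features, get_user_feedback, baseline=0.5):
    """One online update: sample an intent, observe feedback, apply the REINFORCE gradient."""
    logits = policy(utterance_features)           # (1, num_intents)
    dist = torch.distributions.Categorical(logits=logits)
    action = dist.sample()                        # predicted intent shown to the user
    reward = get_user_feedback(action.item())     # 1.0 if the user accepts, 0.0 otherwise
    loss = -(reward - baseline) * dist.log_prob(action).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return action.item(), reward

# Stand-ins: random encoder features and a simulated user who always wants intent 3.
feats = torch.randn(1, feat_dim)
action, reward = reinforce_step(feats, lambda intent: 1.0 if intent == 3 else 0.0)
print(action, reward)
```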