Browsing by Subject "Transformers"
Now showing 1 - 7 of 7
Item Open Access
Deep unsupervised learning for accelerated MRI reconstruction (2022-07) Korkmaz, Yılmaz
Supervised reconstruction models are characteristically trained on matched pairs of undersampled and fully-sampled data to capture an MRI prior, along with supervision regarding the imaging operator to enforce data consistency. To reduce supervision requirements, the recent deep image prior framework instead conjoins untrained MRI priors with the imaging operator during inference. Yet, canonical convolutional architectures are suboptimal at capturing long-range relationships, and priors based on randomly initialized networks may yield suboptimal performance. To address these limitations, this thesis introduces a novel unsupervised MRI reconstruction method based on zero-Shot Learned Adversarial TransformERs (SLATER). SLATER embodies a deep adversarial network with cross-attention transformers to map noise and latent variables onto coil-combined MR images. During pre-training, this unconditional network learns a high-quality MRI prior in an unsupervised generative modeling task. During inference, a zero-shot reconstruction is then performed by incorporating the imaging operator and optimizing the prior to maximize consistency with the undersampled data. Comprehensive experiments on brain MRI datasets demonstrate the superior performance of SLATER over state-of-the-art unsupervised methods.

Item Open Access
Face inpainting with pre-trained image transformers (IEEE, 2022-08-29) Gönç, Kaan; Sağlam, Baturay; Kozat, Süleyman S.; Dibeklioğlu, Hamdi
Image inpainting is an underdetermined inverse problem that allows various contents to fill in the missing or damaged regions realistically. Convolutional neural networks (CNNs) are commonly used to create aesthetically pleasing content, yet CNNs have restricted receptive fields for collecting global characteristics.
Transformers enable long-range relationships to be modeled and diverse content to be generated through autoregressive modeling of pixel-sequence distributions with an image-level attention mechanism. However, current approaches to inpainting with transformers are limited to task-specific datasets and require large-scale data. We introduce an approach to image inpainting that leverages pre-trained vision transformers to remedy this issue. Experiments show that our approach can outperform CNN-based approaches and achieves performance close to that of task-specific transformer methods.

Item Open Access
Multivariate time series imputation with transformers (IEEE, 2022-11-25) Yıldız, A. Yarkın; Koç, Emirhan; Koç, Aykut
Processing time series with missing segments is a fundamental challenge that obstructs advanced analysis in various disciplines such as engineering, medicine, and economics. One remedy is imputation, which fills in the missing values based on observed values without undermining performance. We propose Multivariate Time-Series Imputation with Transformers (MTSIT), a novel method that uses the transformer architecture in an unsupervised manner for missing-value imputation. Unlike existing transformer architectures, this model uses only the encoder part of the transformer for computational efficiency. Crucially, MTSIT trains the autoencoder by jointly reconstructing and imputing stochastically masked inputs via an objective designed for multivariate time-series data. The trained autoencoder is then evaluated for imputing both simulated and real missing values. Experiments show that MTSIT outperforms state-of-the-art imputation methods on benchmark datasets.

Item Open Access
RadGT: graph and transformer-based automotive radar point cloud segmentation (Institute of Electrical and Electronics Engineers, 2023-10-25) Sevimli, R. A.; Ucuncu, M.; Koç, Aykut
The need for visual perception systems providing situational awareness to autonomous vehicles has grown significantly. While traditional deep neural networks are effective for solving 2-D Euclidean problems, point cloud analysis, particularly for radar data, poses unique challenges because of the irregular geometry of point clouds. This letter proposes a novel transformer-based architecture for radar point clouds adapted to the graph signal processing (GSP) framework, designed to handle non-Euclidean and irregular signal structures. We provide experimental results on well-established benchmarks from the nuScenes and RadarScenes datasets to validate the proposed method.

Item Open Access
Self-supervised MRI reconstruction with unrolled diffusion models (Springer Science and Business Media Deutschland GmbH, 2023) Korkmaz, Y.; Çukur, Tolga; Patel, V. M.
Magnetic Resonance Imaging (MRI) produces excellent soft-tissue contrast, albeit it is an inherently slow imaging modality. Promising deep learning methods have recently been proposed to reconstruct accelerated MRI scans. However, existing methods still suffer from various limitations regarding image fidelity, contextual sensitivity, and reliance on fully-sampled acquisitions for model training. To comprehensively address these limitations, we propose a novel self-supervised deep reconstruction model, named Self-Supervised Diffusion Reconstruction (SSDiffRecon). SSDiffRecon expresses a conditional diffusion process as an unrolled architecture that interleaves cross-attention transformers for reverse diffusion steps with data-consistency blocks for physics-driven processing. Unlike recent diffusion methods for MRI reconstruction, a self-supervision strategy is adopted to train SSDiffRecon using only undersampled k-space data.
Comprehensive experiments on public brain MR datasets demonstrate the superiority of SSDiffRecon over state-of-the-art supervised and self-supervised baselines in terms of reconstruction speed and quality. Implementation will be available at https://github.com/yilmazkorkmaz1/SSDiffRecon. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023.

Item Open Access
SiMiD: similarity-based misinformation detection via communities on social media posts (IEEE, 2024-01-02) Özçelik, Oğuzhan; Toraman, C.; Can, Fazlı
Social media users often find themselves exposed to similar viewpoints and tend to avoid contrasting opinions, particularly when connected within a community. In this study, we leverage the presence of communities for misinformation detection on social media. For this purpose, we propose a similarity-based method that utilizes user-follower interactions within a social network to identify and combat misinformation spread. The method first extracts important textual features of social media posts via contrastive learning and then measures the cosine similarity per post based on its relevance to each user in the community. Next, we train a classifier to assess the truthfulness of social media posts using these similarity scores. We evaluate our approach on three real-world datasets and compare our method with six baselines.
The experimental results and statistical tests show that contrastive learning and leveraging communities can effectively enhance the detection of misinformation on social media.

Item Open Access
Unsupervised MRI reconstruction via zero-shot learned adversarial transformers (Institute of Electrical and Electronics Engineers Inc., 2022-01-27) Korkmaz, Yilmaz; Dar, Salman U.H.; Yurt, Mahmut; Özbey, Muzaffer; Çukur, Tolga
Supervised reconstruction models are characteristically trained on matched pairs of undersampled and fully-sampled data to capture an MRI prior, along with supervision regarding the imaging operator to enforce data consistency. To reduce supervision requirements, the recent deep image prior framework instead conjoins untrained MRI priors with the imaging operator during inference. Yet, canonical convolutional architectures are suboptimal at capturing long-range relationships, and priors based on randomly initialized networks may yield suboptimal performance. To address these limitations, here we introduce a novel unsupervised MRI reconstruction method based on zero-Shot Learned Adversarial TransformERs (SLATER). SLATER embodies a deep adversarial network with cross-attention transformers to map noise and latent variables onto coil-combined MR images. During pre-training, this unconditional network learns a high-quality MRI prior in an unsupervised generative modeling task. During inference, a zero-shot reconstruction is then performed by incorporating the imaging operator and optimizing the prior to maximize consistency with the undersampled data. Comprehensive experiments on brain MRI datasets demonstrate the superior performance of SLATER over state-of-the-art unsupervised methods.
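The zero-shot inference step described in the SLATER abstracts above (optimize a pre-trained prior for consistency with undersampled data) can be sketched in a toy form. Everything here is an illustrative assumption, not the papers' method: the "prior" is a fixed random linear map W rather than an adversarial transformer, the imaging operator is plain row subsampling, and the optimizer is hand-rolled gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pre-trained prior: image = W @ z (assumption; the
# real prior is a generative network with cross-attention transformers).
n_pix, n_lat = 64, 16
W = rng.standard_normal((n_pix, n_lat)) / np.sqrt(n_lat)

# Ground-truth "image" drawn from the prior, and an undersampling
# operator that keeps 24 of 64 measurements.
z_true = rng.standard_normal(n_lat)
x_true = W @ z_true
keep = rng.choice(n_pix, size=24, replace=False)
y = x_true[keep]  # undersampled data

# Zero-shot reconstruction: optimize the latent z so that the prior's
# output agrees with the measured data (data-consistency objective).
z = np.zeros(n_lat)
lr = 0.1
for _ in range(5000):
    residual = (W @ z)[keep] - y
    grad = W[keep].T @ residual  # gradient of 0.5 * ||A W z - y||^2
    z -= lr * grad

x_rec = W @ z
rel_err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {rel_err:.4f}")
```

Because the toy prior is linear and the subsampled system still has full column rank, gradient descent recovers the latent almost exactly; the papers' contribution is making this work with a learned nonlinear prior on real multi-coil MRI data.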
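The joint reconstruct-and-impute objective described in the MTSIT abstract above can also be illustrated with a minimal masking loop. The shapes, the 20% masking rate, and the linear-interpolation "model" are assumptions for illustration only; MTSIT itself trains a transformer-encoder autoencoder.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy multivariate series: T time steps, D variables (assumed sizes).
T, D = 100, 3
t = np.arange(T)
X = np.stack([np.sin(0.1 * t), np.cos(0.1 * t), 0.01 * t], axis=1)

# Stochastic masking: hide a random ~20% of entries during training,
# so the model must infer them from the visible context.
mask = rng.random(X.shape) < 0.2   # True = artificially missing
X_in = np.where(mask, 0.0, X)      # masked input fed to the model

# Stand-in "model": per-variable linear interpolation of masked entries
# (a real model would be a transformer encoder over the sequence).
X_hat = X_in.copy()
for d in range(D):
    miss = mask[:, d]
    X_hat[miss, d] = np.interp(t[miss], t[~miss], X[~miss, d])

# Joint objective: reconstruction error on visible entries plus
# imputation error on the stochastically masked entries.
rec_loss = np.mean((X_hat[~mask] - X[~mask]) ** 2)
imp_loss = np.mean((X_hat[mask] - X[mask]) ** 2)
print(f"reconstruction: {rec_loss:.4f}  imputation: {imp_loss:.4f}")
```

The imputation term is the part evaluated at test time: once trained, the model fills real missing segments the same way it filled the artificial masks.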