Browsing by Subject "Self-supervised learning"
Now showing 1 - 4 of 4
Item Open Access
Self-supervised dynamic MRI reconstruction (Springer, 2021-09-25)
Acar, Mert; Çukur, Tolga; Öksüz, İlkay
Deep learning techniques have recently been adopted for accelerating dynamic MRI acquisitions. Yet, common frameworks for model training rely on the availability of large sets of fully-sampled MRI data to construct a ground truth for the network output. This heavy reliance is undesirable, as such large datasets are challenging to collect in many applications and impossible to collect for high spatiotemporal-resolution protocols. In this paper, we introduce self-supervised training of deep neural architectures for dynamic reconstruction of cardiac MRI. We hypothesize that, in the absence of ground-truth data, elevating complexity in self-supervised models can instead constrain model performance due to deficiencies in the training data. To test this hypothesis, we adopt self-supervised learning on recent state-of-the-art deep models for dynamic MRI with varying degrees of model complexity. Comparison of supervised and self-supervised variants of these models reveals that compact models have a remarkable advantage in reliability against performance loss in self-supervised settings.

Item Open Access
Self-supervised learning with graph neural networks for region of interest retrieval in histopathology (IEEE, 2021-05-05)
Özen, Yiğit; Aksoy, Selim; Kösemehmetoğlu, Kemal; Önder, Sevgen; Üner, Ayşegül
Deep learning has achieved successful performance in representation learning and content-based retrieval of histopathology images. The commonly used setting in deep learning-based approaches is supervised training of deep neural networks for classification, with the trained model then used to extract representations for computing and ranking the distances between images. However, two major challenges remain.
First, supervised training of deep neural networks requires large amounts of manually labeled data, which are often limited in the medical field. Transfer learning has been used to overcome this challenge, but its success has remained limited. Second, clinical practice in histopathology necessitates working with regions of interest (ROI) of multiple diagnostic classes with arbitrary shapes and sizes. The typical solution to this problem is to aggregate the representations of fixed-sized patches cropped from these regions to obtain region-level representations. However, naive aggregation methods cannot sufficiently exploit the rich contextual information in complex tissue structures. To tackle these two challenges, we propose a generic method that utilizes graph neural networks (GNN) combined with a self-supervised training method using a contrastive loss. GNNs enable representing arbitrarily shaped ROIs as graphs and encoding contextual information. Self-supervised contrastive learning improves the quality of learned representations without requiring labeled data. Experiments using a challenging breast histopathology dataset show that the proposed method achieves better performance than the state of the art.

Item Open Access
Self-supervised MRI reconstruction with unrolled diffusion models (Springer Science and Business Media Deutschland GmbH, 2023)
Korkmaz, Y.; Çukur, Tolga; Patel, V. M.
Magnetic Resonance Imaging (MRI) produces excellent soft tissue contrast, but it is an inherently slow imaging modality. Promising deep learning methods have recently been proposed to reconstruct accelerated MRI scans. However, existing methods still suffer from various limitations regarding image fidelity, contextual sensitivity, and reliance on fully-sampled acquisitions for model training. To comprehensively address these limitations, we propose a novel self-supervised deep reconstruction model, named Self-Supervised Diffusion Reconstruction (SSDiffRecon).
SSDiffRecon expresses a conditional diffusion process as an unrolled architecture that interleaves cross-attention transformers for reverse diffusion steps with data-consistency blocks for physics-driven processing. Unlike recent diffusion methods for MRI reconstruction, a self-supervision strategy is adopted to train SSDiffRecon using only undersampled k-space data. Comprehensive experiments on public brain MR datasets demonstrate the superiority of SSDiffRecon over state-of-the-art supervised and self-supervised baselines in terms of reconstruction speed and quality. Implementation will be available at https://github.com/yilmazkorkmaz1/SSDiffRecon. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023.

Item Open Access
Self-supervised representation learning with graph neural networks for region of interest analysis in breast histopathology (2020-12)
Özen, Yiğit
Deep learning has made a major contribution to histopathology image analysis, with representation learning outperforming hand-crafted features. However, two notable challenges remain. The first is the lack of large histopathology datasets. The commonly used setting in deep learning-based approaches is supervised training of deep and wide models using large labeled datasets. Manually labeling histopathology images is a time-consuming operation, and assembling a large public dataset has also proven difficult due to privacy concerns. Second, clinical practice in histopathology necessitates working with regions of interest of multiple diagnostic classes with arbitrary shapes and sizes. The typical solution to this problem is to aggregate the representations of fixed-sized patches cropped from these regions to obtain region-level representations. However, naive aggregation methods cannot sufficiently exploit the rich contextual information in complex tissue structures.
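The graph-based alternative to naive patch aggregation described in the histopathology abstracts can be illustrated at a toy level. The sketch below is a plain-NumPy stand-in under my own assumptions (all function names are hypothetical, and a single neighbor-averaging step stands in for learned GNN message passing); it is not the authors' code:

```python
import numpy as np

# Toy sketch (hypothetical, not the authors' code): model an ROI as a graph
# whose vertices hold fixed-sized patch embeddings and whose edges connect
# spatially adjacent patches, then aggregate with simple message passing.

def roi_to_graph(coords, radius=1.5):
    """Connect patches whose grid coordinates lie within `radius` (8-neighborhood)."""
    coords = np.asarray(coords, dtype=float)
    n = len(coords)
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return (dists <= radius) & ~np.eye(n, dtype=bool)

def message_passing(features, adj):
    """One aggregation step: average each vertex with the mean of its neighbors."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    neighbor_mean = adj @ features / deg
    return 0.5 * (features + neighbor_mean)

def region_embedding(features, adj, steps=2):
    """Region-level representation: message passing followed by mean pooling."""
    h = features
    for _ in range(steps):
        h = message_passing(h, adj)
    return h.mean(axis=0)

# Example: an ROI covered by a 3x3 grid of patches with 16-dim embeddings.
rng = np.random.default_rng(0)
coords = [(r, c) for r in range(3) for c in range(3)]
feats = rng.normal(size=(9, 16))
adj = roi_to_graph(coords)
emb = region_embedding(feats, adj)
print(emb.shape)  # (16,)
```

Swapping the random features for CNN patch embeddings and `message_passing` for a learned GNN layer recovers the overall two-stage shape (patch-level, then region-level representation learning) that the abstracts describe; the irregular boundary of a real ROI simply yields an irregular vertex set instead of a full grid.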
To tackle these two challenges, this thesis proposes a generic method that utilizes graph neural networks combined with a self-supervised training method using a contrastive loss function. The regions of interest are modeled as graphs whose vertices are fixed-sized patches cropped from the region. The proposed method has two stages. The first stage is patch-level representation learning using convolutional neural networks, which concentrates on cell-level features. The second stage is region-level representation learning using graph neural networks, which can learn the tissue structure. Graph neural networks enable representing arbitrarily shaped regions as graphs and encoding contextual information through message passing between neighboring patches. Self-supervised contrastive learning improves the quality of learned representations without requiring labeled data. We propose using self-supervised learning to train the graph neural networks with vertex dropout augmentation. Experiments using a challenging breast histopathology dataset show that the proposed method achieves better performance than the state of the art in both classification and retrieval tasks.
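The vertex dropout augmentation and contrastive objective mentioned in the thesis abstract can be sketched at a similar toy level. This is an illustrative reading, not the thesis implementation: all names are hypothetical, and the loss is a simplified InfoNCE with one positive pair per ROI:

```python
import numpy as np

rng = np.random.default_rng(1)

def vertex_dropout(features, adj, keep_prob=0.8):
    """Randomly drop vertices; remove their rows/columns from features and adjacency."""
    keep = rng.random(len(features)) < keep_prob
    if not keep.any():                  # always retain at least one vertex
        keep[0] = True
    return features[keep], adj[np.ix_(keep, keep)]

def graph_embedding(features, adj):
    """Mean-pool vertices after one neighbor-averaging step (stand-in for a GNN)."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    h = 0.5 * (features + adj @ features / deg)
    return h.mean(axis=0)

def contrastive_loss(z1, z2, temperature=0.5):
    """Simplified InfoNCE: same-ROI view pairs sit on the diagonal as positives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature    # (batch, batch) cosine similarities
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(z1))
    return -log_probs[idx, idx].mean()

# Batch of 4 toy ROIs, each a 9-vertex graph with 16-dim patch features.
graphs = []
for _ in range(4):
    f = rng.normal(size=(9, 16))
    a = rng.random((9, 9)) < 0.3
    a = np.triu(a, 1)
    graphs.append((f, a | a.T))

# Two vertex-dropout views per ROI; embeddings of the same ROI form a positive pair.
z1 = np.stack([graph_embedding(*vertex_dropout(f, a)) for f, a in graphs])
z2 = np.stack([graph_embedding(*vertex_dropout(f, a)) for f, a in graphs])
loss = contrastive_loss(z1, z2)
print(z1.shape)  # (4, 16)
```

The design intuition, as the abstract presents it, is that dropping random vertices yields two corrupted views of the same region, so pulling their embeddings together forces the network to encode tissue structure that survives missing patches, without any diagnostic labels.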