Browsing by Author "Öztürk, Şaban"

Now showing 1 - 7 of 7
    Open Access
    Adaptive diffusion priors for accelerated MRI reconstruction
    (Elsevier B.V., 2023-07-20) Güngör, Alper; Dar, Salman Ul Hassan; Öztürk, Şaban; Korkmaz, Yılmaz; Bedel, Hasan Atakan; Elmas, Gökberk; Özbey, Muzaffer; Çukur, Tolga
    Deep MRI reconstruction is commonly performed with conditional models that de-alias undersampled acquisitions to recover images consistent with fully-sampled data. Since conditional models are trained with knowledge of the imaging operator, they can show poor generalization across variable operators. Unconditional models instead learn generative image priors decoupled from the operator to improve reliability against domain shifts related to the imaging operator. Recent diffusion models are particularly promising given their high sample fidelity. Nevertheless, inference with a static image prior can perform suboptimally. Here we propose the first adaptive diffusion prior for MRI reconstruction, AdaDiff, to improve performance and reliability against domain shifts. AdaDiff leverages an efficient diffusion prior trained via adversarial mapping over large reverse diffusion steps. A two-phase reconstruction is executed following training: a rapid-diffusion phase that produces an initial reconstruction with the trained prior, and an adaptation phase that further refines the result by updating the prior to minimize data-consistency loss. Demonstrations on multi-contrast brain MRI clearly indicate that AdaDiff outperforms competing conditional and unconditional methods under domain shifts, and achieves superior or on par within-domain performance. © 2023 Elsevier B.V.
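    To make the two-phase inference concrete, below is a minimal PyTorch sketch of how a rapid-diffusion pass followed by adaptation of the prior against a data-consistency loss could be organized. The prior network, step counts, and the simple masked-Fourier imaging operator are illustrative assumptions, not the authors' implementation.

```python
import torch

class TinyPrior(torch.nn.Module):
    """Stand-in for the trained diffusion prior (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Conv2d(1, 16, 3, padding=1), torch.nn.SiLU(),
            torch.nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, x, t):
        return self.net(x)  # a real prior would also condition on the step t

def data_consistency_loss(x, y, mask):
    """||A x - y||^2 for an assumed masked-Fourier imaging operator A."""
    return (mask * torch.fft.fft2(x) - y).abs().pow(2).mean()

def adadiff_reconstruct(prior, y, mask, rapid_steps=8, adapt_steps=50, lr=1e-4):
    # Phase 1: rapid diffusion -- a few large reverse steps with the frozen prior.
    x = torch.randn(y.shape)
    with torch.no_grad():
        for t in reversed(range(rapid_steps)):
            x = prior(x, t)
    # Phase 2: adaptation -- fine-tune the prior on this scan's acquired k-space.
    opt = torch.optim.Adam(prior.parameters(), lr=lr)
    for _ in range(adapt_steps):
        opt.zero_grad()
        data_consistency_loss(prior(x, 0), y, mask).backward()
        opt.step()
    with torch.no_grad():
        return prior(x, 0)

# Example: a 1x1x64x64 scan with roughly half the k-space samples acquired.
mask = (torch.rand(64, 64) < 0.5).float()
y = mask * torch.fft.fft2(torch.randn(1, 1, 64, 64))
recon = adadiff_reconstruct(TinyPrior(), y, mask)
```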
    Open Access
    Content-based medical image retrieval with opponent class adaptive margin loss
    (Elsevier Inc., 2023-04-13) Öztürk, Şaban; Çelik, Emin; Çukur, Tolga
    The increasing utilization of medical imaging technology with digital storage capabilities has facilitated the compilation of large-scale data repositories. Fast access to image samples with similar appearance to suspected cases in these repositories can help establish a consulting system for healthcare professionals, and improve diagnostic procedures while minimizing processing delays. However, manual querying of large repositories is labor intensive. Content-based image retrieval (CBIR) offers an automated solution via quantitative assessment of similarity between image features in a latent space. Since conventional methods based on hand-crafted features typically show poor generalization performance, learning-based CBIR methods have received attention recently. A common framework in this domain involves classifier-guided models that are trained to detect different image classes. Similarity assessments are then performed on the features captured by the intermediate stages of the trained models. While classifier-guided methods are powerful in inter-class discrimination, they are suboptimally sensitive to within-class differences in image features. An alternative framework instead performs task-agnostic training to learn an embedding space that enforces the representational discriminability of images. Within this representational-learning framework, a powerful method is triplet-wise learning, which addresses the deficiencies of point-wise and pair-wise learning in characterizing the similarity relationships between image classes. However, the traditional triplet loss enforces separation between only a subset of image samples within the triplet via a manually-set constant margin value, so it can lead to suboptimal segregation of opponent classes and limited generalization performance. To address these limitations, we introduce a triplet-learning method for automated querying of medical image repositories based on a novel Opponent Class Adaptive Margin (OCAM) loss. To maintain optimally discriminative representations, OCAM considers relationships among all image pairs within the triplet and utilizes an adaptive margin value that is automatically selected per dataset and during the course of training iterations. CBIR performance of OCAM is compared against state-of-the-art loss functions for representational learning on three public databases (gastrointestinal disease, skin lesion, lung disease). On average, OCAM achieves an mAP of 86.30% on the KVASIR dataset, 70.30% on the ISIC 2019 dataset, and 85.57% on the X-RAY dataset. Comprehensive experiments in each application domain demonstrate that OCAM outperforms competing triplet-wise methods by 1.52%, classifier-guided methods by 2.29%, and non-triplet representational-learning methods by 4.56%.
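    The abstract does not spell out OCAM's exact margin rule, so the sketch below only illustrates the general idea it motivates: a triplet loss that penalizes all pair relations within the triplet and derives its margin from batch statistics instead of a hand-set constant. The adaptive rule shown is an assumption, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def adaptive_margin_triplet(anchor, positive, negative):
    """Triplet loss over all three pair relations with a batch-adaptive margin
    (an assumed stand-in for OCAM's dataset- and iteration-adaptive margin)."""
    d_ap = F.pairwise_distance(anchor, positive)    # anchor-positive
    d_an = F.pairwise_distance(anchor, negative)    # anchor-negative
    d_pn = F.pairwise_distance(positive, negative)  # positive-negative (the pair
                                                    # a constant-margin loss ignores)
    # Assumed rule: the margin tracks the batch's current separation gap.
    margin = (d_an.detach().mean() - d_ap.detach().mean()).clamp(min=0.1)
    # Penalize both opposing pairs so opponent classes separate from both ends.
    return (F.relu(d_ap - d_an + margin) + F.relu(d_ap - d_pn + margin)).mean()
```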
    Open Access
    Deep clustering via center-oriented margin-free triplet loss for skin lesion detection in highly imbalanced datasets
    (Institute of Electrical and Electronics Engineers Inc., 2022-06-29) Öztürk, Şaban; Çukur, Tolga
    Melanoma is a fatal skin cancer that is nonetheless curable, with dramatically improved survival rates, when diagnosed at an early stage. Learning-based methods hold significant promise for the detection of melanoma from dermoscopic images. However, since melanoma is a rare disease, existing databases of skin lesions predominantly contain highly imbalanced numbers of benign versus malignant samples. In turn, this imbalance introduces substantial bias in classification models due to the statistical dominance of the majority class. To address this issue, we introduce a deep clustering approach based on the latent-space embedding of dermoscopic images. Clustering is achieved using a novel center-oriented margin-free triplet loss (COM-Triplet) enforced on image embeddings from a convolutional neural network backbone. The proposed method aims to form maximally-separated cluster centers as opposed to minimizing classification error, so it is less sensitive to class imbalance. To avoid the need for labeled data, we further propose to implement COM-Triplet based on pseudo-labels generated by a Gaussian mixture model (GMM). Comprehensive experiments show that deep clustering with the COM-Triplet loss outperforms clustering with the conventional triplet loss, as well as competing classifiers, in both supervised and unsupervised settings. © 2013 IEEE.
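    A hedged sketch of the core idea follows: pull each embedding toward its own cluster center and past the nearest opposing center with no margin constant, and, in the unsupervised variant, obtain pseudo-labels from a Gaussian mixture fit on the embeddings. The center handling and loss details are assumptions based on the abstract, not the paper's code.

```python
import torch
from sklearn.mixture import GaussianMixture

def com_triplet_loss(embeddings, labels, centers):
    """Pull each embedding toward its own cluster center and past the nearest
    opposing center, with no margin constant (details are assumptions)."""
    d_own = (embeddings - centers[labels]).pow(2).sum(dim=1)
    d_all = torch.cdist(embeddings, centers).pow(2)        # squared distances (N, K)
    d_all.scatter_(1, labels.unsqueeze(1), float("inf"))   # mask out own center
    d_other = d_all.min(dim=1).values
    return torch.relu(d_own - d_other).mean()              # margin-free hinge

def gmm_pseudo_labels(embeddings, n_classes=2):
    """Unsupervised variant: pseudo-labels from a GMM fit on the embeddings."""
    z = embeddings.detach().cpu().numpy()
    return torch.as_tensor(GaussianMixture(n_components=n_classes).fit_predict(z))
```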
    Open Access
    Focal modulation based end-to-end multi-label classification for chest X-ray image classification
    (IEEE - Institute of Electrical and Electronics Engineers, 2023-08-28) Öztürk, Şaban; Çukur, Tolga
    Chest X-ray imaging is critically important for the effective diagnosis of chest diseases, whose incidence is increasing due to various environmental and hereditary factors. Although chest X-ray is the most commonly used modality for detecting pathological abnormalities, interpretation can be quite challenging for specialists due to the misleading locations and sizes of pathological abnormalities, visual similarities, and complex backgrounds. Traditional deep learning (DL) architectures fall short due to the relatively small areas of pathological abnormalities and the similarities between diseased and healthy regions. In addition, DL structures with standard classification approaches are not ideal for problems involving multiple diseases. To overcome these problems, background-independent feature maps are first created using a conventional convolutional neural network (CNN). Then, the relationships between objects in the feature maps are made suitable for multi-label classification tasks using the focal modulation network (FMA), an innovative attention module that is more effective than the self-attention approach. Experiments on a chest X-ray dataset containing both single and multiple labels for a total of 14 different diseases show that the proposed approach provides superior performance on multi-label datasets.
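    As a deliberately simplified illustration of the multi-label setup described above, the sketch below pairs a small CNN feature extractor with a placeholder context-mixing layer and 14 independent outputs; multi-label training then uses a per-label binary cross-entropy rather than a softmax. The architecture here is an assumption for illustration, not the paper's network.

```python
import torch
import torch.nn as nn

class MultiLabelCXR(nn.Module):
    """Illustrative pipeline: CNN features -> context mixing -> 14 disease logits."""
    def __init__(self, n_labels=14):
        super().__init__()
        self.backbone = nn.Sequential(                 # stand-in for the CNN stage
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.mixer = nn.Conv2d(64, 64, 3, padding=1)   # placeholder for the
                                                       # focal modulation stage
        self.head = nn.Linear(64, n_labels)

    def forward(self, x):
        f = self.mixer(self.backbone(x))
        return self.head(f.mean(dim=(2, 3)))           # pooled features -> logits

# Multi-label training scores each disease with an independent sigmoid,
# so the loss is binary cross-entropy per label rather than a softmax.
model = MultiLabelCXR()
logits = model(torch.randn(4, 1, 224, 224))            # (4, 14) raw logits
loss = nn.BCEWithLogitsLoss()(logits, torch.randint(0, 2, (4, 14)).float())
```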
    Open Access
    Focal modulation network for lung segmentation in chest X-ray images
    (2023-08-09) Öztürk, Şaban; Çukur, Tolga
    Segmentation of lung regions is of key importance for the automatic analysis of Chest X-Ray (CXR) images, which have a vital role in the detection of various pulmonary diseases. Precise identification of lung regions is the basic prerequisite for disease diagnosis and treatment planning. However, achieving precise lung segmentation poses significant challenges due to factors such as variations in anatomical shape and size, the presence of strong edges at the rib cage and clavicle, and overlapping anatomical structures resulting from diverse diseases. Although commonly considered as the de-facto standard in medical image segmentation, the convolutional UNet architecture and its variants fall short in addressing these challenges, primarily due to the limited ability to model long-range dependencies between image features. While vision transformers equipped with self-attention mechanisms excel at capturing long-range relationships, either a coarse-grained global self-attention or a fine-grained local self-attention is typically adopted for segmentation tasks on high-resolution images to alleviate quadratic computational cost at the expense of performance loss. This paper introduces a focal modulation UNet model (FMN-UNet) to enhance segmentation performance by effectively aggregating fine-grained local and coarse-grained global relations at a reasonable computational cost. FMN-UNet first encodes CXR images via a convolutional encoder to suppress background regions and extract latent feature maps at a relatively modest resolution. FMN-UNet then leverages global and local attention mechanisms to model contextual relationships across the images. These contextual feature maps are convolutionally decoded to produce segmentation masks. The segmentation performance of FMN-UNet is compared against state-of-the-art methods on three public CXR datasets (JSRT, Montgomery, and Shenzhen). Experiments in each dataset demonstrate the superior performance of FMN-UNet against baselines.
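    For readers unfamiliar with the operator, here is a simplified focal-modulation block in the spirit of the FocalNets formulation (Yang et al.) that FMN-UNet builds on: depthwise convolutions of growing kernel size gather local-to-global context, learned gates weight each scale, and the aggregated modulator multiplies a per-position query. Shapes and hyperparameters are illustrative, not FMN-UNet's exact configuration.

```python
import torch
import torch.nn as nn

class FocalModulation(nn.Module):
    """Simplified focal modulation: stacked depthwise convs collect context at
    growing scales, gates weight each scale, and the aggregated modulator
    multiplies a per-position query (shapes are illustrative)."""
    def __init__(self, dim, levels=3):
        super().__init__()
        self.dim, self.levels = dim, levels
        self.proj_in = nn.Conv2d(dim, 2 * dim + levels + 1, 1)  # query, context, gates
        self.focal = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(dim, dim, 3 + 2 * l, padding=(3 + 2 * l) // 2,
                          groups=dim, bias=False),
                nn.GELU())
            for l in range(levels))
        self.h = nn.Conv2d(dim, dim, 1)        # modulator projection
        self.proj_out = nn.Conv2d(dim, dim, 1)

    def forward(self, x):
        q, ctx, gates = torch.split(
            self.proj_in(x), [self.dim, self.dim, self.levels + 1], dim=1)
        agg = torch.zeros_like(ctx)
        for l, layer in enumerate(self.focal):           # fine-grained local context
            ctx = layer(ctx)
            agg = agg + ctx * gates[:, l:l + 1]
        glob = ctx.mean(dim=(2, 3), keepdim=True)        # coarse global context
        agg = agg + glob * gates[:, -1:]
        return self.proj_out(q * self.h(agg))            # modulated query
```

    Compared to self-attention, every operation above is a convolution or a pointwise product, which is why the cost grows linearly rather than quadratically with image size.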
    Embargo
    HydraViT: adaptive multi-branch transformer for multi-label disease classification from Chest X-ray images
    (Elsevier, 2024-09-30) Öztürk, Şaban; Turalı, Mehmet Yiğit; Çukur, Tolga
    Chest X-ray is an essential diagnostic tool in the identification of chest diseases given its high sensitivity to pathological abnormalities in the lungs. However, image-driven diagnosis is still challenging due to heterogeneity in size and location of pathology, as well as visual similarities and co-occurrence of separate pathology. Since disease-related regions often occupy a relatively small portion of diagnostic images, classification models based on traditional convolutional neural networks (CNNs) are adversely affected given their locality bias. While CNNs were previously augmented with attention maps or spatial masks to guide focus on potentially critical regions, learning localization guidance under heterogeneity in the spatial distribution of pathology is challenging. To improve multi-label classification performance, here we propose a novel method, HydraViT, that synergistically combines a transformer backbone with a multi-branch output module with learned weighting. The transformer backbone enhances sensitivity to long-range context in X-ray images, while using the self-attention mechanism to adaptively focus on task-critical regions. The multi-branch output module dedicates an independent branch to each disease label to attain robust learning across separate disease classes, along with an aggregated branch across labels to maintain sensitivity to co-occurrence relationships among pathology. Experiments demonstrate that, on average, HydraViT outperforms competing attention-guided methods by 1.9% AUC and 5.3% MAE, region-guided methods by 2.1% AUC and 8.3% MAE, and semantic-guided methods by 2.0% AUC and 6.5% MAE in multi-label classification performance.
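    The multi-branch output idea can be sketched as below: one small head per disease label, one shared head across all labels to capture co-occurrence, and learned per-label weights blending the two. Branch widths and the blending scheme are assumptions based on the abstract, not HydraViT's exact design.

```python
import torch
import torch.nn as nn

class MultiBranchHead(nn.Module):
    """One independent branch per disease plus one shared branch across labels,
    blended by learned per-label weights (widths and blending are assumptions)."""
    def __init__(self, feat_dim=768, n_labels=14):
        super().__init__()
        self.branches = nn.ModuleList(nn.Linear(feat_dim, 1) for _ in range(n_labels))
        self.aggregate = nn.Linear(feat_dim, n_labels)    # co-occurrence-aware branch
        self.alpha = nn.Parameter(torch.zeros(n_labels))  # learned blend weights

    def forward(self, feats):                    # feats: (B, feat_dim) from the ViT
        per_label = torch.cat([b(feats) for b in self.branches], dim=1)
        shared = self.aggregate(feats)
        w = torch.sigmoid(self.alpha)            # per-label mixing weight in (0, 1)
        return w * per_label + (1 - w) * shared  # (B, n_labels) logits
```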
    Open Access
    Parallel-stream fusion of scan-specific and scan-general priors for learning deep MRI reconstruction in low-data regimes
    (Elsevier, 2023-12) Dar, Salman Ul Hassan; Öztürk, Şaban; Özbey, Muzaffer; Oğuz, Kader Karlı; Çukur, Tolga
    Magnetic resonance imaging (MRI) is an essential diagnostic tool that suffers from prolonged scan times. Reconstruction methods can alleviate this limitation by recovering clinically usable images from accelerated acquisitions. In particular, learning-based methods promise performance leaps by employing deep neural networks as data-driven priors. A powerful approach uses scan-specific (SS) priors that leverage information regarding the underlying physical signal model for reconstruction. SS priors are learned on each individual test scan without the need for a training dataset, albeit they suffer from computationally burdensome inference with nonlinear networks. An alternative approach uses scan-general (SG) priors that instead leverage information regarding the latent features of MRI images for reconstruction. SG priors are frozen at test time for efficiency, albeit they require learning from a large training dataset. Here, we introduce a novel parallel-stream fusion model (PSFNet) that synergistically fuses SS and SG priors for performant MRI reconstruction in low-data regimes, while maintaining inference times competitive with SG methods. PSFNet implements its SG prior based on a nonlinear network, yet it forms its SS prior based on a linear network to maintain efficiency. A pervasive framework for combining multiple priors in MRI reconstruction is algorithmic unrolling with serially alternated projections, which causes error propagation under low-data regimes. To alleviate error propagation, PSFNet combines its SS and SG priors via a novel parallel-stream architecture with learnable fusion parameters. Demonstrations are performed on multi-coil brain MRI for varying amounts of training data. PSFNet outperforms SG methods in low-data regimes, and surpasses SS methods with only a few tens of training samples. On average across tasks, PSFNet achieves 3.1 dB higher PSNR, 2.8% higher SSIM, and 0.3× lower RMSE than baselines. Furthermore, in both supervised and unsupervised setups, PSFNet requires an order of magnitude fewer samples compared to SG methods, and enables an order of magnitude faster inference compared to SS methods. Thus, the proposed model improves deep MRI reconstruction with elevated learning and computational efficiency.
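    To ground the architectural contrast with serially alternated projections, here is a minimal single-coil sketch of one parallel-stream block: a linear scan-specific stream and a nonlinear scan-general stream act on the same input, a learnable scalar blends their outputs, and acquired k-space samples are re-imposed afterwards. The layer choices and the single-coil operator are assumptions for illustration, not PSFNet's implementation.

```python
import torch
import torch.nn as nn

class ParallelFusionBlock(nn.Module):
    """One unrolled block: linear SS stream and nonlinear SG stream run in
    parallel and are blended by a learnable weight, followed by data
    consistency (all details are illustrative)."""
    def __init__(self):
        super().__init__()
        self.ss_stream = nn.Conv2d(2, 2, 5, padding=2, bias=False)  # linear SS prior
        self.sg_stream = nn.Sequential(                             # nonlinear SG prior
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1))
        self.fuse = nn.Parameter(torch.tensor(0.5))                 # learnable fusion

    def forward(self, x, y, mask):
        # parallel streams instead of serially alternated projections
        x = self.fuse * self.ss_stream(x) + (1 - self.fuse) * self.sg_stream(x)
        # data consistency: re-impose the acquired k-space samples
        k = torch.fft.fft2(torch.complex(x[:, 0], x[:, 1]))
        k = torch.where(mask.bool(), y, k)
        img = torch.fft.ifft2(k)
        return torch.stack([img.real, img.imag], dim=1)  # (B, 2, H, W) real/imag
```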
