Browsing by Subject "Vision transformer"
Now showing 1 - 3 of 3
Item Embargo
HydraViT: adaptive multi-branch transformer for multi-label disease classification from Chest X-ray images (Elsevier, 2024-09-30) Öztürk, Şaban; Turalı, Mehmet Yiğit; Çukur, Tolga
Chest X-ray is an essential diagnostic tool for identifying chest diseases given its high sensitivity to pathological abnormalities in the lungs. However, image-driven diagnosis remains challenging due to heterogeneity in the size and location of pathology, as well as visual similarities and co-occurrence of separate pathologies. Since disease-related regions often occupy a relatively small portion of diagnostic images, classification models based on traditional convolutional neural networks (CNNs) are adversely affected by their locality bias. While CNNs have previously been augmented with attention maps or spatial masks to guide focus toward potentially critical regions, learning localization guidance under heterogeneity in the spatial distribution of pathology is difficult. To improve multi-label classification performance, we propose a novel method, HydraViT, that synergistically combines a transformer backbone with a multi-branch output module with learned weighting. The transformer backbone enhances sensitivity to long-range context in X-ray images, while its self-attention mechanism adaptively focuses on task-critical regions. The multi-branch output module dedicates an independent branch to each disease label to attain robust learning across separate disease classes, along with an aggregated branch across labels to maintain sensitivity to co-occurrence relationships among pathologies. Experiments demonstrate that, on average, HydraViT outperforms competing attention-guided methods by 1.9% AUC and 5.3% MAE, region-guided methods by 2.1% AUC and 8.3% MAE, and semantic-guided methods by 2.0% AUC and 6.5% MAE in multi-label classification performance.
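The abstract above describes a multi-branch output module with one branch per disease label plus an aggregated branch, combined through learned weighting. The sketch below is a minimal PyTorch illustration of that idea, not the HydraViT implementation: the backbone, feature dimension, and the per-label mixing parameter are assumptions made purely for illustration.

```python
# Minimal sketch of a multi-branch output head on a transformer backbone.
# NOT the HydraViT implementation; layer sizes and the weighting scheme
# are illustrative assumptions.
import torch
import torch.nn as nn


class MultiBranchHead(nn.Module):
    def __init__(self, feat_dim: int, num_labels: int):
        super().__init__()
        # One independent branch per disease label (a single binary logit each).
        self.label_branches = nn.ModuleList(
            [nn.Linear(feat_dim, 1) for _ in range(num_labels)]
        )
        # One aggregated branch predicting all labels jointly,
        # intended to retain co-occurrence information.
        self.aggregate_branch = nn.Linear(feat_dim, num_labels)
        # Learned per-label weights that mix the two predictions.
        self.mix = nn.Parameter(torch.zeros(num_labels))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, feat_dim) pooled output of a ViT-style backbone.
        per_label = torch.cat([b(features) for b in self.label_branches], dim=1)
        aggregated = self.aggregate_branch(features)
        w = torch.sigmoid(self.mix)                    # (num_labels,), in [0, 1]
        return w * per_label + (1.0 - w) * aggregated  # multi-label logits


# Usage: pooled features from any transformer backbone, e.g. shape (8, 768).
head = MultiBranchHead(feat_dim=768, num_labels=14)
logits = head(torch.randn(8, 768))   # (8, 14), suitable for BCEWithLogitsLoss
```

The point of the sketch is only the structure: independent per-label branches stabilize learning for each class, while the aggregated branch keeps a joint view of label co-occurrence, and the learned weights decide how much each view contributes per label.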
Item Open Access
Modeling spatial context in transformer-based whole slide image classification (2023-09) Erkan, Cihan
The common method for histopathology image classification is to sample small patches from the large whole slide images and make predictions based on aggregations of patch representations. Transformer models provide a promising alternative with their ability to capture long-range dependencies of patches and their potential to detect representative regions, thanks to their novel self-attention strategy. However, as sequence-based architectures, transformers are unable to directly capture the two-dimensional nature of images. Modeling the spatial context of an image for a transformer requires two steps: first, the patches of the image are ordered as a one-dimensional sequence; then, the order information is injected into the model. However, commonly used spatial context modeling methods cannot accurately capture the distribution of the patches because they are designed to work on images with a fixed size. We propose novel spatial context modeling methods that make the model aware of the spatial context of the patches, as neighboring patches usually form diagnostically relevant structures. We achieve this by generating sequences that preserve the locality of the patches and test the generated sequences with various information injection strategies. We evaluate the performance of the proposed transformer-based whole slide image classification framework on a lung dataset obtained from The Cancer Genome Atlas. Our experimental evaluations show that the proposed sequence generation method, which uses space-filling curves to model the spatial context, performs better than both baseline and state-of-the-art methods, achieving 87.6% accuracy.

Item Open Access
Space-filling curves for modeling spatial context in transformer-based whole slide image classification (SPIE, 2023-04-06) Erkan, Cihan; Aksoy, Selim
The common method for histopathology image classification is to sample small patches from large whole slide images and make predictions based on aggregations of patch representations. Transformer models provide a promising alternative with their ability to capture long-range dependencies of patches and their potential to detect representative regions, thanks to their novel self-attention strategy. However, as a sequence-based architecture, transformers are unable to directly capture the two-dimensional nature of images. While it is possible to work around this problem by converting an image into a sequence of patches in raster scan order, the basic transformer architecture is still insensitive to the locations of the patches in the image. The aim of this work is to make the model aware of the spatial context of the patches, as neighboring patches are likely to be part of the same diagnostically relevant structure. We propose a transformer-based whole slide image classification framework that uses space-filling curves to generate patch sequences that are adaptive to the variations in the shapes of the tissue structures. The goal is to preserve the locality of the patches so that neighboring patches in the one-dimensional sequence are closer to each other in the two-dimensional slide. We use positional encodings to capture the spatial arrangements of the patches in these sequences. Experiments using a lung cancer dataset obtained from The Cancer Genome Atlas show that the proposed sequence generation approach that best preserves the locality of the patches achieves 87.6% accuracy, which is higher than baseline models that use raster scan ordering (86.7% accuracy), no ordering (86.3% accuracy), and a model that uses convolutions to relate the neighboring patches (81.7% accuracy).
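Both of the studies above order whole-slide-image patches along a space-filling curve before feeding them to a transformer, so that patches adjacent in the one-dimensional sequence remain close on the two-dimensional slide. The sketch below is a minimal illustration of that idea using a standard Hilbert-curve index; the grid construction, function names, and the note on positional encodings are assumptions for illustration, not the authors' code.

```python
# Minimal sketch: order whole-slide-image patch coordinates along a Hilbert
# space-filling curve so that neighbors in the 1-D sequence stay close in 2-D.
# Illustrative only; grid setup and naming are assumptions.
import math


def _rot(n, x, y, rx, ry):
    """Rotate/reflect a quadrant (standard Hilbert-curve helper)."""
    if ry == 0:
        if rx == 1:
            x, y = n - 1 - x, n - 1 - y
        x, y = y, x
    return x, y


def hilbert_index(n, x, y):
    """Distance of grid cell (x, y) along a Hilbert curve on an n x n grid
    (n must be a power of two)."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        x, y = _rot(n, x, y, rx, ry)
        s //= 2
    return d


def order_patches(patch_coords):
    """Sort (col, row) patch grid positions into a locality-preserving order.

    patch_coords: list of (x, y) grid positions of the sampled tissue patches,
    which may cover an irregular region of the slide.
    """
    max_extent = max(max(x, y) for x, y in patch_coords) + 1
    n = 1 << math.ceil(math.log2(max_extent))   # smallest covering power-of-two grid
    return sorted(patch_coords, key=lambda c: hilbert_index(n, c[0], c[1]))


# Example: a small irregular set of tissue patch positions.
coords = [(0, 0), (3, 0), (1, 2), (2, 2), (0, 1), (3, 3)]
print(order_patches(coords))
```

The resulting sequence positions can then be paired with learned or sinusoidal positional encodings before being passed to the transformer, which is how the order information is injected into the model in the framework described above; a raster-scan baseline would simply sort by (row, col) instead.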