Browsing by Subject "Semantic segmentation"

Now showing 1 - 3 of 3
Open Access
    Deep learning for digital pathology
    (2020-11) Sarı, Can Taylan
Histopathological examination is today's gold standard for cancer diagnosis and grading. However, this task is time-consuming and prone to errors, as it requires an expert pathologist to visually inspect and interpret, in detail, a histopathological sample provided on a glass slide under a microscope. Low-cost, high-technology whole-slide digital scanners produced in recent years have eliminated the disadvantages of physical glass slides by digitizing histopathological samples and moving them to digital media. Digital pathology aims to alleviate the problems of traditional examination approaches by providing auxiliary computerized tools that quantitatively analyze digitized histopathological images. Traditional machine learning studies have proposed extracting handcrafted features from histopathological images and using these features in the design of a classification or segmentation algorithm. The performance of these methods relies mainly on the features they use, and thus their success strictly depends on how well these features quantify the histopathology domain. More recent studies have employed deep architectures to learn expressive and robust features directly from images, avoiding the complex feature extraction procedures of traditional approaches. Although deep learning methods perform well in many classification and segmentation problems, the convolutional neural networks they frequently use require annotated data for training, which makes it difficult to exploit the unannotated data that constitute the majority of the available data in the histopathology domain. This thesis addresses the challenges of both traditional and deep learning approaches by incorporating unsupervised learning into classification and segmentation algorithms, for feature extraction and training regularization purposes, in the histopathology domain.
As its first contribution, this thesis presents a new unsupervised feature extractor for effective representation and classification of histopathological tissue images. This study introduces a deep belief network to quantize salient subregions, identified with domain-specific prior knowledge, by extracting a set of features learned directly on image data in an unsupervised way, and uses the distribution of these quantizations for image representation and classification. As its second contribution, the thesis proposes a new regularization method to train a fully convolutional network for semantic tissue segmentation in histopathological images. This study relies on the benefit of unsupervised learning, in the form of image reconstruction, for network training. To this end, it defines a new embedding, generated by superimposing an input image on its segmentation map, that unites the main supervised task of semantic segmentation and an auxiliary unsupervised task of image reconstruction into a single task, and proposes to learn this united task with a generative adversarial network. We compare our classification and segmentation methods with traditional machine learning methods and state-of-the-art deep learning algorithms on various histopathological image datasets. Visual and quantitative results of our experiments demonstrate that the proposed methods are capable of learning robust features from histopathological images and provide more accurate results than their counterparts.
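The abstract does not specify how the input image is superimposed on its segmentation map to form the united embedding. One plausible reading, shown purely as an illustration, is to paint each class a distinct color and modulate it by image intensity, so that a single target carries both the segmentation labels and the image content. The function name and blending rule below are our assumptions, not the thesis's:

```python
import numpy as np

def superimpose_embedding(image, seg_map, class_colors):
    """Hypothetical sketch of a 'superimposed' embedding: each pixel takes
    the color of its predicted class, modulated by the image intensity, so
    one array encodes both the segmentation and the reconstruction target.
    image: (H, W) grayscale in [0, 1]; seg_map: (H, W) integer class labels;
    class_colors: (K, 3) one RGB color per class.
    """
    colored = class_colors[seg_map]          # (H, W, 3): per-pixel class color
    return colored * image[..., None]        # blend colors with image intensity
```

An embedding of this kind lets one generative adversarial network learn segmentation and image reconstruction jointly, since recovering the embedding requires getting both the class layout and the appearance right.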
Open Access
    Partial convolution for padding, inpainting, and image synthesis
    (IEEE, 2022-09-26) Liu, Guilin; Dündar, Ayşegül; Shih, Kevin J.; Wang, Ting-Chun; Reda, Fitsum A.; Sapra, Karan; Yu, Zhiding; Yang, Xiaodong; Tao, Andrew; Catanzaro, Bryan
Partial convolution weights convolutions with binary masks and renormalizes over valid pixels. It was originally proposed for the image inpainting task, because processing a corrupted image with a standard convolution often leads to artifacts. Binary masks are therefore constructed to define the valid and corrupted pixels, so that partial convolution results are calculated based on valid pixels only. It has also been used for the conditional image synthesis task, so that when a scene is generated, the convolution results of an instance depend only on the feature values that belong to the same instance. One of the unexplored applications of partial convolution is padding, which is a critical component of modern convolutional networks. Common padding schemes make strong assumptions about how the padded data should be extrapolated. We show that these padding schemes impair model accuracy, whereas partial convolution-based padding provides consistent improvements across a range of tasks. In this paper, we review partial convolution applications under one framework. We conduct a comprehensive study of partial convolution-based padding on a variety of computer vision tasks, including image classification, 3D-convolution-based action recognition, and semantic segmentation. Our results suggest that partial convolution-based padding shows promising improvements over strong baselines.
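The masking and renormalization described above can be sketched in a few lines: inputs are convolved only where the binary mask marks pixels valid, and each output is rescaled by the ratio of total kernel taps to valid taps. The snippet below is a minimal single-channel, loop-based illustration under our own naming; the published implementation is a vectorized multi-channel layer, and the treatment of all-invalid windows here is a simplifying choice:

```python
import numpy as np

def partial_conv2d(x, mask, kernel):
    """Single-channel partial convolution (valid padding), illustrative only.
    x: (H, W) image; mask: (H, W) binary, 1 = valid pixel; kernel: (kh, kw).
    Returns the renormalized output and the updated mask.
    """
    kh, kw = kernel.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    new_mask = np.zeros((oh, ow))
    n_taps = kh * kw                          # sum(1) in the renormalization
    for i in range(oh):
        for j in range(ow):
            m = mask[i:i + kh, j:j + kw]
            n_valid = m.sum()
            if n_valid > 0:
                # convolve over valid pixels only, rescale by sum(1)/sum(M)
                out[i, j] = (kernel * x[i:i + kh, j:j + kw] * m).sum() * n_taps / n_valid
                new_mask[i, j] = 1.0          # any valid tap -> output is valid
    return out, new_mask
```

With this renormalization, a constant image convolved with a box kernel gives the same output whether or not the mask has holes, which is the property that suppresses artifacts around corrupted regions.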
Open Access
    Segmentation of satellite SAR images using squeeze and attention based deep networks
    (2021-09) Khajei, Elmira
Automatic extraction of objects of interest from high-resolution satellite images has been an active research area. Numerous recent papers have investigated various deep learning-based semantic segmentation techniques for improved segmentation accuracy. Although the existing literature provides a wealth of information on land cover and land use (e.g., segmentation of structures, roads, and water areas), the majority of it has focused on segmentation of electro-optical (EO) images. A recent focus has been segmenting such objects of interest in Synthetic Aperture Radar (SAR) images to overcome the limitations of using the visible spectrum. While optical data taken at the visible spectrum is still widely preferred and used in many aerial applications, such applications typically need a clear sky and minimal cloud cover to function with high accuracy. SAR imaging is particularly useful as an alternative imaging technique that alleviates such visibility-related problems, for example when severe weather or cloud cover obscures conventional optical sensors. Recent segmentation techniques use multiple deep solutions based on U-Net. When recent attention-based developments in deep learning are combined with SAR image features, segmentation of objects of interest can be improved, especially under low-visibility conditions. In this thesis, a squeeze-and-attention-based network is proposed for semantic segmentation in satellite SAR images. In particular, we show how the squeeze-and-attention concept can be used within a U-Net-based architecture for segmenting objects of interest in remote sensing images and study its performance on multiple public datasets. Our experiments demonstrate that the proposed method yields superior results compared to multiple baseline networks on all the datasets used.
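The abstract does not spell out the squeeze-and-attention module itself, but its starting point, channel recalibration in the style of squeeze-and-excitation, can be sketched as follows. This is only an illustrative numpy version: the function name, weight shapes, and reduction ratio are our assumptions, not the thesis's architecture.

```python
import numpy as np

def squeeze_excite(feats, w1, w2):
    """Channel-attention ("squeeze-and-excitation"-style) recalibration sketch.
    feats: (C, H, W) feature map; w1: (C // r, C) and w2: (C, C // r) are the
    excitation weights for a reduction ratio r.
    """
    c = feats.shape[0]
    squeezed = feats.reshape(c, -1).mean(axis=1)   # squeeze: global average pool -> (C,)
    hidden = np.maximum(w1 @ squeezed, 0.0)        # excitation: FC + ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # FC + sigmoid -> per-channel gates in (0, 1)
    return feats * gates[:, None, None]            # rescale each channel by its gate
```

Embedding such gating inside a U-Net lets the decoder emphasize channels that respond to the objects of interest, which is the kind of selective weighting the thesis applies to SAR features.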

Bilkent University Library © 2015-2025 BUIR
