Browsing by Subject "Digital pathology"
Now showing 1 - 20 of 20
Item Open Access
Automated cancer stem cell recognition in H and E stained tissue using convolutional neural networks and color deconvolution (SPIE, 2017)
Aichinger, W.; Krappe, S.; Çetin, A. Enis; Çetin-Atalay, R.; Üner, A.; Benz, M.; Wittenberg, T.; Stamminger, M.; Münzenmayer, C.
The analysis and interpretation of histopathological samples and images is an important discipline in the diagnosis of various diseases, especially cancer. An important factor in prognosis and treatment, with the aim of precision medicine, is the determination of so-called cancer stem cells (CSC), which are known for their resistance to chemotherapeutic treatment and their involvement in tumor recurrence. Using immunohistochemistry with CSC markers like CD13, CD133, and others is one way to identify CSC. In our work we aim at identifying CSC presence on ubiquitous Hematoxylin and Eosin (HE) staining, based on their distinct morphological features, as an inexpensive tool for routine histopathology. We present initial results of a new method based on color deconvolution (CD) and convolutional neural networks (CNN). This method performs favorably (accuracy 0.936) in comparison with a state-of-the-art method based on 1DSIFT and eigen-analysis feature sets evaluated on the same image database. We also show that the accuracy of the CNN is improved by the CD pre-processing.

Item Open Access
Deep convolutional network for tumor bud detection (Bilkent University, 2019-04)
Koç, Soner
The existence of tumor buds is accepted as a promising biomarker for staging colorectal carcinomas. In the current practice of medicine, these tumor buds are detected by the manual examination of an immunohistochemically (IHC) stained tissue sample under a microscope. This manual examination is time-consuming and may lead to inter-observer variability. In order to obtain fast and reproducible examinations, developing computational solutions has become increasingly important.
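As an illustration of the color deconvolution pre-processing mentioned in the first abstract above, the following is a minimal Ruifrok-Johnston-style sketch; the stain vectors used here are commonly quoted reference values for H&E, not values taken from the paper, which does not state them in the abstract.

```python
import numpy as np

def color_deconvolution(rgb, stain_matrix=None):
    """Separate an RGB image (H, W, 3, values in 0..255) into stain channels."""
    if stain_matrix is None:
        # Rows: hematoxylin, eosin, residual; reference values, not the paper's.
        stain_matrix = np.array([
            [0.650, 0.704, 0.286],   # hematoxylin
            [0.072, 0.990, 0.105],   # eosin
            [0.268, 0.570, 0.776],   # residual
        ])
        stain_matrix /= np.linalg.norm(stain_matrix, axis=1, keepdims=True)
    # Convert to optical density: OD = -log10((I + 1) / 256), avoiding log(0).
    od = -np.log10((rgb.astype(np.float64) + 1.0) / 256.0)
    # Unmix: each pixel's OD is a linear mix of stain vectors, so the
    # per-stain concentrations are OD @ inv(M).
    concentrations = od.reshape(-1, 3) @ np.linalg.inv(stain_matrix)
    return concentrations.reshape(rgb.shape)
```

A pure white pixel has zero optical density and therefore zero concentration in every stain channel, which is a quick sanity check for the unmixing.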
With this motivation, this thesis presents, for the first time, a fully convolutional network design for automatic tumor bud detection. This network design extends the U-net architecture by incorporating up-to-date learning mechanisms. These mechanisms include using residual connections in the encoder path, employing both ELU and ReLU activation functions in different layers of the network, training the network with a Tversky loss function, and combining the outputs of different layers of the decoder path to reconstruct the final segmentation map. Our experiments on 3295 image tiles taken from 23 whole slide images of IHC stained colorectal carcinomatous samples show that this extended version helps alleviate the vanishing gradient problem and the problems associated with a highly class-imbalanced dataset. As a result, this network design yields better segmentation results compared with those of two state-of-the-art networks.

Item Open Access
Deep feature representations and multi-instance multi-label learning of whole slide breast histopathology images (Bilkent University, 2019-03)
Mercan, Caner
The examination of a tissue sample has traditionally involved a pathologist investigating the case under a microscope. Whole slide imaging technology has recently been utilized for the digitization of biopsy slides, replicating the microscopic examination procedure on the computer screen. This technology made it possible to scan the slides at very high resolutions, reaching up to 100,000 × 100,000 pixels. The advancements in imaging technology have allowed the development of automated tools that could help reduce the workload of pathologists during the diagnostic process by performing analysis on whole slide histopathology images. One of the challenges of whole slide image analysis is the ambiguity of the correspondence between the diagnostically relevant regions in a slide and the slide-level diagnostic labels in the pathology forms provided by the pathologists.
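The Tversky loss named in the tumor bud detection abstract above can be sketched as follows; the alpha/beta values are illustrative (the thesis abstract does not give them), and alpha = beta = 0.5 reduces the loss to the Dice loss.

```python
import numpy as np

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-7):
    """pred: foreground probabilities; target: binary labels (same shape).

    alpha weights false positives, beta weights false negatives, which is
    how the loss counters a highly class-imbalanced segmentation dataset.
    """
    pred = pred.ravel().astype(np.float64)
    target = target.ravel().astype(np.float64)
    tp = np.sum(pred * target)            # soft true positives
    fp = np.sum(pred * (1.0 - target))    # soft false positives
    fn = np.sum((1.0 - pred) * target)    # soft false negatives
    tversky_index = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return 1.0 - tversky_index
```

Setting beta > alpha penalizes missed foreground pixels more heavily, which is the usual choice when the foreground class (here, tumor buds) is rare.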
Another challenge is the lack of feature representation methods for the variable number of variable-sized regions of interest (ROIs) in breast histopathology images, as state-of-the-art deep convolutional networks can only operate on fixed-sized small patches, which may cause structural and contextual information loss. The last and arguably the most important challenge involves the clinical significance of breast histopathology, as the misdiagnosis or missed diagnosis of a case may lead to unnecessary surgery, radiation, or hormonal therapy. We address these challenges with the following contributions. The first contribution introduces the formulation of the whole slide breast histopathology image analysis problem as a multi-instance multi-label learning (MIMLL) task, where a slide corresponds to a bag that is associated with the slide-level diagnoses provided by the pathologists, and the ROIs inside the slide correspond to the instances in the bag. The second contribution involves a novel feature representation method for the variable number of variable-sized ROIs using the activations of deep convolutional networks. Our final contribution includes a more advanced MIMLL formulation that can simultaneously perform multi-class slide-level classification and ROI-level inference. Through quantitative and qualitative experiments, we show that the proposed MIMLL methods are capable of learning from only slide-level information for the multi-class classification of whole slide breast histopathology images, and the novel deep feature representations outperform traditional features in both fully supervised and weakly supervised settings.

Item Open Access
Deep feature representations for variable-sized regions of interest in breast histopathology (IEEE, 2021)
Mercan, Caner; Aygüneş, Bulut; Aksoy, Selim; Mercan, Ezgi; Shapiro, L. G.; Weaver, D. L.; Elmore, J. G.
Objective: Modeling variable-sized regions of interest (ROIs) in whole slide images using deep convolutional networks is a challenging task, as these networks typically require fixed-sized inputs that should contain sufficient structural and contextual information for classification. We propose a deep feature extraction framework that builds an ROI-level feature representation via weighted aggregation of the representations of a variable number of fixed-sized patches sampled from nuclei-dense regions in breast histopathology images. Methods: First, the initial patch-level feature representations are extracted from both fully-connected layer activations and pixel-level convolutional layer activations of a deep network, and the weights are obtained from the class predictions of the same network trained on patch samples. Then, the final patch-level feature representations are computed by concatenation of weighted instances of the extracted feature activations. Finally, the ROI-level representation is obtained by fusion of the patch-level representations by average pooling. Results: Experiments using a well-characterized data set of 240 slides containing 437 ROIs with variable sizes and shapes, marked by experienced pathologists, result in an accuracy score of 72.65% in classifying ROIs into four diagnostic categories that cover the whole histologic spectrum. Conclusion: The results show that the proposed feature representations are superior to existing approaches and provide accuracies that are higher than the average accuracy of another set of pathologists.
Significance: The proposed generic representation, which can be extracted from any type of deep convolutional architecture, combines the patch appearance information captured by the network activations with the diagnostic relevance predicted by the class-specific scoring of patches for effective modeling of variable-sized ROIs.

Item Open Access
Deep learning for digital pathology (Bilkent University, 2020-11)
Sarı, Can Taylan
Histopathological examination is today's gold standard for cancer diagnosis and grading. However, this task is time consuming and prone to errors, as it requires detailed visual inspection and interpretation of a histopathological sample provided on a glass slide under a microscope by an expert pathologist. Low-cost and high-technology whole slide digital scanners produced in recent years have eliminated the disadvantages of physical glass slide samples by digitizing histopathological samples and relocating them to digital media. Digital pathology aims at alleviating the problems of traditional examination approaches by providing auxiliary computerized tools that quantitatively analyze digitized histopathological images. Traditional machine learning studies have proposed to extract handcrafted features from histopathological images and to use these features in the design of a classification or a segmentation algorithm. The performance of these methods mainly relies on the features that they use, and thus, their success strictly depends on the ability of these features to successfully quantify the histopathology domain. More recent studies have employed deep architectures to learn expressive and robust features directly from images, avoiding the complex feature extraction procedures of traditional approaches.
Although deep learning methods perform well in many classification and segmentation problems, the convolutional neural networks that they frequently make use of require annotated data for training, which makes it difficult to utilize the unannotated data that constitute the majority of the available data in the histopathology domain. This thesis addresses the challenges of traditional and deep learning approaches by incorporating unsupervised learning into classification and segmentation algorithms for feature extraction and training regularization purposes in the histopathology domain. As its first contribution, this thesis presents a new unsupervised feature extractor for effective representation and classification of histopathological tissue images. This study introduces a deep belief network to quantize salient subregions, which are identified with domain-specific prior knowledge, by extracting a set of features directly learned on image data in an unsupervised way, and uses the distribution of these quantizations for image representation and classification. As its second contribution, it proposes a new regularization method to train a fully convolutional network for semantic tissue segmentation in histopathological images. This study relies on the benefit of unsupervised learning, in the form of image reconstruction, for network training. To this end, it defines a new embedding, generated by superimposing an input image on its segmentation map, that unites the main supervised task of semantic segmentation and an auxiliary unsupervised task of image reconstruction into a single task, and proposes to learn this united task with a generative adversarial network. We compare our classification and segmentation methods with traditional machine learning methods and state-of-the-art deep learning algorithms on various histopathological image datasets.
Visual and quantitative results of our experiments demonstrate that the proposed methods are capable of learning robust features from histopathological images and provide more accurate results than their counterparts.

Item Open Access
Detection and classification of cancer in whole slide breast histopathology images using deep convolutional networks (Elsevier, 2018)
Geçer, Barış; Aksoy, Selim; Mercan, E.; Shapiro, L. G.; Weaver, D. L.; Elmore, J. G.
Generalizability of algorithms for binary cancer vs. no cancer classification is unknown for clinically more significant multi-class scenarios where intermediate categories have different risk factors and treatment strategies. We present a system that classifies whole slide images (WSI) of breast biopsies into five diagnostic categories. First, a saliency detector that uses a pipeline of four fully convolutional networks, trained with samples from records of pathologists' screenings, performs multi-scale localization of diagnostically relevant regions of interest in WSI. Then, a convolutional network, trained from consensus-derived reference samples, classifies image patches as non-proliferative or proliferative changes, atypical ductal hyperplasia, ductal carcinoma in situ, and invasive carcinoma. Finally, the saliency and classification maps are fused for pixel-wise labeling and slide-level categorization. Experiments using 240 WSI showed that both the saliency detector and classifier networks performed better than competing algorithms, and the five-class slide-level accuracy of 55% was not statistically different from the predictions of 45 pathologists. We also present example visualizations of the learned representations for breast cancer diagnosis.

Item Open Access
From patch-level to ROI-level deep feature representations for breast histopathology classification (SPIE, 2019)
Mercan, Caner; Aksoy, Selim; Mercan, E.; Shapiro, L. G.; Weaver, D. L.; Elmore, J. G.; Tomaszewski, J. E.; Ward, A. D.
We propose a framework for learning feature representations for variable-sized regions of interest (ROIs) in breast histopathology images from the convolutional network properties at the patch level. The proposed method involves fine-tuning a pre-trained convolutional neural network (CNN) using small fixed-sized patches sampled from the ROIs. The CNN is then used to extract a convolutional feature vector for each patch. The softmax probabilities of a patch, also obtained from the CNN, are used as weights that are separately applied to the feature vector of the patch. The final feature representation of a patch is the concatenation of the class-probability weighted convolutional feature vectors. Finally, the feature representation of the ROI is computed by average pooling of the feature representations of its associated patches. The feature representation of the ROI contains local information from the feature representations of its patches while encoding cues from the class distribution of the patch classification outputs. The experiments show the discriminative power of this representation in a 4-class ROI-level classification task on breast histopathology slides, where our method achieved an accuracy of 66.8% on a data set containing 437 ROIs with different sizes.

Item Open Access
Graph convolutional networks for region of interest classification in breast histopathology (SPIE - International Society for Optical Engineering, 2021)
Aygüneş, Bulut; Aksoy, Selim; Cinbiş, R. G.; Kösemehmetoğlu, K.; Önder, S.; Üner, A.
Deep learning-based approaches have shown highly successful performance in the categorization of digitized biopsy samples. The commonly used setting in these approaches is to employ convolutional neural networks for classification of data sets consisting of images that all have the same size.
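The class-probability-weighted patch aggregation described in the two deep feature representation abstracts above can be sketched as follows; the array shapes and the use of plain NumPy are assumptions for illustration, not the papers' implementation.

```python
import numpy as np

def roi_representation(patch_features, patch_probs):
    """patch_features: (n_patches, d); patch_probs: (n_patches, n_classes).

    Builds one class-probability-weighted copy of each patch's feature
    vector per class, concatenates the copies, and average-pools over the
    ROI's patches to obtain a fixed-length ROI-level representation.
    """
    n, d = patch_features.shape
    n_classes = patch_probs.shape[1]
    # (n, n_classes, d): feature vector scaled by each class probability.
    weighted = patch_probs[:, :, None] * patch_features[:, None, :]
    patch_repr = weighted.reshape(n, n_classes * d)
    # ROI-level representation: average pooling over the ROI's patches,
    # so ROIs with different patch counts map to the same dimensionality.
    return patch_repr.mean(axis=0)
```

Because the output length is n_classes * d regardless of how many patches an ROI contains, variable-sized ROIs all map into the same feature space.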
However, the clinical practice in breast histopathology necessitates multi-class categorization of regions of interest (ROI) in biopsy samples, where these regions can have arbitrary shapes and sizes. The typical solution to this problem is to aggregate the classification results of fixed-sized patches cropped from these images to obtain image-level classification scores. Another limitation of these approaches is the independent processing of individual patches, whereby the rich contextual information in the complex tissue structures has not yet been sufficiently exploited. We propose a generic methodology to incorporate local inter-patch context through a graph convolutional network (GCN) that admits a graph-based ROI representation. The proposed GCN model aims to propagate information over neighboring patches in a progressive manner towards classifying the whole ROI into a diagnostic class. The experiments using a challenging data set for a 4-class ROI-level classification task, and comparisons with several baseline approaches, show that the proposed model, which incorporates spatial context by using graph convolutional layers, performs better than commonly used fusion rules.

Item Open Access
Local object patterns for representation and classification of colon tissue images (Institute of Electrical and Electronics Engineers, 2014-07)
Olgun, G.; Sokmensuer, C.; Gunduz Demir, C.
This paper presents a new approach for the effective representation and classification of images of histopathological colon tissues stained with hematoxylin and eosin. In this approach, we propose to decompose a tissue image into its histological components and introduce a set of new texture descriptors, which we call local object patterns, on these components to model their composition within a tissue. We define these descriptors using the idea of local binary patterns, which quantify a pixel by constructing a binary string based on the relative intensities of its neighbors.
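The information propagation over neighboring patches described in the GCN abstract above can be sketched as one graph convolution layer in the spirit of the standard Kipf-Welling formulation; the weight matrix is a placeholder, since the paper's exact architecture is not given in the abstract.

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """adj: (n, n) 0/1 patch adjacency; features: (n, d_in); weight: (d_in, d_out).

    Each node (patch) updates its features from its neighbors through the
    symmetrically normalized adjacency matrix, followed by a ReLU.
    """
    n = adj.shape[0]
    a_hat = adj + np.eye(n)                       # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt      # D^{-1/2} (A + I) D^{-1/2}
    return np.maximum(0.0, a_norm @ features @ weight)  # ReLU activation
```

Stacking several such layers lets information travel progressively farther across the patch graph, which is how context beyond a single patch's field of view reaches the ROI-level decision.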
However, as opposed to pixel-level local binary patterns, we define our local object pattern descriptors at the component level to quantify a component. To this end, we specify neighborhoods with different locality ranges and encode the spatial arrangements of the components within the specified local neighborhoods by generating strings. We then extract our texture descriptors from these strings to characterize histological components and construct the bag-of-words representation of an image from the characterized components. Working on microscopic images of colon tissues, our experiments reveal that the use of these component-level texture descriptors results in higher classification accuracies than previous textural approaches. © 2013 IEEE.

Item Open Access
Local object patterns for tissue image representation and cancer classification (Bilkent University, 2013)
Olgun, Gülden
Histopathological examination of a tissue is the routine practice for the diagnosis and grading of cancer. However, this examination is subjective, since it requires the visual interpretation of a pathologist, which mainly depends on his or her experience and expertise. In order to minimize the subjectivity level, it has been proposed to use automated cancer diagnosis and grading systems that represent a tissue image with quantitative features and use these features for classifying and grading the tissue. In this thesis, we present a new approach for effective representation and classification of histopathological tissue images. In this approach, we propose to decompose a tissue image into its histological components and introduce a set of new texture descriptors, which we call local object patterns, on these components to model their composition within a tissue. We define these descriptors using the idea of local binary patterns.
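The pixel-level local binary pattern that the component-level "local object pattern" descriptors generalize can be sketched as follows; the clockwise bit ordering is one common convention, chosen here for illustration.

```python
import numpy as np

def lbp_code(image, r, c):
    """8-neighbor local binary pattern code of pixel (r, c) in a 2D image.

    Each neighbor contributes a bit: 1 if its intensity is greater than or
    equal to the center pixel's intensity, 0 otherwise.
    """
    center = image[r, c]
    # Clockwise neighbor offsets starting from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if image[r + dr, c + dc] >= center:
            code |= 1 << bit
    return code
```

The local object patterns in the abstracts above apply this thresholding idea to histological components within neighborhoods of varying locality ranges, rather than to raw pixel intensities.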
However, as opposed to pixel-level local binary patterns, which quantify a pixel by constructing a binary string based on the relative intensities of its neighbors, we define our local object pattern descriptors at the component level to quantify a component. To this end, we specify neighborhoods with different locality ranges and encode the spatial arrangements of the components within the specified local neighborhoods by generating strings. We then extract our texture descriptors from these strings to characterize histological components and construct the bag-of-words representation of an image from the characterized components. In this thesis, we use two approaches for the selection of the components: the first approach uses all components to construct a bag-of-words representation, whereas the second one uses graph walking to select multiple subsets of the components and constructs multiple bag-of-words representations from these subsets. Working with microscopic images of histopathological colon tissues, our experiments show that the proposed component-level texture descriptors lead to higher classification accuracies than previous textural approaches.

Item Open Access
Localization of diagnostically relevant regions of interest in whole slide images: a comparative study (Springer New York LLC, 2016-08)
Mercan, E.; Aksoy, S.; Shapiro, L. G.; Weaver, D. L.; Brunyé, T. T.; Elmore, J. G.
Whole slide digital imaging technology enables researchers to study pathologists' interpretive behavior as they view digital slides and gain new understanding of the diagnostic medical decision-making process. In this study, we propose a simple yet important analysis to extract diagnostically relevant regions of interest (ROIs) from tracking records, using only pathologists' actions as they viewed biopsy specimens in the whole slide digital imaging format (zooming, panning, and fixating).
We use these extracted regions in a visual bag-of-words model based on color and texture features to predict diagnostically relevant ROIs on whole slide images. Using a logistic regression classifier in a cross-validation setting on 240 digital breast biopsy slides and the viewport tracking logs of three expert pathologists, we produce probability maps that show 74% overlap with the actual regions at which the pathologists looked. We compare different bag-of-words models by changing the dictionary size, the visual word definition (patches vs. superpixels), and the training data (automatically extracted ROIs vs. manually marked ROIs). This study is a first step in understanding the scanning behaviors of pathologists and the underlying reasons for diagnostic errors. © 2016, Society for Imaging Informatics in Medicine.

Item Open Access
Modeling spatial context in transformer-based whole slide image classification (Bilkent University, 2023-09)
Erkan, Cihan
The common method for histopathology image classification is to sample small patches from the large whole slide images and make predictions based on aggregations of patch representations. Transformer models provide a promising alternative with their ability to capture long-range dependencies of patches and their potential to detect representative regions, thanks to their novel self-attention strategy. However, as sequence-based architectures, transformers are unable to directly capture the two-dimensional nature of images. Modeling the spatial context of an image for a transformer requires two steps: first, the patches of the image are ordered as a one-dimensional sequence; then, the order information is injected into the model. However, commonly used spatial context modeling methods cannot accurately capture the distribution of the patches, as they are designed to work on images with a fixed size.
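One standard way to inject the order information mentioned in the transformer abstract above is the sinusoidal positional encoding; this is a generic sketch of that common technique, not the thesis's specific injection strategy, and the dimensions are illustrative.

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encoding matrix of shape (seq_len, d_model).

    Each position in the patch sequence gets a unique vector of sines and
    cosines at geometrically spaced frequencies; d_model is assumed even.
    """
    pos = np.arange(seq_len)[:, None].astype(np.float64)
    i = np.arange(d_model // 2)[None, :].astype(np.float64)
    angles = pos / np.power(10000.0, 2.0 * i / d_model)
    enc = np.zeros((seq_len, d_model))
    enc[:, 0::2] = np.sin(angles)   # even dimensions
    enc[:, 1::2] = np.cos(angles)   # odd dimensions
    return enc
```

The encoding is added to the patch embeddings before the first self-attention layer, so the otherwise order-insensitive transformer can distinguish where each patch sits in the sequence.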
We propose novel spatial context modeling methods in an effort to make the model aware of the spatial context of the patches, as neighboring patches usually form diagnostically relevant structures. We achieve this by generating sequences that preserve the locality of the patches. We test the generated sequences by utilizing various information injection strategies. We evaluate the performance of the proposed transformer-based whole slide image classification framework on a lung dataset obtained from The Cancer Genome Atlas. Our experimental evaluations show that the proposed sequence generation method, which utilizes space-filling curves to model the spatial context, performs better than both baseline and state-of-the-art methods by achieving 87.6% accuracy.

Item Open Access
Multi-instance multi-label learning for multi-class classification of whole slide breast histopathology images (Institute of Electrical and Electronics Engineers, 2018)
Mercan, C.; Aksoy, Selim; Mercan, E.; Shapiro, L. G.; Weaver, D. L.; Elmore, J. G.
Digital pathology has entered a new era with the availability of whole slide scanners that create high-resolution images of full biopsy slides. Consequently, the uncertainty regarding the correspondence between the image areas and the diagnostic labels assigned by pathologists at the slide level, and the need for identifying regions that belong to multiple classes with different clinical significance, have emerged as two new challenges. However, the generalizability of the state-of-the-art algorithms, whose accuracies were reported on carefully selected regions of interest (ROIs) for the binary benign versus cancer classification, to these multi-class learning and localization problems is currently unknown. This paper presents our potential solutions to these challenges by exploiting the viewing records of pathologists and their slide-level annotations in weakly supervised learning scenarios.
First, we extract candidate ROIs from the logs of pathologists' image screenings based on different behaviors, such as zooming, panning, and fixation. Then, we model each slide with a bag of instances represented by the candidate ROIs and a set of class labels extracted from the pathology forms. Finally, we use four different multi-instance multi-label learning algorithms for both slide-level and ROI-level predictions of diagnostic categories in whole slide breast histopathology images. Slide-level evaluation using 5-class and 14-class settings showed average precision values of up to 81% and 69%, respectively, under different weakly labeled learning scenarios. ROI-level predictions showed that the classifier could successfully perform multi-class localization and classification within whole slide images that were selected to include the full range of challenging diagnostic categories.

Item Open Access
Multi-instance multi-label learning for whole slide breast histopathology (International Society for Optical Engineering SPIE, 2016-02-03)
Mercan, Caner; Mercan, E.; Aksoy, Selim; Shapiro, L. G.; Weaver, D. L.; Elmore, J. G.
Digitization of full biopsy slides using whole slide imaging technology has provided new opportunities for understanding the diagnostic process of pathologists and developing more accurate computer-aided diagnosis systems. However, whole slide images also present two new challenges to image analysis algorithms. The first is the need for simultaneous localization and classification of malignant areas in these large images, as different parts of the image may have different levels of diagnostic relevance. The second challenge is the uncertainty regarding the correspondence between particular image areas and the diagnostic labels typically provided by pathologists at the slide level.
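The bag-of-instances modeling described in the multi-instance multi-label abstracts above can be sketched with a simple max-pooling baseline: each ROI instance in a slide's bag gets a per-label score, and the slide-level score for each label is the maximum over its instances. The linear scorer is a placeholder for illustration, not one of the four MIML algorithms used in the paper.

```python
import numpy as np

def slide_label_scores(instances, label_weights):
    """instances: (n_rois, d) ROI feature vectors forming one slide's bag;
    label_weights: (d, n_labels) placeholder linear scorer.

    Returns one score per label; max-pooling encodes the multi-instance
    assumption that a slide carries a label if at least one of its ROIs does.
    """
    instance_scores = instances @ label_weights   # (n_rois, n_labels)
    return instance_scores.max(axis=0)            # slide-level score per label
```

The same instance-level scores also support ROI-level inference: the argmax instance for a positive label points at the region most responsible for that diagnosis.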
In this paper, we exploit a data set that consists of the recorded actions of pathologists while they were interpreting whole slide images of breast biopsies to find solutions to these challenges. First, we extract candidate regions of interest (ROI) from the logs of pathologists' image screenings based on different actions corresponding to zoom events, panning motions, and fixations. Then, we model these ROIs using color and texture features. Next, we represent each slide as a bag of instances corresponding to the collection of candidate ROIs and a set of slide-level labels extracted from the forms that the pathologists filled out according to what they saw during their screenings. Finally, we build classifiers using five different multi-instance multi-label learning algorithms and evaluate their performances under different learning and validation scenarios involving various combinations of data from three expert pathologists. Experiments that compared the slide-level predictions of the classifiers with the reference data showed average precision values of up to 62% when the training and validation data came from the same individual pathologist's viewing logs, and an average precision of 64% was obtained when the candidate ROIs and the labels from all pathologists were combined for each slide. © 2016 SPIE.

Item Open Access
On the benefits of region of interest detection for whole slide image classification (SPIE, 2023-04-06)
Korkut, Sena; Erkan, Cihan; Aksoy, Selim; Tomaszewski, John E.; Ward, Aaron D.
Whole slide image (WSI) classification methods typically use fixed-size patches that are processed separately and aggregated for the final slide-level prediction. Image segmentation methods are designed to obtain a delineation of specific tissue types. These two tasks are usually studied independently. The aim of this work is to investigate the effect of region of interest (ROI) detection as a preliminary step for WSI classification.
First, we process each WSI using a pixel-level classifier that provides a binary segmentation mask for potentially important ROIs. We evaluate both single-resolution models that process each magnification independently and multi-resolution models that simultaneously incorporate contextual information and local details. Then, we compare the WSI classification performances of patch-based models when the patches used for both training and testing are extracted from the whole image and when they are sampled from only within the detected ROIs. The experiments, using a binary classification setting for breast histopathology slides as benign vs. malignant, show that the classifier that uses patches sampled from the whole image achieves an F1 score of 0.68, whereas the classifiers that use patches sampled from the ROI detection results produced by the single- and multi-resolution models obtain scores between 0.75 and 0.83.

Item Open Access
Self-supervised learning with graph neural networks for region of interest retrieval in histopathology (IEEE, 2021-05-05)
Özen, Yiğit; Aksoy, Selim; Kösemehmetoğlu, Kemal; Önder, Sevgen; Üner, Ayşegül
Deep learning has achieved successful performance in representation learning and content-based retrieval of histopathology images. The commonly used setting in deep learning-based approaches is supervised training of deep neural networks for classification, and using the trained model to extract representations that are used for computing and ranking the distances between images. However, two major challenges remain. First, supervised training of deep neural networks requires a large amount of manually labeled data, which is often limited in the medical field. Transfer learning has been used to overcome this challenge, but its success has remained limited. Second, the clinical practice in histopathology necessitates working with regions of interest (ROI) of multiple diagnostic classes with arbitrary shapes and sizes.
The typical solution to this problem is to aggregate the representations of fixed-sized patches cropped from these regions to obtain region-level representations. However, naive methods cannot sufficiently exploit the rich contextual information in the complex tissue structures. To tackle these two challenges, we propose a generic method that utilizes graph neural networks (GNN), combined with a self-supervised training method using a contrastive loss. GNNs enable representing arbitrarily shaped ROIs as graphs and encoding contextual information. Self-supervised contrastive learning improves the quality of the learned representations without requiring labeled data. Experiments using a challenging breast histopathology data set show that the proposed method achieves better performance than the state-of-the-art.

Item Open Access
Space-filling curves for modeling spatial context in transformer-based whole slide image classification (SPIE, 2023-04-06)
Erkan, Cihan; Aksoy, Selim
The common method for histopathology image classification is to sample small patches from large whole slide images and make predictions based on aggregations of patch representations. Transformer models provide a promising alternative with their ability to capture long-range dependencies of patches and their potential to detect representative regions, thanks to their novel self-attention strategy. However, as sequence-based architectures, transformers are unable to directly capture the two-dimensional nature of images. While it is possible to get around this problem by converting an image into a sequence of patches in raster scan order, the basic transformer architecture is still insensitive to the locations of the patches in the image. The aim of this work is to make the model aware of the spatial context of the patches, as neighboring patches are likely to be part of the same diagnostically relevant structure.
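The contrastive self-supervised training mentioned in the GNN retrieval abstract above can be sketched with an NT-Xent-style loss over two augmented views of the same ROIs; the temperature value is illustrative, and the paper's exact loss formulation is not given in the abstract.

```python
import numpy as np

def contrastive_loss(z1, z2, temperature=0.5):
    """z1, z2: (n, d) embeddings of two augmented views of the same n ROIs.

    Pulls each embedding toward its paired view and pushes it away from all
    other embeddings in the batch, without using any labels.
    """
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarities
    sim = z @ z.T / temperature
    n2 = z.shape[0]
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    # The positive of sample i is its other view: index (i + n) mod 2n.
    pos = np.roll(np.arange(n2), n2 // 2)
    log_prob = sim[np.arange(n2), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()
```

Minimizing this loss makes representations of the same ROI under different augmentations agree, which is how the method improves representation quality without labeled data.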
We propose a transformer-based whole slide image classification framework that uses space-filling curves to generate patch sequences that are adaptive to the variations in the shapes of the tissue structures. The goal is to preserve the locality of the patches so that patches that are neighbors in the one-dimensional sequence are also close to each other in the two-dimensional slide. We use positional encodings to capture the spatial arrangements of the patches in these sequences. Experiments using a lung cancer dataset obtained from The Cancer Genome Atlas show that the proposed sequence generation approach that best preserves the locality of the patches achieves 87.6% accuracy, which is higher than baseline models that use raster scan ordering (86.7% accuracy), no ordering (86.3% accuracy), and a model that uses convolutions to relate the neighboring patches (81.7% accuracy).
Item Open Access Two-tier tissue decomposition for histopathological image representation and classification(Institute of Electrical and Electronics Engineers, 2015) Gultekin, T.; Koyuncu, C. F.; Sokmensuer, C.; Gunduz Demir, C.
In digital pathology, devising effective image representations is crucial to design robust automated diagnosis systems. To this end, many studies have proposed to develop object-based representations, instead of directly using image pixels, since a histopathological image may contain a considerable amount of noise, typically at the pixel level. These previous studies mostly employ color information to define their objects, which approximately represent histological tissue components in an image, and then use the spatial distribution of these objects for image representation and classification. Thus, the object definition has a direct effect on how the image is represented, which in turn affects classification accuracy. In this paper, our aim is to design a classification system for histopathological images.
Towards this end, we present a new model for effective representation of these images that will be used by the classification system. The contributions of this model are twofold. First, it introduces a new two-tier tissue decomposition method for defining a set of multityped objects in an image. Different from previous studies, these objects are defined by combining texture, shape, and size information, and they may correspond to individual histological tissue components as well as local tissue subregions of different characteristics. As its second contribution, it defines a new metric, which we call dominant blob scale, to characterize the shape and size of an object with a single scalar value. Our experiments on colon tissue images reveal that this new object definition and characterization provides a distinguishing representation of normal and cancerous histopathological images, which is effective in obtaining more accurate classification results compared to its counterparts.
Item Open Access Unsupervised feature extraction via deep learning for histopathological classification of colon tissue images(Institute of Electrical and Electronics Engineers, 2019) Sarı, Can Taylan; Gündüz-Demir, Çiğdem
Histopathological examination is today’s gold standard for cancer diagnosis. However, this task is time-consuming and prone to errors, as it requires a detailed visual inspection and interpretation by a pathologist. Digital pathology aims at alleviating these problems by providing computerized methods that quantitatively analyze digitized histopathological tissue images. The performance of these methods mainly relies on the features they use, and thus, their success strictly depends on the ability of these features to successfully quantify the histopathology domain. With this motivation, this paper presents a new unsupervised feature extractor for effective representation and classification of histopathological tissue images.
This feature extractor has three main contributions. First, it proposes to identify salient subregions in an image, based on domain-specific prior knowledge, and to quantify the image by employing only the characteristics of these subregions instead of considering the characteristics of all image locations. Second, it introduces a new deep learning based technique that quantizes the salient subregions by extracting a set of features directly learned on image data and uses the distribution of these quantizations for image representation and classification. To this end, the proposed deep learning based technique constructs a deep belief network of restricted Boltzmann machines (RBMs), defines the activation values of the hidden unit nodes in the final RBM as the features, and learns the quantizations by clustering these features in an unsupervised way. Third, this extractor is the first example of successfully using restricted Boltzmann machines in the domain of histopathological image analysis. Our experiments on microscopic colon tissue images reveal that the proposed feature extractor is effective in obtaining more accurate classification results compared to its counterparts.
Item Open Access Weakly supervised approaches for image classification in remote sensing and medical image analysis(Bilkent University, 2020-12) Aygüneş, Bulut
Weakly supervised learning (WSL) aims to utilize data with imprecise or noisy annotations to solve various learning problems. We study WSL approaches in two different domains: remote sensing and medical image analysis. For remote sensing, we focus on the multisource fine-grained object recognition problem that aims to classify an object into one of many similar subcategories. The task we work on involves images where an object with a given class label is present in the image without any knowledge of its exact location.
We approach this problem from a WSL perspective and propose a method using a single-source deep instance attention model with parallel branches for joint localization and classification of objects. We then extend this model into a multisource setting where a reference source assumed to have no location uncertainty is used to aid the fusion of multiple sources. We show that all four proposed fusion strategies, which operate at the probability level, logit level, feature level, and pixel level, provide higher accuracies than the state-of-the-art. We also provide an in-depth comparison by evaluating each model at various parameter complexity settings, where the increased model capacity results in a further improvement over the default capacity setting. For medical image analysis, we study breast cancer classification on regions of interest (ROI) of arbitrary shapes and sizes from breast biopsy whole slides. The typical solution to this problem is to aggregate the classification results of fixed-sized patches cropped from ROIs to obtain image-level classification scores. We first propose a generic methodology to incorporate local inter-patch context through a graph convolutional network (GCN) that aims to propagate information over neighboring patches in a progressive manner towards classifying the whole ROI. The experiments using a challenging data set for a 3-class ROI-level classification task and comparisons with several baseline approaches show that the proposed model that incorporates the spatial context performs better than commonly used fusion rules. Second, we revisit the WSL framework used in our remote sensing experiments and apply it to a 4-class ROI classification problem.
We propose a new training methodology tailored for this WSL task that combines the patches and labels from pairs of ROIs to exploit the instance attention model’s capability to learn from samples with multiple labels, which results in superior performance over several baselines.
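The pair-based training idea in the last abstract can be illustrated with a small sketch: patches from two ROIs are merged into a single multi-label bag, and attention-based pooling turns the patch features into one ROI-level representation. This is a minimal NumPy sketch under assumed shapes; the function names (`pair_rois`, `attention_pool`) and dimensions are illustrative, not the thesis's actual implementation.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(patch_feats, w_attn):
    """Attention-based MIL pooling: score each patch, then return the
    attention-weighted mean of patch features as the bag representation."""
    scores = patch_feats @ w_attn        # one scalar score per patch
    alpha = softmax(scores)              # attention weights sum to 1
    return alpha @ patch_feats           # (feat_dim,) bag-level feature

def pair_rois(patches_a, labels_a, patches_b, labels_b):
    """Merge the patches and labels of two ROIs into one multi-label bag,
    mimicking the pair-based training setup described above."""
    bag = np.vstack([patches_a, patches_b])
    labels = sorted(set(labels_a) | set(labels_b))
    return bag, labels

rng = np.random.default_rng(0)
roi1 = rng.normal(size=(8, 16))          # 8 patches with 16-d features
roi2 = rng.normal(size=(5, 16))          # 5 patches from a second ROI
bag, labels = pair_rois(roi1, [1], roi2, [3])
w = rng.normal(size=16)                  # toy attention weight vector
roi_feat = attention_pool(bag, w)
print(bag.shape, labels, roi_feat.shape)  # → (13, 16) [1, 3] (16,)
```

Because the merged bag carries the union of the two ROIs' labels, a multi-label attention model trained on such bags must learn to attend to the patches relevant to each label, which is the capability the abstract's training methodology exploits.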