Browsing by Subject "Image classification"
Now showing 1 - 13 of 13
Item Open Access
Alignment of uncalibrated images for multi-view classification (IEEE, 2011)
Arık, Sercan Ömer; Vural, E.; Frossard, P.

Efficient solutions for the classification of multi-view images can be built on graph-based algorithms when little information is known about the scene or cameras. Such methods typically require a pair-wise similarity measure between images, where a common choice is the Euclidean distance. However, the accuracy of the Euclidean distance as a similarity measure is restricted to cases where images are captured from nearby viewpoints. In settings with large transformations and viewpoint changes, alignment of images is necessary prior to distance computation. We propose a method for the registration of uncalibrated images that capture the same 3D scene or object. We model the depth map of the scene as an algebraic surface, which yields a warp model in the form of a rational function between image pairs. The warp model is computed by minimizing the registration error, where the registered image is a weighted combination of two images generated with two different warp functions estimated from feature matches and image intensity functions in order to provide robust registration. We demonstrate the flexibility of our alignment method by experimentation on several wide-baseline image pairs with arbitrary scene geometries and texture levels. Moreover, the results on multi-view image classification suggest that the proposed alignment method can be effectively used in graph-based classification algorithms for the computation of pairwise distances, where it achieves significant improvements over distance computation without prior alignment. © 2011 IEEE.

Item Open Access
Classification of histopathological cancer stem cell images in H&E stained liver tissues (Bilkent University, 2016-03)
Akbaş, Cem Emre

Microscopic images are an essential part of the cancer diagnosis process in modern medicine.
However, diagnosing tissues under a microscope is a time-consuming task for pathologists, and there is significant variation in pathologists' decisions on tissue labeling. In this study, we developed a computer-aided diagnosis (CAD) system that classifies and grades H&E stained liver tissue images for pathologists in order to speed up the cancer diagnosis process. The system is designed for H&E stained tissues because H&E is cheaper than the conventional CD13 stain. The first step is labeling the tissue images for classification purposes. CD13 stained tissue images are used to construct ground truth labels, because in H&E stained tissues cancer stem cells (CSC) cannot be observed by the naked eye. Feature extraction is the next step. Since CSCs cannot be observed by the naked eye in H&E stained tissues, we need to extract distinguishing texture features. For this purpose, 20 features are chosen from nine different color spaces. These features are fed into a modified version of the Principal Component Analysis (PCA) algorithm, which is proposed in this thesis. This algorithm takes covariance matrices of the feature matrices of images, instead of raw pixel values, as input. Images are compared in the eigenspace and classified according to the angle between them. It is experimentally shown that this algorithm can achieve 76.0% image classification accuracy in H&E stained liver tissues for a three-class classification problem. Scale invariant feature transform (SIFT), local binary patterns (LBP) and directional feature extraction algorithms are also utilized to classify and grade H&E stained liver tissues. It is observed in the experiments that these features do not provide meaningful information to grade H&E stained liver tissue images. Since our aim is to speed up the cancer diagnosis process, computationally efficient versions of the proposed modified PCA algorithm are also proposed.
Multiplication-free cosine-like similarity measures are employed in the modified PCA algorithm, and it is shown that some versions of the multiplication-free similarity measure based modified PCA algorithm produce better classification accuracies than the standard modified PCA algorithm. One of the proposed multiplication-free similarity measures achieves 76.0% classification accuracy on our dataset containing 454 images of three classes.

Item Open Access
Graph walks for classification of histopathological images (IEEE, 2013)
Olgun, Gülden; Sokmensuer, C.; Gündüz-Demir, Çiğdem

This paper reports a new structural approach for automated classification of histopathological tissue images. It has two main contributions. First, unlike previous structural approaches that use a single graph to represent a tissue image, it proposes to obtain a set of subgraphs through graph walking and use these subgraphs to represent the image. Second, it proposes to characterize subgraphs directly by the distribution of their edges, instead of employing conventional global graph features, and to use these characterizations in classification. Our experiments on colon tissue images reveal that the proposed structural approach is effective in obtaining high accuracies in tissue image classification. © 2013 IEEE.

Item Open Access
Image classification with energy efficient Hadamard neural networks (Bilkent University, 2018-01)
Deveci, Tuba Ceren

Deep learning has made significant improvements in many image processing tasks in recent years, such as image classification, object recognition and object detection. Convolutional neural networks (CNNs), a popular deep learning architecture designed to process data in multiple array form, show great success in almost all detection and recognition problems and computer vision tasks. However, the number of parameters in a CNN is so high that computers require more energy and larger memory.
To address this problem, we investigate energy efficient network models based on the CNN architecture. In addition to previously studied energy efficient models such as the Binary Weight Network (BWN), we introduce novel energy efficient models. The Hadamard-transformed Image Network (HIN) is a variation of BWN that uses compressed Hadamard-transformed images as input. The Binary Weight and Hadamard-transformed Image Network (BWHIN) is developed by combining BWN and HIN into a new energy efficient model. Performances of the neural networks with different parameters and different CNN architectures are compared and analyzed on the MNIST and CIFAR-10 datasets. It is observed that energy efficiency is achieved with a slight sacrifice in classification accuracy. Among all energy efficient networks, our novel ensemble model outperforms the other energy efficient models.

Item Open Access
Image mining using directional spatial constraints (Institute of Electrical and Electronics Engineers, 2010-01)
Aksoy, S.; Cinbiş, R. G.

Spatial information plays a fundamental role in building high-level content models for supporting analysts' interpretations and automating geospatial intelligence. We describe a framework for modeling directional spatial relationships among objects and using this information for contextual classification and retrieval. The proposed model first identifies image areas that have a high degree of satisfaction of a spatial relation with respect to several reference objects. Then, this information is incorporated into the Bayesian decision rule as spatial priors for contextual classification. The model also supports dynamic queries by using directional relationships as spatial constraints to enable object detection based on the properties of individual objects as well as their spatial relationships to other objects.
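The "degree of satisfaction of a spatial relation" described above can be sketched as a fuzzy directional landscape: each pixel receives the best angular alignment between its displacement from any reference-object pixel and the relation's direction vector. This is a simplified illustration of that idea, not the paper's exact formulation; the function name and the cosine-based alignment score are assumptions.

```python
import numpy as np

def directional_degree(ref_mask, direction=(0.0, 1.0)):
    """Per-pixel degree of being 'in direction d of' the reference object.

    For each pixel, take the best alignment (cosine of the angle, floored
    at zero) between the displacement from any reference pixel and the
    direction vector. Simplified sketch of a fuzzy directional landscape.
    """
    h, w = ref_mask.shape
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    rows, cols = np.mgrid[0:h, 0:w]
    degree = np.zeros((h, w))
    for y, x in zip(*np.nonzero(ref_mask)):
        dy, dx = rows - y, cols - x
        norm = np.hypot(dy, dx)
        norm[norm == 0] = 1.0  # avoid 0/0 at the reference pixel (cos is 0 there)
        cos = (dy * d[0] + dx * d[1]) / norm
        degree = np.maximum(degree, np.clip(cos, 0.0, 1.0))
    degree[ref_mask.astype(bool)] = 0.0  # the object itself is not 'in direction d of' itself
    return degree
```

Such a degree map could then serve as a spatial prior multiplied into a per-class posterior, in the spirit of the Bayesian decision rule mentioned above.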
Comparative experiments using high-resolution satellite imagery illustrate the flexibility and effectiveness of the proposed framework in image mining, with significant improvements in both classification and retrieval performance.

Item Open Access
Land cover classification with multi-sensor fusion of partly missing data (American Society for Photogrammetry and Remote Sensing, 2009-05)
Aksoy, S.; Koperski, K.; Tusk, C.; Marchisio, G.

We describe a system that uses decision tree-based tools for seamless acquisition of knowledge for classification of remotely sensed imagery. We concentrate on three important problems in this process: information fusion, model understandability, and handling of missing data. The importance of multi-sensor information fusion and the use of decision tree classifiers for such problems have been well studied in the literature. However, these studies have been limited to cases where all data sources have full coverage of the scene under consideration. Our contribution in this paper is to show how decision tree classifiers can be learned with alternative (surrogate) decision nodes, resulting in models that are capable of dealing with missing data during both training and classification, to handle cases where one or more measurements do not exist for some locations. We present a detailed performance evaluation regarding the effectiveness of these classifiers for information fusion and feature selection, and study three different methods for handling missing data in comparative experiments. The results show that surrogate decisions incorporated into decision tree classifiers provide powerful models for fusing information from different data layers while being robust to missing data. © 2009 American Society for Photogrammetry and Remote Sensing.

Item Open Access
Microscopic image classification via WT-based covariance descriptors using Kullback-Leibler distance (IEEE, 2012)
Keskin, Furkan; Çetin, A. Enis; Erşahin, Tülin; Çetin-Atalay, Rengül

In this paper, we present a novel method for classification of cancer cell line images using complex wavelet-based region covariance matrix descriptors. Microscopic images containing irregular carcinoma cell patterns are represented by randomly selected subwindows which possibly correspond to foreground pixels. For each subwindow, a new region descriptor utilizing the dual-tree complex wavelet transform coefficients as pixel features is computed. The wavelet transform is preferred as a feature extraction tool primarily because of its ability to characterize singularities at multiple orientations, which often arise in carcinoma cell lines, and its approximate shift invariance property. We propose new dissimilarity measures between covariance matrices based on Kullback-Leibler (KL) divergence and the L2-norm, which turn out to be as successful as the classical KL divergence, but with much less computational complexity. Experimental results demonstrate the effectiveness of the proposed image classification framework. The proposed algorithm outperforms the recently published eigenvalue-based Bayesian classification method. © 2012 IEEE.

Item Open Access
Multi-instance multi-label learning for whole slide breast histopathology (International Society for Optical Engineering SPIE, 2016-02-03)
Mercan, Caner; Mercan, E.; Aksoy, Selim; Shapiro, L. G.; Weaver, D. L.; Elmore, J. G.

Digitization of full biopsy slides using whole slide imaging technology has provided new opportunities for understanding the diagnostic process of pathologists and developing more accurate computer aided diagnosis systems. However, whole slide images also pose two new challenges to image analysis algorithms. The first is the need for simultaneous localization and classification of malignant areas in these large images, as different parts of the image may have different levels of diagnostic relevance.
The second challenge is the uncertainty regarding the correspondence between particular image areas and the diagnostic labels typically provided by the pathologists at the slide level. In this paper, we exploit a data set that consists of recorded actions of pathologists while they were interpreting whole slide images of breast biopsies to find solutions to these challenges. First, we extract candidate regions of interest (ROI) from the logs of the pathologists' image screenings based on different actions corresponding to zoom events, panning motions, and fixations. Then, we model these ROIs using color and texture features. Next, we represent each slide as a bag of instances corresponding to the collection of candidate ROIs, together with a set of slide-level labels extracted from the forms that the pathologists filled out according to what they saw during their screenings. Finally, we build classifiers using five different multi-instance multi-label learning algorithms, and evaluate their performances under different learning and validation scenarios involving various combinations of data from three expert pathologists. Experiments that compared the slide-level predictions of the classifiers with the reference data showed average precision values up to 62% when the training and validation data came from the same individual pathologist's viewing logs, and an average precision of 64% was obtained when the candidate ROIs and the labels from all pathologists were combined for each slide. © 2016 SPIE.

Item Open Access
Multi-resolution segmentation and shape analysis for remote sensing image classification (IEEE, 2005-06)
Aksoy, Selim; Akçay, H. Gökhan

We present an approach for classification of remotely sensed imagery using spatial information extracted from multi-resolution approximations. The wavelet transform is used to obtain multiple representations of an image at different resolutions to capture the different details inherently found in different structures.
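Multi-resolution approximations of the kind used above can be illustrated with a minimal Haar-style sketch, where each level is a 2x2 block average of the previous one; a stand-in for a full wavelet decomposition, with the function name an assumption.

```python
import numpy as np

def haar_approximations(image, levels=3):
    """Successive Haar-style approximations of an image, each half the size.

    Minimal sketch: each level is the 2x2 block average of the previous one,
    i.e. the approximation band of a Haar wavelet transform.
    """
    approx = [np.asarray(image, dtype=float)]
    for _ in range(levels):
        a = approx[-1]
        h, w = a.shape[0] // 2 * 2, a.shape[1] // 2 * 2  # crop to even size
        a = a[:h, :w]
        a = (a[0::2, 0::2] + a[0::2, 1::2] + a[1::2, 0::2] + a[1::2, 1::2]) / 4.0
        approx.append(a)
    return approx
```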
Then, pixels at each resolution are grouped into contiguous regions using clustering and mathematical morphology-based segmentation algorithms. The resulting regions are modeled using statistical summaries of their spectral, textural and shape properties. These models are used to cluster the regions, and the cluster memberships assigned to each region at multiple resolution levels are used to classify the corresponding pixels into land cover/land use categories. Final classification is done using decision tree classifiers. Experiments with two ground truth data sets show the effectiveness of the proposed approach over traditional techniques that do not make strong use of region-based spatial information. © 2005 IEEE.

Item Open Access
A multiplication-free framework for signal processing and applications in biomedical image analysis (IEEE, 2013)
Suhre, A.; Keskin, F.; Erşahin, T.; Çetin-Atalay, R.; Ansari, R.; Çetin, A. E.

A new framework for signal processing is introduced based on a novel vector product definition that permits a multiplier-free implementation. First, a new product of two real numbers is defined as the sum of their absolute values, with the sign determined by the product of the hard-limited numbers. This new product of real numbers is used to define a similar product of vectors in R^N. The new vector product of two identical vectors reduces to a scaled version of the l1 norm of the vector. The main advantage of this framework is that it yields multiplication-free, computationally efficient algorithms for performing some important tasks in signal processing. An application to the problem of cancer cell line image classification is presented that uses the notion of a co-difference matrix, which is analogous to a covariance matrix except that the vector products are based on our newly proposed framework. Results show the effectiveness of this approach when the proposed co-difference matrix is compared with a covariance matrix.
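The scalar and vector products described in this abstract can be sketched directly. The treatment of a zero operand (the product annihilates to zero) is an assumption not spelled out in the abstract; note that the self-product of a vector comes out to twice its l1 norm, matching the "scaled l1 norm" claim above.

```python
def mf_product(a, b):
    """Multiplication-free 'product' of two reals: sign(a)*sign(b)*(|a|+|b|).

    Zero operands annihilate the result (an assumed convention).
    """
    if a == 0 or b == 0:
        return 0.0
    sign = 1.0 if (a > 0) == (b > 0) else -1.0
    return sign * (abs(a) + abs(b))

def mf_dot(u, v):
    """Multiplication-free vector product: sum of component-wise mf_product.

    For u == v this equals 2 * sum(|u_i|), a scaled l1 norm.
    """
    return sum(mf_product(a, b) for a, b in zip(u, v))
```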
© 2013 IEEE.

Item Open Access
Scene classification using bag-of-regions representations (IEEE, 2007-06)
Gökalp, Demir; Aksoy, Selim

This paper describes our work on the classification of outdoor scenes. First, images are partitioned into regions using one-class classification and patch-based clustering algorithms, where the one-class classifiers model regions with relatively uniform color and texture properties, and the clustering of patches aims to detect structures in the remaining regions. Next, the resulting regions are clustered to obtain a codebook of region types, and two models are constructed for scene representation: a "bag of individual regions" representation where each region is regarded separately, and a "bag of region pairs" representation where regions with particular spatial relationships are considered together. Given these representations, scene classification is done using Bayesian classifiers. We also propose a novel region selection algorithm that identifies region types that are frequently found in a particular class of scenes but rarely exist in other classes, and that also consistently occur together in the same class of scenes. Experiments on the LabelMe data set showed that the proposed models significantly outperform a baseline global feature-based approach. © 2007 IEEE.

Item Open Access
Target detection and classification in SAR images using region covariance and co-difference (SPIE, 2009-04)
Duman, Kaan; Eryıldırım, Abdulkadir; Çetin, A. Enis

In this paper, a novel descriptive feature parameter extraction method for synthetic aperture radar (SAR) images is proposed. The new approach is based on the region covariance (RC) method, which involves the computation of a covariance matrix whose entries are used in target detection and classification. In addition, the region co-difference matrix is also introduced. Experimental results for object detection on the MSTAR (moving and stationary target recognition) database are presented.
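A minimal sketch of the region covariance and co-difference descriptors, assuming (as in the multiplication-free framework item above) that the co-difference matrix replaces the products in the covariance computation with the multiplication-free product; function names are illustrative.

```python
import numpy as np

def mf_product(a, b):
    # Multiplication-free product, elementwise on arrays:
    # sign(a)*sign(b)*(|a|+|b|), and 0 whenever either operand is 0
    # (np.sign(0) == 0 handles the zero case).
    return np.sign(a) * np.sign(b) * (np.abs(a) + np.abs(b))

def region_covariance(F):
    """Covariance descriptor of an (n_pixels x d) region feature matrix."""
    return np.cov(F, rowvar=False)

def region_codifference(F):
    """Co-difference descriptor: covariance with ordinary products replaced
    by the multiplication-free product (a sketch, not the paper's exact code)."""
    D = F - F.mean(axis=0)
    n, d = D.shape
    C = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            C[i, j] = mf_product(D[:, i], D[:, j]).sum()
    return C / (n - 1)
```

Both descriptors are small d x d symmetric matrices, so they can be compared with the distance metrics mentioned in the abstract regardless of region size.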
The RC and region co-difference methods deliver high detection accuracy and low false alarm rates. It is also experimentally observed that these methods produce better results than the commonly used principal component analysis (PCA) method when they are used with the different distance metrics introduced. © 2009 SPIE.

Item Open Access
Two-tier tissue decomposition for histopathological image representation and classification (Institute of Electrical and Electronics Engineers, 2015)
Gultekin, T.; Koyuncu, C. F.; Sokmensuer, C.; Gunduz-Demir, C.

In digital pathology, devising effective image representations is crucial to designing robust automated diagnosis systems. To this end, many studies have proposed to develop object-based representations, instead of directly using image pixels, since a histopathological image may contain a considerable amount of noise, typically at the pixel level. These previous studies mostly employ color information to define their objects, which approximately represent histological tissue components in an image, and then use the spatial distribution of these objects for image representation and classification. Thus, object definition has a direct effect on the way the image is represented, which in turn affects classification accuracies. In this paper, our aim is to design a classification system for histopathological images. Towards this end, we present a new model for the effective representation of these images that will be used by the classification system. The contributions of this model are twofold. First, it introduces a new two-tier tissue decomposition method for defining a set of multityped objects in an image. Different from previous studies, these objects are defined by combining texture, shape, and size information, and they may correspond to individual histological tissue components as well as local tissue subregions of different characteristics.
As its second contribution, it defines a new metric, which we call dominant blob scale, to characterize the shape and size of an object with a single scalar value. Our experiments on colon tissue images reveal that this new object definition and characterization provide a distinguishing representation of normal and cancerous histopathological images, which is effective in obtaining more accurate classification results compared to its counterparts.