Browsing by Subject "Scene classification"
Now showing 1 - 7 of 7
Item Open Access ConceptFusion: A flexible scene classification framework (Springer, 2015-03-04) Saraç, Mustafa İlker; İşcen, Ahmet; Gölge, Eren; Duygulu, Pınar
We introduce ConceptFusion, a method that aims at high accuracy in categorizing a large number of scenes while keeping the model relatively simple and efficient for scalability. The proposed method combines the advantages of both low-level representations and high-level semantic categories, and eliminates the distinctions between different levels through the definition of concepts. The framework encodes the perspectives brought by different concepts by considering them in concept groups that are ensembled for the final decision. Experiments carried out on benchmark datasets show the effectiveness of incorporating concepts at different levels with different perspectives. © Springer International Publishing Switzerland 2015.

Item Open Access ConceptMap: mining noisy web data for concept learning (Springer, 2014-09) Gölge, Eren; Duygulu, Pınar
We attack the problem of learning concepts automatically from noisy Web image search results. The idea is based on discovering common characteristics shared among subsets of images by posing a method that is able to organise the data while eliminating irrelevant instances. We propose a novel clustering and outlier detection method, namely Concept Map (CMAP). Given an image collection returned for a concept query, CMAP provides clusters pruned of outliers. Each cluster is used to train a model representing a different characteristic of the concept. The proposed method outperforms state-of-the-art studies on the task of learning from noisy web data for low-level attributes, as well as high-level object categories. It is also competitive with supervised methods in learning scene concepts. Moreover, results on naming faces support the generalisation capability of the CMAP framework to different domains.
CMAP is capable of working at large scale with no supervision by exploiting the available sources. © 2014 Springer International Publishing.

Item Open Access The effect of task on cue usefulness for visual scene classification (Bilkent University, 2017-05) Karaca, Meltem
Detecting objects in the environment is one of the most fundamental functions of the visual system. Humans are highly effective at this, and past studies have shown that we can process information such as whether or not an animal is present in a scene within 150 msec. Different lines of research have also examined possible cues that may be useful for rapid object detection and scene classification, and have found color, luminance, shape, and texture to be diagnostic. Studies examining the degree to which different cues are effective for detecting objects have found that shape and texture are the most important. However, it is unclear whether cue effectiveness depends on the task being employed; the discriminative information contained in different cues may vary with the task. This master's thesis examines the effects of task-relevant information on which cues are most useful for visual detection. To investigate the impact of task type on visual cue usefulness, participants were asked to perform animal and water detection tasks. They were presented with natural scenes that contain animals or water. We found significant differences in cue usefulness depending on the task, with corresponding differences in reaction times for the different cues. The results indicate that the effectiveness of visual cues depends on the nature of the task, and different cues may be more or less useful when individuals are instructed to perform different kinds of tasks.

Item Open Access Mining web images for concept learning (Bilkent University, 2014-08) Golge, Eren
We attack the problem of learning concepts automatically from noisy Web image search results.
The idea is based on discovering common characteristics shared among category images by posing two novel methods that are able to organise the data while eliminating irrelevant instances. We propose a novel clustering and outlier detection method, namely Concept Map (CMAP). Given an image collection returned for a concept query, CMAP provides clusters pruned of outliers. Each cluster is used to train a model representing a different characteristic of the concept. The second method is Association through Model Evolution (AME). It prunes the data in an iterative manner, progressively finding a better set of images using an evaluation score computed at each iteration. The idea is based on capturing the discriminativeness and representativeness of each instance against a large number of random images and eliminating the outliers. The final model is used for classification of novel images. These two methods are applied to different benchmark problems, and we observed comparable or better results relative to state-of-the-art methods.

Item Open Access Nearest-neighbor based metric functions for indoor scene recognition (Academic Press, 2011) Cakir, F.; Güdükbay, Uğur; Ulusoy, Özgür
Indoor scene recognition is a challenging problem in the classical scene recognition domain due to the severe intra-class variations and inter-class similarities of man-made indoor structures. State-of-the-art scene recognition techniques, such as capturing holistic representations of an image, demonstrate low performance on indoor scenes. Other methods that introduce intermediate steps, such as identifying objects and associating them with scenes, face the difficulty of localizing and recognizing the objects in a highly cluttered and sophisticated environment.
We propose a classification method that can handle such difficulties of the problem domain by employing a metric function based on the nearest-neighbor classification procedure using the bag-of-visual-words scheme, the so-called codebooks. Considering the codebook construction as a Voronoi tessellation of the feature space, we have observed that, given an image, a learned weighted distance of the extracted feature vectors to the centers of the Voronoi cells gives a strong indication of the image's category. Our method outperforms state-of-the-art approaches on an indoor scene recognition benchmark and achieves competitive results on a general scene dataset, using a single type of descriptor. © 2011 Elsevier Inc. All rights reserved.

Item Open Access Scene classification with random forests and object and color distributions (IEEE, 2013) İşcen, Ahmet; Gölge, Eren; Armağan, Anıl; Duygulu, Pınar
We propose a method to recognize the scene of an image by finding the objects and the colors it contains. We approach this problem by creating a binary vector of detected objects and a histogram of the colors that the image contains. We then use these features to train a random forest classifier to determine the scene of each image. For class-based classifiers, our method gives results comparable with state-of-the-art methods, such as the Object Bank method, on the indoor scene dataset that we used. Additionally, while well-known methods are computationally expensive, our method has a low computational cost. © 2013 IEEE.

Item Open Access Semantic scene classification for content-based image retrieval (Bilkent University, 2008) Çavuş, Özge
Content-based image indexing and retrieval have become important research problems with the use of large databases in a wide range of areas. Because of the constantly increasing complexity of image content, low-level features are no longer sufficient for image content representation.
In this study, a content-based image retrieval framework based on scene classification for image indexing is proposed. First, the images are segmented into regions using their color and line structure information. Line structure helps capture regions that do not consist of uniform colors, such as man-made structures. After all regions are clustered, each image is represented by a histogram of the region types it contains. Both multi-class and one-class classification models are used with these histograms to obtain the probability of observing different semantic classes in each image. Since the single class with the highest probability is not sufficient to model image content in an unconstrained dataset with a large number of semantically overlapping classes, the obtained probability values are used as a new representation of the images, and retrieval is performed on these new representations. To minimize the semantic gap, a relevance feedback approach based on support vector data description is also incorporated. Experiments are performed on both the Corel and TRECVID datasets, and successful results are obtained.
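The retrieval scheme described in the last abstract, representing each image by class probabilities rather than a single best class, can be sketched as follows. This is a minimal illustration, not the thesis's actual models: the function names are hypothetical, and a nearest-centroid scorer over region-type histograms stands in for the multi-class and one-class classifiers used in the study.

```python
import numpy as np

def class_probabilities(hist, centroids):
    """Map a region-type histogram to a probability vector over semantic
    classes via distances to per-class centroid histograms (a stand-in
    for the learned classification models)."""
    d = np.linalg.norm(centroids - hist, axis=1)
    scores = np.exp(-d)              # closer centroid -> higher score
    return scores / scores.sum()

def retrieve(query_hist, db_hists, centroids, k=3):
    """Rank database images by cosine similarity of their class-probability
    representations to the query's, instead of by a single predicted class."""
    q = class_probabilities(query_hist, centroids)
    db = np.array([class_probabilities(h, centroids) for h in db_hists])
    sims = db @ q / (np.linalg.norm(db, axis=1) * np.linalg.norm(q))
    return np.argsort(-sims)[:k]     # indices of the top-k matches

# Toy example: 3 semantic classes, 4-bin region-type histograms.
rng = np.random.default_rng(0)
centroids = rng.random((3, 4))
db_hists = [rng.random(4) for _ in range(10)]
top = retrieve(db_hists[2], db_hists, centroids)
```

Since the query here is itself a database image, it ranks first in `top`; in the thesis, retrieval in this probability space is further refined by relevance feedback.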