Show simple item record

dc.contributor.advisor: Aksoy, Selim
dc.contributor.author: Özdemir, Bahadır
dc.date.accessioned: 2016-01-08T18:14:35Z
dc.date.available: 2016-01-08T18:14:35Z
dc.date.issued: 2010
dc.identifier.uri: http://hdl.handle.net/11693/15173
dc.description: Ankara : The Department of Computer Engineering and the Institute of Engineering and Science of Bilkent University, 2010. [en_US]
dc.description: Thesis (Master's) -- Bilkent University, 2010. [en_US]
dc.description: Includes bibliographical references (leaves 91-96). [en_US]
dc.description.abstract: The need for intelligent systems capable of automatic content extraction and classification in remote sensing image datasets has been constantly increasing due to advances in satellite technology and the availability of detailed images with wide coverage of the Earth. Increasing detail in very high spatial resolution images obtained from new-generation sensors has enabled new applications but also introduced new challenges for object recognition. Contextual information about image structures has the potential to improve individual object detection. Therefore, identifying image regions that are intrinsically heterogeneous is an alternative way of achieving high-level understanding of image content. These regions, also known as compound structures, are comprised of primitive objects of many diverse types. Popular representations such as the bag-of-words model use primitive object parts extracted using local operators but cannot capture their structure because of the lack of spatial information. Hence, the detection of compound structures necessitates new image representations that involve joint modeling of spectral, spatial and structural information. We propose an image representation that combines the representational power of graphs with the efficiency of the bag-of-words representation. The proposed method has three parts. In the first part, every image in the dataset is transformed into a graph structure using the local image features and their spatial relationships. The transformation method first detects the local patches of interest using maximally stable extremal regions obtained by gray-level thresholding. Next, these patches are quantized to form a codebook of local information, and a graph is constructed for each image by representing the patches as graph nodes and connecting them with edges obtained using Voronoi tessellations.
Transforming images to graphs provides a level of abstraction, and the remaining operations for classification are performed on graphs. The second part of the proposed method is a graph mining algorithm that finds a set of the most important subgraphs for the classification of image graphs. The graph mining algorithm we propose first finds the frequent subgraphs for each class; then selects the most discriminative ones by quantifying the correlations between the subgraphs and the classes in terms of the within-class occurrence distributions of the subgraphs; and finally reduces the set size by selecting the most representative ones, considering the redundancy between the subgraphs. After mining the set of subgraphs, each image graph is represented by a histogram vector over this set, where each component of the histogram stores the number of occurrences of a particular subgraph in the image. The subgraph histogram representation enables classifying the image graphs using statistical classifiers. The last part of the method involves model learning from labeled data. We use support vector machines (SVM) for classifying images into semantic scene types. In addition, the themes distributed among the images are discovered using the latent Dirichlet allocation (LDA) model trained on the same data. In this way, images with heterogeneous content drawn from different scene types can be represented in terms of a theme distribution vector. This representation enables further classification of images by theme analysis. The experiments using an Ikonos image of Antalya show the effectiveness of the proposed representation in the classification of complex scene types. The SVM model achieved promising classification accuracy on the images cut from the Antalya image for the eight high-level semantic classes. Furthermore, the LDA model discovered interesting themes in the whole satellite image. [en_US]
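The graph-construction step described in the abstract (patches as nodes, edges between Voronoi neighbors) can be illustrated with a minimal sketch. This is not the thesis code: the centroids and codebook labels are synthetic placeholders, and the Voronoi neighborhood is obtained through its dual, the Delaunay triangulation of the patch centroids.

```python
# Illustrative sketch of the image-to-graph transformation described above.
# Nodes carry codebook labels of quantized patches; edges connect patches
# whose Voronoi cells share a boundary, i.e. Delaunay neighbors.
# All inputs here are synthetic placeholders, not thesis data.
import numpy as np
from scipy.spatial import Delaunay

def build_patch_graph(centroids, labels):
    """Return (nodes, edges) for the patch graph.

    centroids : (n, 2) array of patch centers in image coordinates.
    labels    : length-n sequence of codebook (codeword) indices.
    Edges are undirected pairs (a, b) with a < b, linking Voronoi neighbors.
    """
    tri = Delaunay(centroids)
    edges = set()
    for simplex in tri.simplices:  # each triangle contributes three edges
        for i in range(3):
            a, b = sorted((int(simplex[i]), int(simplex[(i + 1) % 3])))
            edges.add((a, b))
    return list(labels), sorted(edges)

# Synthetic example: five patch centroids with codebook labels 0..2.
rng = np.random.default_rng(0)
centroids = rng.random((5, 2))
labels = [0, 1, 2, 1, 0]
nodes, edges = build_patch_graph(centroids, labels)
```

Once every image is a labeled graph of this form, the subsequent steps (frequent subgraph mining, subgraph-occurrence histograms, SVM classification) operate on the graphs rather than on raw pixels.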
dc.description.statementofresponsibility: Özdemir, Bahadır [en_US]
dc.format.extent: xvii, 96 leaves, illustrations [en_US]
dc.language.iso: English [en_US]
dc.rights: info:eu-repo/semantics/openAccess [en_US]
dc.subject: Graph-based scene analysis [en_US]
dc.subject: Graph mining [en_US]
dc.subject: Scene understanding [en_US]
dc.subject: Remote sensing image analysis [en_US]
dc.subject.lcc: TA1632 .O93 2010 [en_US]
dc.subject.lcsh: Remote sensing--Data processing. [en_US]
dc.subject.lcsh: Image processing. [en_US]
dc.subject.lcsh: Computer vision. [en_US]
dc.subject.lcsh: Pattern recognition systems. [en_US]
dc.subject.lcsh: Computer graphics--Data processing. [en_US]
dc.title: Structural scene analysis of remotely sensed images using graph mining [en_US]
dc.type: Thesis [en_US]
dc.department: Department of Computer Engineering [en_US]
dc.publisher: Bilkent University [en_US]
dc.description.degree: M.S. [en_US]
dc.identifier.itemid: B122247

