Learning Bayesian classifiers for scene classification with a visual grammar

dc.citation.epage: 218
dc.citation.spage: 212
dc.contributor.author: Aksoy, Selim
dc.contributor.author: Koperski, K.
dc.contributor.author: Tusk, C.
dc.contributor.author: Marchisio, G.
dc.contributor.author: Tilton, J. C.
dc.coverage.spatial: Greenbelt, MD, USA
dc.date.accessioned: 2016-02-08T11:51:58Z
dc.date.available: 2016-02-08T11:51:58Z
dc.date.issued: 2005
dc.department: Department of Computer Engineering
dc.description: Conference name: IEEE Workshop on Advances in Techniques for Analysis of Remotely Sensed Data, 2003
dc.description: Date of Conference: 27-28 Oct. 2003
dc.description.abstract: A challenging problem in image content extraction and classification is building a system that automatically learns high-level semantic interpretations of images. We describe a Bayesian framework for a visual grammar that aims to reduce the gap between low-level features and high-level user semantics. Our approach includes modeling image pixels using automatic fusion of their spectral, textural, and other ancillary attributes; segmentation of image regions using an iterative split-and-merge algorithm; and representing scenes by decomposing them into prototype regions and modeling the interactions between these regions in terms of their spatial relationships. Naive Bayes classifiers are used in the learning of models for region segmentation and classification using positive and negative examples for user-defined semantic land cover labels. The system also automatically learns representative region groups that can distinguish different scenes and builds visual grammar models. Experiments using Landsat scenes show that the visual grammar enables creation of high-level classes that cannot be modeled by individual pixels or regions. Furthermore, learning of the classifiers requires only a few training examples. © 2005 IEEE.
dc.description.provenance: Made available in DSpace on 2016-02-08T11:51:58Z (GMT). No. of bitstreams: 1. bilkent-research-paper.pdf: 70227 bytes, checksum: 26e812c6f5156f83f0e77b261a471b5a (MD5). Previous issue date: 2005.
dc.identifier.doi: 10.1109/WARSD.2003.1295195
dc.identifier.isbn: 0-7803-8350-8
dc.identifier.uri: http://hdl.handle.net/11693/27387
dc.language.iso: English
dc.publisher: IEEE
dc.relation.isversionof: http://dx.doi.org/10.1109/WARSD.2003.1295195
dc.source.title: Advances in Techniques for Analysis of Remotely Sensed Data
dc.subject: Bayesian methods
dc.subject: Layout
dc.subject: Prototypes
dc.subject: NASA
dc.subject: Remote sensing
dc.subject: Image analysis
dc.subject: Pixel
dc.subject: Image segmentation
dc.subject: Image retrieval
dc.subject: Postal services
dc.title: Learning Bayesian classifiers for scene classification with a visual grammar
dc.type: Conference Paper

Files

Original bundle
Name: Learning_Bayesian_classifiers_for_a_visual_grammar.pdf
Size: 1.33 MB
Format: Adobe Portable Document Format