
Learning Bayesian classifiers for scene classification with a visual grammar

Author(s): Aksoy, Selim; Koperski, K.; Tusk, C.; Marchisio, G.; Tilton, J. C.
Date: 2005-03
Source Title: IEEE Transactions on Geoscience and Remote Sensing
Print ISSN: 0196-2892
Electronic ISSN: 1558-0644
Publisher: IEEE
Volume: 43
Issue: 3
Pages: 581 - 589
Language: English
Type: Article
Item Usage Stats: 182 views, 139 downloads
      Abstract
      A challenging problem in image content extraction and classification is building a system that automatically learns high-level semantic interpretations of images. We describe a Bayesian framework for a visual grammar that aims to reduce the gap between low-level features and high-level user semantics. Our approach includes modeling image pixels using automatic fusion of their spectral, textural, and other ancillary attributes; segmentation of image regions using an iterative split-and-merge algorithm; and representing scenes by decomposing them into prototype regions and modeling the interactions between these regions in terms of their spatial relationships. Naive Bayes classifiers are used in the learning of models for region segmentation and classification using positive and negative examples for user-defined semantic land cover labels. The system also automatically learns representative region groups that can distinguish different scenes and builds visual grammar models. Experiments using Landsat scenes show that the visual grammar enables creation of high-level classes that cannot be modeled by individual pixels or regions. Furthermore, learning of the classifiers requires only a few training examples.
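The region classification step described in the abstract, learning naive Bayes models from positive and negative examples of user-defined land cover labels, can be illustrated with a short sketch. The code below is not the paper's implementation; the Gaussian feature model, the two-dimensional region features, and the "water" label are assumptions made purely for illustration.

```python
import numpy as np

class GaussianNaiveBayes:
    """Per-class Gaussian model with a conditional independence assumption over features."""

    def fit(self, X, y):
        # Estimate per-class feature means, variances, and class priors.
        self.classes_ = np.unique(y)
        self.mean_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.var_ = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes_])
        self.prior_ = np.array([np.mean(y == c) for c in self.classes_])
        return self

    def predict(self, X):
        # Log-likelihood of each region under each class, summed over features.
        diff = X[:, None, :] - self.mean_[None, :, :]
        log_lik = -0.5 * (np.log(2.0 * np.pi * self.var_)[None, :, :]
                          + diff ** 2 / self.var_[None, :, :]).sum(axis=2)
        log_post = log_lik + np.log(self.prior_)[None, :]
        return self.classes_[np.argmax(log_post, axis=1)]

# Toy usage with hypothetical 2-D region features:
# label 1 = positive examples of a "water" class, label 0 = everything else.
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0.0, 1.0, (20, 2)), rng.normal(3.0, 1.0, (20, 2))])
y_train = np.array([0] * 20 + [1] * 20)
model = GaussianNaiveBayes().fit(X_train, y_train)
print(model.predict(np.array([[2.8, 3.1], [0.2, -0.5]])))  # expected: [1 0]
```

In practice the feature vectors would come from the fused spectral, textural, and ancillary attributes of the segmented regions mentioned in the abstract; the few-example training regime relies on the strong independence assumption keeping the number of estimated parameters small.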
Keywords: Bayesian methods; Layout; Pixel; Image segmentation; Remote sensing; Image retrieval; Content based retrieval; Remote monitoring; Satellites; NASA
Permalink: http://hdl.handle.net/11693/52677
Published Version (Please cite this version): https://doi.org/10.1109/TGRS.2004.839547
Collections: Department of Computer Engineering
