Spatial techniques for image classification
Signal and image processing for remote sensing
The amount of image data received from satellites is constantly increasing. For example, nearly 3 terabytes of data are sent to Earth by NASA's satellites every day. Advances in satellite technology and computing power have enabled the study of multi-modal, multi-spectral, multi-resolution, and multi-temporal data sets for applications such as urban land-use monitoring and management, GIS and mapping, environmental change, site suitability, and agricultural and ecological studies. Automatic content extraction, classification, and content-based retrieval have become highly desired goals for developing intelligent systems for effective and efficient processing of remotely sensed data sets. There is extensive literature on classification of remotely sensed imagery using parametric or nonparametric statistical or structural techniques with many different features. Most previous approaches try to solve the content extraction problem by building pixel-based classification and retrieval models using spectral and textural features. However, a recent study that investigated classification accuracies reported over the last 15 years showed no significant improvement in the performance of classification methodologies over this period. The reason is the large semantic gap between the low-level features used for classification and the high-level expectations and scenarios required by the users. This semantic gap makes a human expert's involvement and interpretation in the final analysis inevitable, which in turn makes processing of data in large remote-sensing archives practically impossible. Therefore, practical accessibility of large remotely sensed data archives is currently limited to queries on geographical coordinates, time of acquisition, sensor type, and acquisition mode.
The commonly used statistical classifiers model image content using distributions of pixels in spectral or other feature domains, assuming that similar land-cover and land-use structures cluster together and behave similarly in these feature spaces. However, the assumed distribution models often do not hold for different kinds of data. Even when nonlinear tools such as neural networks or multi-classifier systems are used, purely pixel-based data often fall short of expectations. Spatial information is an important element of understanding an image because complex land structures usually contain many pixels with different feature characteristics. Remote-sensing experts also use spatial information to interpret land cover because pixels alone do not give much information about image content. Image segmentation techniques automatically group neighboring pixels into contiguous regions based on similarity criteria on the pixels' properties. Even though image segmentation has been heavily studied in the image processing and computer vision fields, and despite early efforts that use spatial information for classification of remotely sensed imagery, segmentation algorithms have only recently started receiving emphasis in remote-sensing image analysis. Examples of image segmentation in the remote-sensing literature include region growing and Markov random field models for segmentation of natural scenes, hierarchical segmentation for image mining, region growing for object-level change detection and fuzzy rule-based classification, and boundary delineation of agricultural fields. We model spatial information by segmenting images into spatially contiguous regions and classifying these regions according to the statistics of their spectral and textural properties and their shape features.
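To make the region-growing idea concrete, the following is a minimal sketch (not any of the cited algorithms) of how neighboring pixels can be grouped into a contiguous region by a similarity criterion on their intensities; the function name, the 4-connectivity choice, and the mean-based threshold are illustrative assumptions:

```python
import numpy as np
from collections import deque

def region_growing(image, seed, threshold=10.0):
    """Grow a contiguous region from `seed` by absorbing 4-connected
    neighbors whose intensity is within `threshold` of the current
    region mean. Returns a boolean mask of the grown region.
    (Illustrative sketch; real remote-sensing segmenters use richer
    spectral/textural similarity criteria.)"""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    region_sum, region_count = float(image[seed]), 1
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                # Similarity criterion: close to the running region mean
                if abs(image[nr, nc] - region_sum / region_count) <= threshold:
                    mask[nr, nc] = True
                    region_sum += float(image[nr, nc])
                    region_count += 1
                    queue.append((nr, nc))
    return mask
```

Seeding this from every unlabeled pixel in turn would partition the image into the spatially contiguous regions that the region-level classification step operates on.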
To develop segmentation algorithms that group pixels into regions, we first use nonparametric Bayesian classifiers that create probabilistic links between low-level image features and high-level user-defined semantic land-cover and land-use labels. Pixel-level characterization provides classification details for each pixel with automatic fusion of its spectral, textural, and other ancillary attributes. Then, each resulting pixel-level classification map is converted into a set of contiguous regions using an iterative split-and-merge algorithm [13,14] and mathematical morphology. Following this segmentation process, the resulting regions are modeled using statistical summaries of their spectral and textural properties along with shape features computed from region polygon boundaries [14,15]. Finally, nonparametric Bayesian classifiers are used with these region-level features, which describe properties shared by groups of pixels, to classify these groups into land-cover and land-use categories defined by the user.

The rest of the chapter is organized as follows. An overview of the feature data used for modeling pixels is given in Section 22.2. The Bayesian classifiers used for classifying these pixels are described in Section 22.3. Algorithms for segmentation of regions are presented in Section 22.4. The feature data used for modeling the resulting regions are described in Section 22.5. The application of the Bayesian classifiers to region-level classification is described in Section 22.6. Experiments are presented in Section 22.7 and conclusions are provided in Section 22.8.
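The pixel-level step of the pipeline rests on nonparametric Bayesian classification. As one possible illustration of that idea (a hedged sketch, not the chapter's actual estimator), class-conditional densities can be approximated with per-feature histograms and combined with class priors via Bayes' rule; the class name, bin count, and feature-independence assumption are all choices made here for brevity:

```python
import numpy as np

class HistogramBayesClassifier:
    """Nonparametric Bayesian pixel classifier: class-conditional
    densities are estimated with per-feature histograms (features are
    treated as independent, as in a naive Bayes model). Hypothetical
    illustration; the chapter's density estimator may differ."""

    def __init__(self, n_bins=16, value_range=(0.0, 256.0)):
        self.n_bins = n_bins
        self.value_range = value_range

    def fit(self, X, y):
        # X: (n_pixels, n_features) training features; y: integer labels
        self.classes_ = np.unique(y)
        n_feat = X.shape[1]
        self.priors_ = np.array([(y == c).mean() for c in self.classes_])
        self.hists_ = np.empty((len(self.classes_), n_feat, self.n_bins))
        for i, c in enumerate(self.classes_):
            Xc = X[y == c]
            for f in range(n_feat):
                counts, _ = np.histogram(Xc[:, f], bins=self.n_bins,
                                         range=self.value_range)
                # Laplace smoothing avoids zero-probability bins
                self.hists_[i, f] = (counts + 1) / (counts.sum() + self.n_bins)
        return self

    def predict(self, X):
        lo, hi = self.value_range
        bins = np.clip(((X - lo) / (hi - lo) * self.n_bins).astype(int),
                       0, self.n_bins - 1)
        # Log-posterior = log prior + sum of per-feature log-likelihoods
        log_post = np.log(self.priors_)[None, :].repeat(len(X), axis=0)
        for i in range(len(self.classes_)):
            log_post[:, i] += np.log(
                self.hists_[i, np.arange(X.shape[1]), bins]).sum(axis=1)
        return self.classes_[np.argmax(log_post, axis=1)]
```

The same Bayes-rule machinery can be reused at the region level by replacing the per-pixel features with the regions' statistical summaries and shape features.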