Multisource region attention network for fine-grained object recognition in remote sensing imagery

buir.contributor.author: Sümbül, Gencer
buir.contributor.author: Cinbiş, Ramazan Gökberk
buir.contributor.author: Aksoy, Selim
dc.citation.epage: 4937
dc.citation.issueNumber: 7
dc.citation.spage: 4929
dc.citation.volumeNumber: 57
dc.contributor.author: Sümbül, Gencer
dc.contributor.author: Cinbiş, Ramazan Gökberk
dc.contributor.author: Aksoy, Selim
dc.date.accessioned: 2020-02-04T11:07:07Z
dc.date.available: 2020-02-04T11:07:07Z
dc.date.issued: 2019-07
dc.department: Department of Computer Engineering
dc.description.abstract: Fine-grained object recognition concerns the identification of the type of an object among a large number of closely related subcategories. Multisource data analysis that aims to leverage the complementary spectral, spatial, and structural information embedded in different sources is a promising direction toward solving the fine-grained recognition problem, which involves low between-class variance, small training set sizes for rare classes, and class imbalance. However, the common assumption of coregistered sources may not hold at the pixel level for small objects of interest. We present a novel methodology that aims to simultaneously learn the alignment of multisource data and the classification model in a unified framework. The proposed method involves a multisource region attention network that computes per-source feature representations, assigns attention scores to candidate regions sampled around the expected object locations by using these representations, and classifies the objects by using an attention-driven multisource representation that combines the feature representations and the attention scores from all sources. All components of the model are realized using deep neural networks and are learned in an end-to-end fashion. Experiments using RGB, multispectral, and LiDAR elevation data for classification of street trees showed that our approach achieved 64.2% and 47.3% accuracies for the 18-class and 40-class settings, respectively, which correspond to 13% and 14.3% improvements relative to the commonly used approach of concatenating features from multiple sources.
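The fusion scheme described in the abstract — per-source features for candidate regions, softmax attention scores over those regions, an attention-weighted representation per source, and concatenation across sources for classification — can be sketched as follows. This is a minimal numpy illustration with random weights; all dimensions and the linear attention/classifier layers are hypothetical stand-ins for the deep networks the paper trains end-to-end.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)

# Hypothetical sizes: 3 sources (e.g. RGB, multispectral, LiDAR elevation),
# R candidate regions sampled around the expected object location,
# d-dimensional per-region features, 18 fine-grained classes.
S, R, d, n_classes = 3, 5, 8, 18

# Per-source feature representations for each candidate region.
feats = rng.normal(size=(S, R, d))

# Attention scoring: a per-source linear layer (hypothetical) maps each
# region's feature vector to a scalar; softmax over regions per source.
w_att = rng.normal(size=(S, d))
scores = softmax(np.einsum('srd,sd->sr', feats, w_att), axis=1)  # (S, R)

# Attention-driven representation: weighted sum over regions per source,
# then concatenation across all sources.
pooled = np.einsum('sr,srd->sd', scores, feats)  # (S, d)
fused = pooled.reshape(-1)                       # (S * d,)

# Classification with a (hypothetical) linear classifier on the fused vector.
W, b = rng.normal(size=(n_classes, S * d)), np.zeros(n_classes)
logits = W @ fused + b
pred = int(np.argmax(logits))
```

The attention weights let each source "vote" for the region that best covers the object in that source's geometry, which is how the model can compensate for misregistration between sources without explicit pixel-level alignment.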
dc.description.provenance: Submitted by Evrim Ergin (eergin@bilkent.edu.tr) on 2020-02-04T11:07:07Z. No. of bitstreams: 1. Multisource_Region_Attention_Network_for_Fine-Grained_Object_Recognition_in_Remote_Sensing_Imagery.pdf: 2796109 bytes, checksum: fd22e8be45bba52dd9b4fdfeb0479bf6 (MD5)
dc.identifier.doi: 10.1109/TGRS.2019.2894425
dc.identifier.issn: 0196-2892
dc.identifier.uri: http://hdl.handle.net/11693/53050
dc.language.iso: English
dc.publisher: IEEE
dc.relation.isversionof: https://doi.org/10.1109/TGRS.2019.2894425
dc.source.title: IEEE Transactions on Geoscience and Remote Sensing
dc.subject: Deep learning
dc.subject: Fine-grained classification
dc.subject: Image alignment
dc.subject: Multisource classification
dc.subject: Object recognition
dc.title: Multisource region attention network for fine-grained object recognition in remote sensing imagery
dc.type: Article

Files

Original bundle

Name: Multisource_Region_Attention_Network_for_Fine-Grained_Object_Recognition_in_Remote_Sensing_Imagery.pdf
Size: 2.67 MB
Format: Adobe Portable Document Format