SiameseFuse: A computationally efficient and a not-so-deep network to fuse visible and infrared images

buir.contributor.authorEge, Mert
buir.contributor.authorÖzkanoglu, Mehmet Akif
buir.contributor.orcidEge, Mert|0000-0001-9060-290X
buir.contributor.orcidÖzkanoglu, Mehmet Akif|0000-0003-2581-9525
dc.citation.epage12-108712en_US
dc.citation.spage1-108712en_US
dc.citation.volumeNumber129en_US
dc.contributor.authorÖzer, S.
dc.contributor.authorEge, Mert
dc.contributor.authorÖzkanoglu, Mehmet Akif
dc.date.accessioned2023-02-15T10:17:36Z
dc.date.available2023-02-15T10:17:36Z
dc.date.issued2022-04-22
dc.departmentDepartment of Computer Engineeringen_US
dc.description.abstractRecent developments in pattern analysis have motivated many researchers to focus on developing deep learning based solutions in various image processing applications. Fusing multi-modal images has been one such application area, where the goal is to combine information coming from different modalities in a visually meaningful and informative way. For that purpose, it is important to first extract salient features from each modality and then fuse them as efficiently and informatively as possible. Recent literature on fusing multi-modal images reports multiple deep solutions that combine both visible (RGB) and infrared (IR) images. In this paper, we study the performance of various deep solutions available in the literature while seeking an answer to the question: “Do we really need deeper networks to fuse multi-modal images?” To answer that question, we introduce a novel architecture based on Siamese networks to fuse RGB (visible) images with infrared (IR) images and report state-of-the-art results. With the above question in mind, we present an extensive analysis of increasing the number of layers in the architecture to see whether deeper networks (or additional layers) add significant performance to our proposed solution. We report state-of-the-art results on visually fusing given visible and IR image pairs across multiple performance metrics, while requiring the fewest trainable parameters. Our experimental results suggest that shallow networks (as in the solutions proposed in this paper) can fuse visible and IR images as well as the deeper networks previously proposed in the literature (we reduce the total number of trainable parameters by up to 96.5%: 2,625 trainable parameters compared to 74,193).en_US
dc.description.provenanceSubmitted by Ezgi Uğurlu (ezgi.ugurlu@bilkent.edu.tr) and made available in DSpace on 2023-02-15T10:17:36Z (GMT). No. of bitstreams: 1. SiameseFuse_A_computationally_efficient_and_a_not-so-deep_network_to_fuse_visible_and_infrared_images.pdf: 2082918 bytes, checksum: a6a17625af8684af18bbbdc26c3195a3 (MD5). Previous issue date: 2022-04-22en
dc.embargo.release2024-04-22
dc.identifier.doi10.1016/j.patcog.2022.108712en_US
dc.identifier.eissn1873-5142
dc.identifier.issn0031-3203
dc.identifier.urihttp://hdl.handle.net/11693/111322
dc.language.isoEnglishen_US
dc.publisherElsevier BVen_US
dc.relation.isversionofhttps://doi.org/10.1016/j.patcog.2022.108712en_US
dc.source.titlePattern Recognitionen_US
dc.subjectMulti-temporal fusionen_US
dc.subjectEfficient learningen_US
dc.subjectMulti-modal fusionen_US
dc.titleSiameseFuse: A computationally efficient and a not-so-deep network to fuse visible and infrared imagesen_US
dc.typeArticleen_US
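
The abstract above describes a shallow, weight-shared (Siamese) network that encodes the visible and infrared inputs separately and then fuses the resulting features. As a rough, hypothetical sketch only (this is not the authors' SiameseFuse implementation; the single-layer encoder, the channel width, and the element-wise-addition fusion step are illustrative assumptions), such a weight-shared fusion network could look like the following in PyTorch:

    import torch
    import torch.nn as nn

    class ShallowSiameseFusion(nn.Module):
        def __init__(self, channels=16):
            super().__init__()
            # One shallow encoder whose weights are shared by both input
            # branches; this weight sharing is what makes the network Siamese.
            self.encoder = nn.Sequential(
                nn.Conv2d(1, channels, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            )
            # A single decoding layer maps the fused features back to an image.
            self.decoder = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

        def forward(self, visible, infrared):
            # Both modalities pass through the SAME encoder (shared weights).
            f_vis = self.encoder(visible)
            f_ir = self.encoder(infrared)
            # Element-wise addition is one simple fusion choice; the fusion
            # strategy actually used in the paper may differ.
            fused = f_vis + f_ir
            return torch.sigmoid(self.decoder(fused))

    # Example: fuse a 256x256 grayscale visible/IR pair and count parameters.
    model = ShallowSiameseFusion()
    vis = torch.rand(1, 1, 256, 256)
    ir = torch.rand(1, 1, 256, 256)
    out = model(vis, ir)
    print(out.shape, sum(p.numel() for p in model.parameters()))

Because the encoder is shared between the two branches and the network has only a couple of convolutional layers, the parameter count stays in the hundreds-to-thousands range, which is the regime of efficiency the abstract argues for.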

Files

Original bundle
Name: SiameseFuse_A_computationally_efficient_and_a_not-so-deep_network_to_fuse_visible_and_infrared_images.pdf
Size: 1.99 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 1.69 KB
Format: Item-specific license agreed upon to submission