Browsing by Subject "Global descriptors"
Now showing 1 - 3 of 3
Item [Open Access]
Learning visual similarity for image retrieval with global descriptors and capsule networks (Springer, 2023-07-31)
Durmuş, Duygu; Güdükbay, Uğur; Ulusoy, Özgür

Finding matching images across large and unstructured datasets is vital in many computer vision applications. With the emergence of deep learning-based solutions, various visual tasks, such as image retrieval, have been successfully addressed. Learning visual similarity is crucial for image matching and retrieval tasks. Capsule Networks enable learning richer information that describes the object without losing the essential spatial relationship between the object and its parts. In addition, global descriptors are widely used for representing images. We propose a framework that combines the power of global descriptors and Capsule Networks, exploiting multiple views of images to enhance image retrieval performance. The Spatial Grouping Enhance strategy, which enhances sub-features in parallel, and self-attention layers, which explore global dependencies within internal representations of images, are utilized to strengthen the image representations. The approach captures resemblances between similar images and differences between non-similar images using triplet loss and cost-sensitive regularized cross-entropy loss. The results are superior to state-of-the-art approaches on the Stanford Online Products dataset, with Recall@K of 85.0, 94.4, 97.8, and 99.3 for K = 1, 10, 100, and 1000, respectively.

Item [Open Access]
Learning visual similarity for image retrieval with global descriptors and capsule networks (2021-07)
Durmuş, Duygu

Finding matching images across large and unstructured datasets plays an important role in many computer vision applications. With the emergence of deep learning-based solutions, various visual tasks, such as image retrieval, have been successfully addressed. Learning visual similarity is crucial for image matching and retrieval tasks. An alternative deep learning architecture, named capsule networks, enables learning richer information that describes the object without losing the essential spatial relationship between the object and its parts. In addition, global descriptors are widely used for representing images. The proposed architecture combines the power of global descriptors and revised capsule networks to enhance image retrieval performance. It benefits from multiple views of object images and highlights the spatial relationship between objects and their parts. The Spatial Grouping Enhance strategy, which enhances sub-features in parallel, and self-attention layers, which explore global dependencies within internal representations of images, are utilized to strengthen the image representations. Instead of learning a classification over individual images, the approach captures resemblances between similar images and differences between non-similar images using both triplet loss and cost-sensitive regularized cross-entropy loss. Based on the experiments, the results are superior to state-of-the-art approaches on Stanford Online Products.

Item [Open Access]
Learning visual similarity for image retrieval with global descriptors and capsule networks (Springer New York LLC, 2024-02)
Durmuş, Duygu; Güdükbay, Uğur; Ulusoy, Özgür

Finding matching images across large and unstructured datasets is vital in many computer vision applications. With the emergence of deep learning-based solutions, various visual tasks, such as image retrieval, have been successfully addressed. Learning visual similarity is crucial for image matching and retrieval tasks. Capsule Networks enable learning richer information that describes the object without losing the essential spatial relationship between the object and its parts. In addition, global descriptors are widely used for representing images. We propose a framework that combines the power of global descriptors and Capsule Networks, exploiting multiple views of images to enhance image retrieval performance. The Spatial Grouping Enhance strategy, which enhances sub-features in parallel, and self-attention layers, which explore global dependencies within internal representations of images, are utilized to strengthen the image representations. The approach captures resemblances between similar images and differences between non-similar images using triplet loss and cost-sensitive regularized cross-entropy loss. The results are superior to state-of-the-art approaches on the Stanford Online Products dataset, with Recall@K of 85.0, 94.4, 97.8, and 99.3 for K = 1, 10, 100, and 1000, respectively.
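The abstracts above train with a triplet loss (alongside a cost-sensitive regularized cross-entropy loss) and report retrieval quality as Recall@K on Stanford Online Products. As a rough illustration of those two ingredients, not the authors' implementation, here is a minimal NumPy sketch; the function names, the 0.2 margin, and the brute-force nearest-neighbor search are assumptions made for the example:

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=0.2):
    # Pull each anchor toward its positive and push it from its negative
    # until the gap exceeds the margin (margin value assumed, not from the paper).
    d_pos = np.linalg.norm(anchor - positive, axis=1)
    d_neg = np.linalg.norm(anchor - negative, axis=1)
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()

def recall_at_k(embeddings, labels, ks=(1, 10, 100, 1000)):
    # Recall@K: a query counts as a hit if any of its K nearest neighbors
    # (itself excluded) carries the same class label.
    # Brute-force pairwise distances; fine for a sketch, chunk in practice.
    dists = np.linalg.norm(embeddings[:, None, :] - embeddings[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)    # never retrieve the query itself
    order = np.argsort(dists, axis=1)  # neighbors, closest first
    return {k: (labels[order[:, :k]] == labels[:, None]).any(axis=1).mean()
            for k in ks}
```

In the usual Stanford Online Products protocol, every test image serves as both query and gallery entry, which is why the self-match is masked out of the neighbor list above.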