Browsing by Subject "Visual similarity"
Now showing 1 - 4 of 4
Item (Open Access)
Attributes2Classname: a discriminative model for attribute-based unsupervised zero-shot learning (IEEE, 2017-10)
Demirel, B.; Cinbiş, Ramazan Gökberk; İkizler-Cinbiş, N.
We propose a novel approach for unsupervised zero-shot learning (ZSL) of classes based on their names. Most existing unsupervised ZSL methods aim to learn a model for directly comparing image features and class names. However, this proves to be a difficult task due to the dominance of non-visual semantics in the underlying vector-space embeddings of class names. To address this issue, we discriminatively learn a word representation such that the similarities between class names and combinations of attribute names fall in line with visual similarity. Contrary to traditional zero-shot learning approaches that are built upon attribute presence, our approach bypasses the laborious attribute-class relation annotations for unseen classes. In addition, our proposed approach renders text-only training possible; hence, training can be augmented without the need to collect additional image data. The experimental results show that our method yields state-of-the-art results for unsupervised ZSL on three benchmark datasets. © 2017 IEEE.

Item (Open Access)
Automatic tag expansion using visual similarity for photo sharing websites (Springer New York LLC, 2010)
Sevil, S. G.; Kucuktunc, O.; Duygulu, P.; Can, F.
In this paper we present an automatic photo tag expansion method designed for photo sharing websites. The purpose of the method is to suggest tags that are relevant to the visual content of a given photo at upload time. Both textual and visual cues are used in the process of tag expansion. When a photo is to be uploaded, the system asks the user for a few initial tags. The initial tags are used to retrieve relevant photos together with their tags. These photos are assumed to be potentially related in content to the uploaded target photo.
The tag sets of the relevant photos are used to form the candidate tag list, and visual similarities between the target photo and the relevant photos are used to weight these candidate tags. Tags with the highest weights are suggested to the user. The method is applied on Flickr (http://www.flickr.com). Results show that including visual information in the process of photo tagging increases accuracy with respect to text-based methods. © 2009 Springer Science+Business Media, LLC.

Item (Open Access)
Re-ranking of web image search results using a graph algorithm (IEEE, 2008-12)
Zitouni, Hilal; Sevil, Sare; Özkan, Derya; Duygulu, Pınar
We propose a method to improve the results of image search engines on the Internet to satisfy users who desire to see relevant images in the first few pages. The method re-ranks the results of text-based systems by incorporating the visual similarity of the resulting images. We observe that, together with many unrelated ones, the results of text-based systems include a subset of correct images, and this subset is, in general, the largest one whose images are the most similar to each other compared to other possible subsets. Based on this observation, we represent the similarities of all images in a graph structure and find the densest component, which corresponds to the largest set of most similar images. Then, to re-rank the results, we give higher priority to the images in the densest component and rank the others based on their similarities to the images in the densest component. The experiments are carried out on 18 categories of images from [8]. © 2008 IEEE.

Item (Open Access)
Tag expansion methods for photo-sharing websites (2010)
Sevil, Sare Gül
Due to the rapid development of affordable digital cameras and the trend of sharing media through the web, large amounts of images have become available on the Internet.
Thus, at a time when a single site alone hosts over 4 billion photos, the need to manage these massive numbers of photos for efficient and effective browsing and searching operations has increased. To properly organize large amounts of data, systems have been using collaborative tagging methods, assigning descriptive words, tags, to data and performing text-based search and retrieval operations on these words. Unfortunately, for various reasons, both the quantity and quality of the tags assigned by users are low. In this work, we present and analyze two applications of tag expansion methods on photo-sharing websites. The purpose of these methods is to assist users with proper tagging at upload time. The goal of the approaches is not to give users a complete set of tags that could be used directly, but a possibly incomplete list of tags that would help or guide users to tag in accordance with the image content. With this assistance, problems such as incorrect tagging and insufficient tagging are expected to be reduced.
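
The tag-expansion procedure described in the Sevil et al. (2010) abstract above, retrieving photos that match the user's initial tags and then weighting each candidate tag by the visual similarity of its source photo to the target, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: `retrieve_by_tags` and `visual_similarity` are hypothetical stand-ins for the retrieval backend and the image-feature comparison.

```python
from collections import defaultdict

def expand_tags(initial_tags, retrieve_by_tags, visual_similarity,
                target_photo, top_k=5):
    """Suggest up to top_k tags for target_photo.

    retrieve_by_tags(initial_tags) -> iterable of (photo, tags) pairs
    visual_similarity(target, photo) -> similarity score (higher = closer)
    Both callables are hypothetical placeholders for this sketch.
    """
    weights = defaultdict(float)
    for photo, tags in retrieve_by_tags(initial_tags):
        # Each relevant photo contributes its tags, weighted by how
        # visually similar it is to the photo being uploaded.
        sim = visual_similarity(target_photo, photo)
        for tag in tags:
            if tag not in initial_tags:  # only suggest new tags
                weights[tag] += sim
    # Highest-weighted candidate tags are suggested to the user.
    return sorted(weights, key=weights.get, reverse=True)[:top_k]
```

A tag carried by several visually similar photos accumulates weight from each of them, which is what lets the visual cue override purely textual co-occurrence.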
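
The graph-based re-ranking in the Zitouni et al. (2008) abstract above hinges on finding the densest component of an image-similarity graph. A minimal sketch, assuming pairwise similarities come from a function `sim` and substituting greedy peeling (Charikar's 2-approximation for densest subgraph) for whatever exact procedure the paper uses; the threshold value is an assumption:

```python
def densest_subset(nodes, edges):
    """Greedy peeling: repeatedly drop the minimum-degree node and keep
    the intermediate node set with the highest edge density."""
    live = set(nodes)
    best, best_density = set(live), -1.0
    while live:
        live_edges = [(u, v) for u, v in edges if u in live and v in live]
        density = len(live_edges) / len(live)
        if density > best_density:
            best, best_density = set(live), density
        if len(live) == 1:
            break
        deg = {u: 0 for u in live}
        for u, v in live_edges:
            deg[u] += 1
            deg[v] += 1
        live.remove(min(live, key=deg.get))
    return best

def rerank(images, sim, threshold=0.5):
    """Densest-component images first (original order), then the rest
    ordered by their best similarity to that component."""
    edges = [(a, b) for i, a in enumerate(images) for b in images[i + 1:]
             if sim(a, b) >= threshold]
    dense = densest_subset(images, edges)
    rest = sorted((img for img in images if img not in dense),
                  key=lambda img: max(sim(img, d) for d in dense),
                  reverse=True)
    return [img for img in images if img in dense] + rest
```

The intuition from the abstract is preserved: the mutually similar correct images form the densest part of the graph, so they surface first, and outliers are ranked by how close they come to that core.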