Browsing by Subject "Visual information"
Now showing 1 - 7 of 7
[Open Access] Automatic tag expansion using visual similarity for photo sharing websites (Springer New York LLC, 2010)
Sevil, S. G.; Kucuktunc, O.; Duygulu, P.; Can, F.
In this paper we present an automatic photo tag expansion method designed for photo sharing websites. The purpose of the method is to suggest tags that are relevant to the visual content of a given photo at upload time. Both textual and visual cues are used in the tag expansion process. When a photo is to be uploaded, the system asks the user for a few initial tags. The initial tags are used to retrieve relevant photos together with their tags. These photos are assumed to be potentially related in content to the uploaded target photo. The tag sets of the relevant photos form the candidate tag list, and visual similarities between the target photo and the relevant photos are used to weight these candidate tags. The tags with the highest weights are suggested to the user. The method is applied on Flickr (http://www.flickr.com). Results show that including visual information in the photo tagging process improves accuracy over text-based methods. © 2009 Springer Science+Business Media, LLC.

[Open Access] Bilkent University at TRECVID 2005 (National Institute of Standards and Technology, 2005-11)
Aksoy, Selim; Avcı, Akın; Balçık, Erman; Çavuş, Özge; Duygulu, Pınar; Karaman, Zeynep; Kavak, Pınar; Kaynak, Cihan; Küçükayvaz, Emre; Öcalan, Çağdaş; Yıldız, Pınar
We describe our second participation in the TRECVID video retrieval evaluation, which includes one high-level feature extraction run, three manual search runs, and one interactive search run. All of these runs used a system trained on the common development collection. Only visual and textual information were used: visual information consisted of color, texture, and edge-based low-level features, and textual information consisted of the speech transcript provided in the collection. With the experience gained from this second participation, we are building a system for automatic classification and indexing of video archives.

[Open Access] Bilkent University at TRECVID 2006 (National Institute of Standards and Technology, 2006-11)
Aksoy, Selim; Duygulu, Pınar; Akçay, Hüseyin Gökhan; Ataer, Esra; Baştan, Muhammet; Can, Tolga; Çavuş, Özge; Doğrusöz, Emel; Gökalp, Demir; Akaydın, Ateş; Akoğlu, Leman; Angın, Pelin; Cinbiş, R. Gökberk; Gür, Tunay; Ünlü, Mehmet
We describe our third participation in the TRECVID video retrieval evaluation, which includes one high-level feature extraction run, two manual search runs, and one interactive search run. All of these runs used a system trained on the common development collection. Only visual and textual information were used: visual information consisted of color, texture, and edge-based low-level features, and textual information consisted of the speech transcript provided in the collection.

[Open Access] Bilkent University at TRECVID 2007 (National Institute of Standards and Technology, 2007)
Aksoy, Selim; Duygulu, Pınar; Aksoy, C.; Aydin, E.; Gunaydin, D.; Hadimli, K.; Koç, L.; Olgun, Y.; Orhan, C.; Yakin, G.
We describe our fourth participation in the TRECVID video retrieval evaluation, which includes two high-level feature extraction runs and one manual search run. All of these runs used a system trained on the common development collection. Only visual information, consisting of color, texture, and edge-based low-level features, was used.

[Open Access] E-museum: web-based tour and information system for museums (IEEE, 2006)
Baştanlar, Y.; Altıngövde, İsmail Şenol; Aksay, A.; Alav, O.; Çavuş, Özge; Yardımcı, Y.; Ulusoy, Özgür; Güdükbay, Uğur; Çetin, A. Enis; Akar, G. B.; Aksoy, Selim
A web-based system, consisting of data entry, access, and retrieval modules, is constructed for museums. Internet users visiting the e-museum can view the written and visual information belonging to the artworks in the museum, follow the virtual tours prepared for the different sections of the museum, browse the artworks according to certain properties, and search for artworks whose visual content is similar to that of a viewed artwork. © 2006 IEEE.

[Open Access] MUCKE participation at retrieving diverse social images task of MediaEval 2013 (CEUR-WS, 2013)
Armağan, Anıl; Popescu, A.; Duygulu, Pınar
The MediaEval 2013 Retrieving Diverse Social Images Task addresses the challenge of improving both the relevance and the diversity of photos returned in a retrieval task on Flickr. We propose a clustering-based technique that exploits both textual and visual information. We introduce a k-Nearest Neighbor (k-NN) inspired re-ranking algorithm that is applied before clustering to clean the dataset. After the clustering step, we exploit social cues to rank the clusters by social relevance. From these ranked clusters, images are retrieved according to their distance to the cluster centroids.

[Open Access] Toward an estimation of user tagging credibility for social image retrieval (ACM, 2014-11)
Ginsca, A. L.; Popescu, A.; Ionescu, B.; Armağan, Anıl; Kanellos, I.
Existing image retrieval systems exploit textual and/or visual information to return results. Retrieval is mostly focused on the data themselves and disregards the data sources. In Web 2.0 platforms, the quality of annotations provided by different users can vary strongly. To account for this variability, we complement existing methods by introducing user tagging credibility into the retrieval process. Tagging credibility is automatically estimated by leveraging a large set of visual concept classifiers learned with OverFeat, a convolutional neural network (CNN) feature extractor. A good image retrieval system should return results that are both relevant and diversified, and here we tackle both challenges. Classically, we diversify results by using a k-Means algorithm, and we increase relevance by favoring images uploaded by users with good credibility estimates. Evaluation is performed on DIV400, a publicly available social image retrieval dataset, and shows that our method is competitive with existing approaches.
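Several of the items above weight candidate tags by visual similarity. As a concrete illustration of the first abstract's idea (Sevil et al., 2010) — candidate tags collected from photos retrieved via the user's initial tags, each weighted by the visual similarity between the carrying photo and the target photo — here is a minimal sketch. The function names, the toy 2-D "features", and the cosine similarity are illustrative assumptions, not the paper's actual implementation.

```python
from collections import defaultdict

def expand_tags(target_feature, relevant_photos, similarity, top_k=5):
    """Hypothetical sketch of visual-similarity tag expansion.

    relevant_photos: list of (feature_vector, tag_set) pairs retrieved
    with the user's initial tags. Returns the top_k candidate tags.
    """
    weights = defaultdict(float)
    for feature, tags in relevant_photos:
        sim = similarity(target_feature, feature)  # visual similarity score
        for tag in tags:
            weights[tag] += sim  # tags on visually similar photos gain weight
    ranked = sorted(weights.items(), key=lambda kv: kv[1], reverse=True)
    return [tag for tag, _ in ranked[:top_k]]

def cosine(a, b):
    # Cosine similarity on plain tuples; real systems would use
    # color/texture/edge feature vectors instead of 2-D toys.
    dot = a[0] * b[0] + a[1] * b[1]
    na = (a[0] ** 2 + a[1] ** 2) ** 0.5
    nb = (b[0] ** 2 + b[1] ** 2) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

# Toy usage: "sea" appears on two visually similar photos, so it outranks
# tags that appear only once or on dissimilar photos.
photos = [((1.0, 0.0), {"beach", "sea"}),
          ((0.9, 0.1), {"sea", "sunset"}),
          ((0.0, 1.0), {"cat"})]
print(expand_tags((1.0, 0.0), photos, cosine, top_k=2))  # → ['sea', 'beach']
```

The design choice mirrored here is that tag frequency alone is not trusted: a tag's weight accumulates only in proportion to how visually close its source photos are to the target, which is what lets visual information improve over purely text-based suggestion.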