Haber videolarında nesne tanıma ve otomatik etiketleme (Object recognition and automatic labeling in news videos)

Date
2006-04
Source Title
2006 IEEE 14th Signal Processing and Communications Applications Conference
Publisher
IEEE
Language
Turkish
Type
Conference Paper
Abstract

We propose a new approach to the object recognition problem motivated by the availability of large annotated image and video collections. Similar to translation from one language to another, this approach treats object recognition as the translation of visual elements into words. The visual elements, represented in feature space, are first categorized into a finite set of blobs. Then, the correspondences between the blobs and the words are learned using a method adapted from Statistical Machine Translation. Finally, the correspondences, in the form of a probability table, are used to predict words for particular image regions (region naming), for entire images (auto-annotation), or to associate the automatically generated speech transcript text with the correct video frames (video alignment). Experimental results are presented on the TRECVID 2004 data set, which consists of about 150 hours of news videos associated with manual annotations and speech transcript text. © 2006 IEEE.
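The sketch below illustrates the translation-based pipeline described in the abstract. It is not the authors' implementation: the function names, the choice of k-means for blob quantization, and the IBM Model 1 style EM loop are assumptions used only to make the blob-to-word idea concrete.

```python
# A minimal sketch of the blob-to-word translation approach, under assumptions:
# region features are vector-quantized into "blobs" with k-means, and blob-word
# translation probabilities p(word | blob) are estimated with an IBM Model 1
# style EM loop over annotated images. All names here are illustrative.

import numpy as np
from sklearn.cluster import KMeans


def quantize_regions(region_features, n_blobs=500, seed=0):
    """Cluster region feature vectors into a finite set of blob labels."""
    kmeans = KMeans(n_clusters=n_blobs, random_state=seed, n_init=10)
    return kmeans.fit_predict(region_features), kmeans


def train_translation_table(images, n_blobs, n_words, n_iters=20):
    """Learn p(word | blob) from a list of (blob_ids, word_ids) pairs per image."""
    # Uniform initialization of the translation table.
    t = np.full((n_blobs, n_words), 1.0 / n_words)
    for _ in range(n_iters):
        counts = np.zeros_like(t)
        for blob_ids, word_ids in images:
            blob_ids = np.asarray(blob_ids)
            for w in word_ids:
                # E-step: posterior alignment of word w over this image's blobs.
                probs = t[blob_ids, w]
                probs = probs / probs.sum()
                np.add.at(counts, (blob_ids, w), probs)
        # M-step: renormalize counts into probabilities per blob.
        t = counts / counts.sum(axis=1, keepdims=True).clip(min=1e-12)
    return t


def annotate_image(blob_ids, t, vocabulary, top_k=5):
    """Auto-annotation: average p(word | blob) over an image's blobs."""
    scores = t[np.asarray(blob_ids)].mean(axis=0)
    return [vocabulary[i] for i in np.argsort(scores)[::-1][:top_k]]
```

Region naming would instead read a single row `t[blob_id]` per region, and video alignment would score candidate transcript words against the blobs of each frame; both reuse the same probability table.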

Keywords
Auto annotation, Statistical Machine Translation, Video alignment, Video frames, Computational methods, Image coding, Multimedia services, Translation (languages), Video streaming, Word processing, Object recognition