Object recognition and automatic annotation in news videos (Haber videolarında nesne tanıma ve otomatik etiketleme)
Author
Baştan, Muhammet
Duygulu, Pınar
Date
2006-04
Source Title
2006 IEEE 14th Signal Processing and Communications Applications Conference
Publisher
IEEE
Language
Turkish
Type
Conference Paper
Item Usage Stats
153 views
127 downloads
Abstract
We propose a new approach to object recognition problem motivated by the availability of large annotated image and video collections. Similar to translation from one language to another, this approach considers the object recognition problem as the translation of visual elements to words. The visual elements represented in feature space are first categorized into a finite set of blobs. Then, the correspondences between the blobs and the words are learned using a method adapted from Statistical Machine Translation. Finally, the correspondences, in the form of a probability table, are used to predict words for particular image regions (region naming), for entire images (auto-annotation), or to associate the automatically generated speech transcript text with the correct video frames (video alignment). Experimental results are presented on TRECVID 2004 data set, which consists of about 150 hours of news videos associated with manual annotations and speech transcript text. © 2006 IEEE.
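The abstract outlines a three-step pipeline: quantize region features into a finite set of blobs, learn blob-to-word correspondences with a method adapted from Statistical Machine Translation, and use the resulting probability table for region naming, auto-annotation, and video alignment. The sketch below illustrates that idea under stated assumptions: k-means for blob quantization and an IBM Model 1-style EM loop for the translation table. It is not the authors' implementation; function names, parameters (e.g. n_blobs, n_iter), and data layout are illustrative.

```python
# Minimal sketch of the translation-based annotation idea described in the
# abstract: quantize region features into "blobs", learn p(word | blob) with
# IBM Model 1-style EM, then predict words for new images. Illustrative only.
import numpy as np
from sklearn.cluster import KMeans


def quantize_regions(region_features, n_blobs=500, seed=0):
    """Map region feature vectors to a finite set of blob ids (assumed k-means)."""
    km = KMeans(n_clusters=n_blobs, random_state=seed, n_init=10)
    blob_ids = km.fit_predict(region_features)
    return km, blob_ids


def learn_translation_table(documents, n_blobs, n_words, n_iter=20):
    """EM for p(word | blob), IBM Model 1 style.

    documents: list of (blob_ids, word_ids) pairs, one per annotated image,
    where blob_ids and word_ids are integer numpy arrays.
    Returns an (n_blobs x n_words) probability table.
    """
    t = np.full((n_blobs, n_words), 1.0 / n_words)  # uniform initialization
    for _ in range(n_iter):
        counts = np.zeros((n_blobs, n_words))
        for blobs, words in documents:
            for w in words:
                # E-step: expected alignment of word w to each blob in the image
                p = t[blobs, w]
                p = p / p.sum()
                # accumulate expected counts (np.add.at handles repeated blob ids)
                np.add.at(counts, (blobs, w), p)
        # M-step: renormalize counts into updated p(word | blob)
        row_sums = counts.sum(axis=1, keepdims=True)
        t = np.where(row_sums > 0, counts / np.maximum(row_sums, 1e-12), t)
    return t


def annotate_image(blob_ids, t, vocab, top_k=5):
    """Auto-annotation: average word posteriors over an image's blobs."""
    scores = t[blob_ids].mean(axis=0)
    best = np.argsort(scores)[::-1][:top_k]
    return [vocab[i] for i in best]
```

The same probability table would also serve the other two tasks mentioned in the abstract: region naming (query the table row for a single blob instead of averaging over an image) and video alignment (score how well the words of a transcript segment match the blobs of each candidate frame).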
Keywords
Auto annotation
Statistical Machine Translation
Video alignment
Video frames
Computational methods
Image coding
Multimedia services
Translation (languages)
Video streaming
Word processing
Object recognition