Object recognition and automatic annotation in news videos (Haber videolarında nesne tanıma ve otomatik etiketleme)

Date

2006-04

Source Title

2006 IEEE 14th Signal Processing and Communications Applications Conference

Publisher

IEEE

Language

Turkish

Abstract

We propose a new approach to the object recognition problem, motivated by the availability of large annotated image and video collections. Similar to translation from one language to another, this approach treats object recognition as the translation of visual elements to words. The visual elements, represented in a feature space, are first categorized into a finite set of blobs. The correspondences between the blobs and the words are then learned using a method adapted from Statistical Machine Translation. Finally, the correspondences, in the form of a probability table, are used to predict words for particular image regions (region naming), for entire images (auto-annotation), or to associate automatically generated speech transcript text with the correct video frames (video alignment). Experimental results are presented on the TRECVID 2004 data set, which consists of about 150 hours of news video with manual annotations and speech transcript text. © 2006 IEEE.
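The abstract describes learning blob-to-word correspondences with a method adapted from Statistical Machine Translation and storing them as a probability table. Below is a minimal sketch of that idea in Python, assuming an IBM Model 1 style EM procedure over (blobs, words) training pairs; the function names, the choice of Model 1, and the toy data are illustrative assumptions, not the paper's actual implementation.

# Minimal sketch: estimate a blob-to-word probability table with
# IBM Model 1 style EM. Each training image is assumed to be a pair
# (blobs, words): discretized region labels and annotation keywords.
from collections import defaultdict

def train_translation_table(pairs, n_iters=20):
    """Estimate p(word | blob) from (blobs, words) image pairs via EM."""
    blob_vocab = {b for blobs, _ in pairs for b in blobs}
    word_vocab = {w for _, words in pairs for w in words}

    # Uniform initialization of the translation table t[word][blob].
    t = {w: {b: 1.0 / len(word_vocab) for b in blob_vocab} for w in word_vocab}

    for _ in range(n_iters):
        count = defaultdict(lambda: defaultdict(float))  # expected counts c(w, b)
        total = defaultdict(float)                       # expected counts c(b)

        # E-step: spread each word's count over the blobs in its image.
        for blobs, words in pairs:
            for w in words:
                norm = sum(t[w][b] for b in blobs)
                if norm == 0.0:
                    continue
                for b in blobs:
                    frac = t[w][b] / norm
                    count[w][b] += frac
                    total[b] += frac

        # M-step: renormalize expected counts into new p(word | blob).
        for w in word_vocab:
            for b in blob_vocab:
                t[w][b] = count[w][b] / total[b] if total[b] > 0 else 0.0
    return t

def name_region(t, blob, k=1):
    """Region naming: the k most probable words for a single blob."""
    return sorted(t, key=lambda w: t[w][blob], reverse=True)[:k]

def annotate_image(t, blobs, k=5):
    """Auto-annotation: rank words by total probability over the image's blobs."""
    score = {w: sum(t[w][b] for b in blobs) for w in t}
    return sorted(score, key=score.get, reverse=True)[:k]

if __name__ == "__main__":
    # Toy data: blob ids would come from clustering region features (e.g. k-means).
    pairs = [
        (["b1", "b2"], ["anchor", "studio"]),
        (["b1", "b3"], ["anchor", "outdoor"]),
        (["b2", "b3"], ["studio", "outdoor"]),
    ]
    table = train_translation_table(pairs)
    print(name_region(table, "b1"))             # likely ['anchor']
    print(annotate_image(table, ["b1", "b2"], k=2))

The same probability table could also score candidate alignments between speech transcript words and video frames (the video alignment task mentioned above), by summing p(word | blob) over the blobs of each frame.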
