Baştan, Muhammet; Duygulu, Pınar
Date issued: 2006-04
Date deposited: 2016-02-08
Handle: http://hdl.handle.net/11693/27184
Date of Conference: 17-19 April 2006
Conference name: IEEE 14th Signal Processing and Communications Applications, 2006

Abstract: We propose a new approach to the object recognition problem, motivated by the availability of large annotated image and video collections. Similar to translation from one language to another, this approach treats object recognition as the translation of visual elements to words. The visual elements, represented in feature space, are first categorized into a finite set of blobs. Then, the correspondences between the blobs and the words are learned using a method adapted from Statistical Machine Translation. Finally, the correspondences, in the form of a probability table, are used to predict words for particular image regions (region naming), for entire images (auto-annotation), or to associate automatically generated speech transcript text with the correct video frames (video alignment). Experimental results are presented on the TRECVID 2004 data set, which consists of about 150 hours of news videos with manual annotations and speech transcript text. © 2006 IEEE.

Language: Turkish
Keywords: Auto annotation; Statistical Machine Translation; Video alignment; Video frames; Computational methods; Image coding; Multimedia services; Translation (languages); Video streaming; Word processing; Object recognition
Title (Turkish): Haber videolarında nesne tanıma ve otomatik etiketleme
Title (English): Object recognition and auto-annotation in news videos
Type: Conference Paper
DOI: 10.1109/SIU.2006.1659821
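
The abstract's final prediction step can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: it assumes a blob-to-word probability table has already been learned (the toy blob names, words, and probability values below are invented for illustration), and shows how such a table could support region naming (most probable word per blob) and auto-annotation (aggregating word probabilities over all blobs in an image).

```python
from collections import defaultdict

# Hypothetical learned translation table: p(word | blob).
# Blob labels and probabilities are toy values, not from the paper.
p_word_given_blob = {
    "blob_sky":   {"sky": 0.7, "water": 0.2, "plane": 0.1},
    "blob_grass": {"grass": 0.6, "tree": 0.3, "sky": 0.1},
}

def name_region(blob):
    """Region naming: pick the most probable word for a single blob."""
    table = p_word_given_blob[blob]
    return max(table, key=table.get)

def annotate_image(blobs, top_k=2):
    """Auto-annotation: sum word probabilities over an image's blobs
    and return the top-k scoring words for the entire image."""
    scores = defaultdict(float)
    for blob in blobs:
        for word, p in p_word_given_blob.get(blob, {}).items():
            scores[word] += p
    return [w for w, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]]

print(name_region("blob_sky"))                    # -> "sky"
print(annotate_image(["blob_sky", "blob_grass"]))  # -> ["sky", "grass"]
```

Video alignment, the third use named in the abstract, would score candidate frames against transcript words with the same table; it is omitted here for brevity.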