Authors: Baştan, Muhammet; Duygulu, Pınar
Date accessioned: 2016-02-08
Date available: 2016-02-08
Date issued: 2006-07
Handle: http://hdl.handle.net/11693/27256
Date of Conference: 13-15 July, 2006
Conference name: 5th International Conference on Image and Video Retrieval. CIVR 2006: Image and Video Retrieval

Abstract: We propose a new approach to recognizing objects and scenes in news videos, motivated by the availability of large video collections. This approach treats the recognition problem as the translation of visual elements to words. The correspondences between visual elements and words are learned using methods adapted from statistical machine translation, and are used to predict words for particular image regions (region naming), for entire images (auto-annotation), or to associate automatically generated speech transcript text with the correct video frames (video alignment). Experimental results are presented on the TRECVID 2004 data set, which consists of about 150 hours of news videos with manual annotations and speech transcript text. The results show that retrieval performance can be improved by associating visual and textual elements. In addition, an extensive analysis of features is provided and a method to combine features is proposed. © Springer-Verlag Berlin Heidelberg 2006.

Language: English
Keywords: Feature extraction; Image analysis; Multimedia systems; Speech recognition; Statistical methods; News videos; Statistical machine translation; Video collections; Video frames; Object recognition
Title: Recognizing objects and scenes in news videos
Type: Conference Paper
DOI: 10.1007/11788034_39
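The abstract's core idea, learning correspondences between visual elements ("blobs") and words with statistical machine translation, is commonly realized with IBM Model 1 style EM alignment. The sketch below is a minimal illustration under that assumption; the blob labels, word lists, and function names are toy placeholders, not the paper's actual features or code:

```python
# Minimal sketch (assumption: IBM Model 1 EM alignment, a standard
# statistical-MT method; blobs/words here are toy placeholders).
from collections import defaultdict

def train_model1(pairs, iters=10):
    """Learn translation probabilities t(word | blob) from a corpus of
    (blobs, words) image-annotation pairs via EM."""
    vocab = {w for _, words in pairs for w in words}
    # Uniform initialization over the word vocabulary.
    t = defaultdict(lambda: 1.0 / len(vocab))
    for _ in range(iters):
        count = defaultdict(float)  # expected co-occurrence counts c(w, b)
        total = defaultdict(float)  # marginal counts c(b)
        for blobs, words in pairs:
            for w in words:
                # Normalizer: total alignment mass for word w in this image.
                z = sum(t[(w, b)] for b in blobs)
                for b in blobs:
                    c = t[(w, b)] / z  # posterior that w aligns to blob b
                    count[(w, b)] += c
                    total[b] += c
        # M-step: renormalize expected counts into probabilities.
        for (w, b), c in count.items():
            t[(w, b)] = c / total[b]
    return t

def name_region(t, blob, candidate_words):
    """Region naming: predict the word with highest t(word | blob)."""
    return max(candidate_words, key=lambda w: t[(w, blob)])
```

A usage sketch: training on images annotated at the image level (no region labels) still lets EM resolve which blob each word refers to, e.g. `name_region(t, "b_sky", ["sky", "grass", "water"])` after training on images where `"b_sky"` co-occurs with `"sky"`. The same table `t` supports auto-annotation (rank words by summed probability over an image's blobs).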