Authors: Duygulu, P.; Baştan, M.; Forsyth, D.
Date accessioned: 2019-02-11
Date available: 2019-02-11
Date issued: 2006
ISSN: 0302-9743
URI: http://hdl.handle.net/11693/49255
Abstract: We present a new approach to the object recognition problem, motivated by the recent availability of large annotated image and video collections. This approach treats object recognition as the translation of visual elements to words, analogous to the translation of text from one language to another. The visual elements, represented in feature space, are quantized into a finite set of blobs. The correspondences between the blobs and the words are learned using a method adapted from statistical machine translation. Once learned, these correspondences can be used to predict words for particular image regions (region naming), to predict words associated with entire images (auto-annotation), or to associate speech transcript text with the correct video frames (video alignment). We present our results on the Corel data set, which consists of annotated images, and on the TRECVID 2004 data set, which consists of video frames associated with speech transcript text and manual annotations.
Language: English
Keywords: Machine translation; Automatic speech recognition; News video; Statistical machine translation; Correspondence problem
Title: Translating images to words for recognizing objects in large image and video collections
Type: Article
DOI: 10.1007/11957959_14
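
The abstract does not spell out how the blob-word correspondences are estimated; one standard way to instantiate the SMT-based learning it describes is an IBM Model 1 style EM over quantized region labels ("blobs") and annotation words. The sketch below is a minimal illustration under that assumption; the corpus format, function name, and toy data are hypothetical, not taken from the paper.

from collections import defaultdict

def train_blob_word_translation(corpus, n_iter=10):
    """IBM Model 1 style EM over (blobs, words) pairs.

    corpus: list of (blobs, words), where blobs are quantized region labels
    for one image and words are its annotation keywords.
    Returns t[word][blob] = p(word | blob).
    """
    blob_vocab = {b for blobs, _ in corpus for b in blobs}
    # Uniform initialization of translation probabilities.
    t = defaultdict(lambda: defaultdict(lambda: 1.0 / len(blob_vocab)))
    for _ in range(n_iter):
        count = defaultdict(lambda: defaultdict(float))
        total = defaultdict(float)
        # E-step: distribute each annotation word's mass over its image's blobs.
        for blobs, words in corpus:
            for w in words:
                z = sum(t[w][b] for b in blobs)
                for b in blobs:
                    c = t[w][b] / z
                    count[w][b] += c
                    total[b] += c
        # M-step: renormalize expected counts into conditional probabilities.
        for w in count:
            for b in count[w]:
                t[w][b] = count[w][b] / total[b]
    return t

# Toy usage: two annotated images, each a (blob tokens, keywords) pair.
corpus = [(["b1", "b2"], ["sky", "grass"]),
          (["b1", "b3"], ["sky", "tiger"])]
t = train_blob_word_translation(corpus)

# Region naming: pick the most probable word for a given blob.
scores = {w: t[w]["b1"] for w in t}
print(max(scores, key=scores.get))  # expected: "sky", since it co-occurs with b1 in both images

The same table t supports the other uses named in the abstract: auto-annotation can rank words by aggregating p(word | blob) over all blobs in an image, and video alignment can score how well a transcript segment's words match the blobs of candidate frames.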