Combining textual and visual information for semantic labeling of images and videos

Date

2008

Print ISSN

1611-2482

Publisher

Springer, Berlin, Heidelberg

Pages

205-225

Language

English

Series

Cognitive Technologies

Abstract

Semantic labeling of large volumes of image and video archives is difficult, if not impossible, with traditional supervised methods because of the huge amount of human effort required for manual labeling. Recently, semi-supervised techniques that make use of annotated image and video collections have been proposed as an alternative to reduce this effort. In this direction, different techniques, mostly adapted from the information retrieval literature, are applied to learn the unknown one-to-one associations between visual structures and semantic descriptions. Once these links are learned, the range of applications is wide, including improved retrieval and automatic annotation of images and videos, labeling of image regions as a form of large-scale object recognition, and association of names with faces as a form of large-scale face recognition. In this chapter, after reviewing and discussing a variety of related studies, we present two methods in detail: the so-called “translation approach”, which translates visual structures into semantic descriptors by adapting ideas from statistical machine translation, and a graph-based approach, which finds the densest component of a graph corresponding to the largest group of similar visual structures associated with a semantic description.
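
As a concrete illustration of the first method, the sketch below estimates word-given-blob translation probabilities with an IBM Model 1 style EM procedure, in the spirit of statistical machine translation applied to annotated images. The data layout (each image as a pair of discretized region labels, or "blobs", and keywords) and all names are illustrative assumptions, not the chapter's actual implementation.

from collections import defaultdict

def train_translation_probs(corpus, n_iters=20):
    # corpus: list of (blobs, words) pairs; blobs are discretized region
    # labels of one image, words are its annotation keywords.
    vocab = {w for _, words in corpus for w in words}
    t = defaultdict(lambda: 1.0 / len(vocab))  # t[(word, blob)] ~ p(word | blob)
    for _ in range(n_iters):
        count = defaultdict(float)  # expected (word, blob) co-occurrence counts
        total = defaultdict(float)  # per-blob normalizer
        for blobs, words in corpus:
            for w in words:
                # E-step: posterior that word w was "translated" from each blob
                z = sum(t[(w, b)] for b in blobs)
                for b in blobs:
                    c = t[(w, b)] / z
                    count[(w, b)] += c
                    total[b] += c
        # M-step: re-normalize expected counts into translation probabilities
        for (w, b), c in count.items():
            t[(w, b)] = c / total[b]
    return t

# Toy usage: annotate a new image by ranking words by their total
# translation probability from the image's blobs.
corpus = [(["blob_sky", "blob_grass"], ["sky", "grass"]),
          (["blob_sky", "blob_sea"], ["sky", "sea"])]
t = train_translation_probs(corpus)
new_blobs = ["blob_sky", "blob_grass"]
ranking = sorted({w for _, ws in corpus for w in ws},
                 key=lambda w: -sum(t[(w, b)] for b in new_blobs))
print(ranking)

The second method can be sketched with the standard greedy peeling heuristic for the densest subgraph problem: nodes are visual structures (e.g., detected faces or regions) drawn from items that share a semantic description, edge weights are visual similarities, and the densest component is taken as the group that truly corresponds to that description. The node representation and the symmetric similarity function assumed below are made up only for the example, not taken from the chapter.

def densest_component(nodes, weight):
    # Greedy peeling: repeatedly remove the node with minimum weighted degree
    # and keep the intermediate subgraph with the highest density
    # (total edge weight divided by number of nodes).
    # weight(u, v) is assumed to be a symmetric similarity score.
    remaining = set(nodes)
    degree = {u: sum(weight(u, v) for v in remaining if v != u) for u in remaining}
    total_weight = sum(degree.values()) / 2.0
    best, best_density = set(remaining), total_weight / len(remaining)
    while len(remaining) > 1:
        u = min(remaining, key=lambda x: degree[x])
        remaining.remove(u)
        total_weight -= degree[u]
        for v in remaining:
            degree[v] -= weight(u, v)
        density = total_weight / len(remaining)
        if density > best_density:
            best, best_density = set(remaining), density
    return best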

Book Title

Machine Learning Techniques for Multimedia
