Multimedia translation for linking visual data to semantics in videos
dc.citation.epage | 115 | en_US |
dc.citation.issueNumber | 1 | en_US |
dc.citation.spage | 99 | en_US |
dc.citation.volumeNumber | 22 | en_US |
dc.contributor.author | Duygulu, P. | en_US |
dc.contributor.author | Baştan, M. | en_US |
dc.date.accessioned | 2016-02-08T09:54:52Z | |
dc.date.available | 2016-02-08T09:54:52Z | |
dc.date.issued | 2011-01 | en_US |
dc.department | Department of Computer Engineering | en_US |
dc.description.abstract | The semantic gap problem, which can be described as the disconnection between low-level multimedia data and high-level semantics, is an important obstacle to building real-world multimedia systems. Recently developed methods that use large volumes of loosely labeled data for automatic image annotation stand as promising approaches toward solving this problem. In this paper, we are interested in how some of these methods can be applied to semantic gap problems that appear in application domains beyond image annotation. Specifically, we introduce new problems that appear in videos, such as linking keyframes with speech transcript text and linking faces with names. In a common framework, we formulate these problems as finding missing correspondences between visual and semantic data and apply the multimedia translation method. We evaluate the performance of the multimedia translation method on these problems and compare it against other auto-annotation and classifier-based methods. The experiments, carried out on over 300 h of news videos from the TRECVid 2004 and TRECVid 2006 corpora, show that the multimedia translation method performs comparably to other auto-annotation methods and outperforms classifier-based methods. © 2009 Springer-Verlag. | en_US |
dc.description.provenance | Made available in DSpace on 2016-02-08T09:54:52Z (GMT). No. of bitstreams: 1 bilkent-research-paper.pdf: 70227 bytes, checksum: 26e812c6f5156f83f0e77b261a471b5a (MD5) Previous issue date: 2011 | en |
dc.identifier.doi | 10.1007/s00138-009-0217-8 | en_US |
dc.identifier.issn | 0932-8092 | |
dc.identifier.uri | http://hdl.handle.net/11693/22054 | |
dc.language.iso | English | en_US |
dc.publisher | Springer | en_US |
dc.relation.isversionof | http://dx.doi.org/10.1007/s00138-009-0217-8 | en_US |
dc.source.title | Machine Vision and Applications: An International Journal | en_US |
dc.subject | Machine translation | en_US |
dc.subject | Automatic speech recognition | en_US |
dc.subject | Visual data | en_US |
dc.subject | Annotation of visual content | en_US |
dc.title | Multimedia translation for linking visual data to semantics in videos | en_US |
dc.type | Article | en_US |
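The abstract frames keyframe-to-transcript and face-to-name linking as learning missing correspondences between visual tokens and words, solved with a translation approach in the style of statistical machine translation. As a rough illustration only (not the paper's implementation), the following sketch learns p(word | visual token) from loosely paired data with an IBM Model 1-style EM loop; the blob/word names are hypothetical toy data.

```python
from collections import defaultdict

def train_translation(pairs, iters=20):
    """Learn translation probabilities t[blob][word] via EM from
    co-occurring (blob set, word set) pairs, e.g. keyframe region
    tokens paired with speech-transcript words."""
    blobs = {b for bs, _ in pairs for b in bs}
    words = {w for _, ws in pairs for w in ws}
    # start from a uniform translation table
    t = {b: {w: 1.0 / len(words) for w in words} for b in blobs}
    for _ in range(iters):
        count = defaultdict(lambda: defaultdict(float))
        total = defaultdict(float)
        # E-step: distribute each word's count over the blobs it co-occurs with
        for bs, ws in pairs:
            for w in ws:
                z = sum(t[b][w] for b in bs)  # normaliser over candidate blobs
                for b in bs:
                    c = t[b][w] / z
                    count[b][w] += c
                    total[b] += c
        # M-step: renormalise counts into probabilities
        for b in blobs:
            for w in words:
                t[b][w] = count[b][w] / total[b] if total[b] else 0.0
    return t

def annotate(t, blob):
    """Translate a visual token to its most probable word."""
    return max(t[blob], key=t[blob].get)
```

With toy pairs such as `[({"b_sky", "b_sea"}, {"sky", "sea"}), ({"b_sky"}, {"sky"}), ({"b_sea"}, {"sea"})]`, the singleton documents disambiguate the ambiguous first pair, and `annotate(t, "b_sky")` resolves to `"sky"`. The same correspondence idea extends to the paper's face/name linking by treating detected faces as the visual tokens and transcript names as the words.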
Files
Original bundle
- Name:
- Multimedia translation for linking visual data to semantics in videos.pdf
- Size:
- 1.45 MB
- Format:
- Adobe Portable Document Format
- Description:
- Full printable version