Show simple item record

dc.contributor.author: Pan, J.-Y.
dc.contributor.author: Yang, H.-J.
dc.contributor.author: Duygulu, Pınar
dc.contributor.author: Faloutsos, C.
dc.coverage.spatial: The Grand Hotel, Taipei, Taiwan
dc.date.accessioned: 2016-02-08T11:53:05Z
dc.date.available: 2016-02-08T11:53:05Z
dc.date.issued: 2004
dc.identifier.uri: http://hdl.handle.net/11693/27427
dc.description: Date of Conference: June 27–30, 2004
dc.description.abstract: In this paper, we examine the problem of automatic image captioning. Given a training set of captioned images, we want to discover correlations between image features and keywords, so that we can automatically find good keywords for a new image. We experiment thoroughly with multiple design alternatives on large datasets of various content styles, and our proposed methods achieve up to a 45% relative improvement in captioning accuracy over the state of the art.
dc.language.iso: English
dc.source.title: Proceedings of the 2004 IEEE International Conference on Multimedia and Expo (ICME 2004)
dc.subject: Algorithms
dc.subject: Automation
dc.subject: Content-based retrieval
dc.subject: Database systems
dc.subject: Indexing (of information)
dc.subject: Semantics
dc.subject: Image captioning
dc.subject: Image databases
dc.subject: Latent semantic analysis (LSA)
dc.subject: Video indexing
dc.subject: Image processing
dc.title: Automatic image captioning
dc.type: Conference Paper
dc.department: Department of Computer Engineering
dc.citation.spage: 1987
dc.citation.epage: 1990
dc.citation.volumeNumber: 3
