
dc.contributor.author: Pan, J.-Y.
dc.contributor.author: Yang, H.-J.
dc.contributor.author: Faloutsos, C.
dc.contributor.author: Duygulu, Pınar
dc.coverage.spatial: Washington, DC, USA
dc.date.accessioned: 2016-02-08T11:54:22Z
dc.date.available: 2016-02-08T11:54:22Z
dc.date.issued: 2004
dc.identifier.issn: 2160-7508
dc.identifier.uri: http://hdl.handle.net/11693/27472
dc.description: Date of Conference: 27 June-2 July 2004
dc.description.abstract: Given an image, how do we automatically assign keywords to it? In this paper, we propose a novel, graph-based approach (GCap) which outperforms previously reported methods for automatic image captioning. Moreover, it is fast and scales well, with its training and testing time linear in the data set size. We report auto-captioning experiments on the "standard" Corel image database of 680 MBytes, where GCap outperforms recent, successful auto-captioning methods by up to 10 percentage points in captioning accuracy (50% relative improvement). © 2004 IEEE.
dc.language.iso: English
dc.source.title: 2004 Conference on Computer Vision and Pattern Recognition Workshop
dc.relation.isversionof: http://dx.doi.org/10.1109/CVPR.2004.353
dc.subject: Computer vision
dc.subject: Image retrieval
dc.subject: Pattern recognition
dc.subject: Statistical tests
dc.subject: Automatic image captioning
dc.subject: Corel image database
dc.subject: Data set size
dc.subject: Graph-based
dc.subject: Percentage points
dc.subject: Training and testing
dc.subject: Graphic methods
dc.title: GCap: Graph-based automatic image captioning
dc.type: Conference Paper
dc.department: Department of Computer Engineering
dc.identifier.doi: 10.1109/CVPR.2004.353
dc.publisher: IEEE

