Browsing by Subject "Automatic image captioning"
Now showing 1 - 2 of 2
Item Open Access
Automatic multimedia cross-modal correlation discovery (ACM, 2004-08)
Pan, J.-Y.; Yang, H.-J.; Faloutsos, C.; Duygulu, Pınar
Given an image (or video clip, or audio song), how do we automatically assign keywords to it? The general problem is to find correlations across the media in a collection of multimedia objects such as video clips, with colors, and/or motion, and/or audio, and/or text scripts. We propose a novel, graph-based approach, "MMG", to discover such cross-modal correlations. Our "MMG" method requires no tuning, no clustering, and no user-determined constants; it can be applied to any multimedia collection, as long as we have a similarity function for each medium; and it scales linearly with the database size. We report auto-captioning experiments on the "standard" Corel image database of 680 MB, where it outperforms domain-specific, fine-tuned methods by up to 10 percentage points in captioning accuracy (a 50% relative improvement).

Item Open Access
GCap: Graph-based automatic image captioning (IEEE, 2004)
Pan, J.-Y.; Yang, H.-J.; Faloutsos, C.; Duygulu, Pınar
Given an image, how do we automatically assign keywords to it? In this paper, we propose a novel, graph-based approach (GCap) which outperforms previously reported methods for automatic image captioning. Moreover, it is fast and scales well, with its training and testing time linear in the data set size. We report auto-captioning experiments on the "standard" Corel image database of 680 MBytes, where GCap outperforms recent, successful auto-captioning methods by up to 10 percentage points in captioning accuracy (a 50% relative improvement). © 2004 IEEE.
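The abstracts above describe a graph-based captioning approach but do not spell out the algorithm. The sketch below is a minimal illustration of one plausible reading: a graph linking image nodes (via a per-medium similarity function) to caption-word nodes, queried with a random walk with restart from an uncaptioned image. The node layout, cosine similarity, the neighbour count k, and the restart probability are illustrative assumptions, not details taken from the listed papers.

```python
import numpy as np

def build_graph(train_features, train_captions, new_feature, k=3):
    """Build an undirected graph over image nodes and caption-word nodes.

    Nodes 0..n_img-1 are images (the last image node is the uncaptioned
    query). Each image links to its k most similar images (cosine
    similarity stands in for the per-medium similarity function) and to
    the words in its caption.
    """
    feats = np.vstack([train_features, new_feature])
    n_img = feats.shape[0]
    words = sorted({w for cap in train_captions for w in cap})
    word_idx = {w: n_img + i for i, w in enumerate(words)}
    A = np.zeros((n_img + len(words), n_img + len(words)))

    # Image-image edges: k nearest neighbours by cosine similarity.
    norm = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = norm @ norm.T
    for i in range(n_img):
        for j in np.argsort(-sim[i])[1:k + 1]:
            A[i, j] = A[j, i] = 1.0

    # Image-word edges for the captioned training images.
    for i, cap in enumerate(train_captions):
        for w in cap:
            A[i, word_idx[w]] = A[word_idx[w], i] = 1.0

    return A, words, n_img

def caption(A, words, n_img, query, restart=0.65, iters=100, top=5):
    """Random walk with restart from the query image node; rank word nodes."""
    P = A / np.maximum(A.sum(axis=0, keepdims=True), 1e-12)  # column-stochastic
    e = np.zeros(A.shape[0])
    e[query] = 1.0
    u = e.copy()
    for _ in range(iters):
        u = (1 - restart) * (P @ u) + restart * e
    scores = u[n_img:]  # steady-state probability mass on word nodes
    return [words[i] for i in np.argsort(-scores)[:top]]

# Toy usage: random feature vectors stand in for colour/texture features.
rng = np.random.default_rng(0)
train_feats = rng.random((6, 8))
train_caps = [["sky", "sea"], ["sky", "sun"], ["grass", "cow"],
              ["grass", "sky"], ["sea", "boat"], ["sun", "sea"]]
new_feat = rng.random(8)
A, words, n_img = build_graph(train_feats, train_caps, new_feat)
print(caption(A, words, n_img, query=n_img - 1))
```

This keeps the properties the abstracts emphasise: the only medium-specific ingredient is a similarity function, there is no clustering or training loop, and the graph and walk both grow linearly with the number of objects for a fixed k.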