Browsing by Author "Pan J.-Y."
Now showing 1 - 3 of 3
- Automatic image captioning (2004). Pan J.-Y.; Yang H.-J.; Duygulu, Pınar; Faloutsos, C. [Open Access]
  In this paper, we examine the problem of automatic image captioning. Given a training set of captioned images, we want to discover correlations between image features and keywords, so that we can automatically find good keywords for a new image. We experiment thoroughly with multiple design alternatives on large datasets of various content styles, and our proposed methods achieve up to a 45% relative improvement in captioning accuracy over the state of the art.

- GCap: Graph-based automatic image captioning (IEEE, 2004). Pan J.-Y.; Yang H.-J.; Faloutsos, C.; Duygulu, Pınar [Open Access]
  Given an image, how do we automatically assign keywords to it? In this paper, we propose a novel graph-based approach (GCap) that outperforms previously reported methods for automatic image captioning. Moreover, it is fast and scales well: its training and testing times are linear in the data set size. We report auto-captioning experiments on the "standard" Corel image database of 680 MBytes, where GCap outperforms recent, successful auto-captioning methods by up to 10 percentage points in captioning accuracy (a 50% relative improvement). © 2004 IEEE. (A minimal, hypothetical sketch of graph-based caption ranking appears after this list.)

- Towards auto-documentary: Tracking the evolution of news stories (ACM, 2004). Duygulu, Pınar; Pan J.-Y.; Forsyth, D. A. [Open Access]
  News videos constitute an important source of information for tracking and documenting important events. In these videos, news stories are often accompanied by short video shots that tend to be repeated during the course of the event. Automatic detection of such repetitions is essential for creating auto-documentaries and for alleviating the limitations of traditional textual topic detection methods. In this paper, we propose novel methods for detecting and tracking the evolution of news stories over time. The proposed method exploits both visual cues and textual information to summarize evolving news stories. Experiments are carried out on the TREC-VID data set, consisting of 120 hours of news videos from two different channels.
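
To illustrate the flavor of graph-based captioning described in the GCap abstract, the sketch below ranks caption words for a new image by running a random walk with restart over a small mixed-media graph. This is a minimal, hypothetical example: the node types, toy adjacency data, restart probability, and function names are assumptions for illustration, not the paper's actual construction or parameters.

```python
# Minimal sketch: ranking caption words for a new image with a random walk with
# restart (RWR) on a graph mixing image, region, and word nodes. All structure
# and numbers here are illustrative assumptions, not taken from the GCap paper.
import numpy as np

def random_walk_with_restart(adjacency, restart_idx, restart_prob=0.65, tol=1e-8):
    """Return steady-state visiting probabilities for a walk restarting at restart_idx."""
    # Column-normalize so each column of the transition matrix sums to 1.
    col_sums = adjacency.sum(axis=0)
    col_sums[col_sums == 0] = 1.0
    transition = adjacency / col_sums

    n = adjacency.shape[0]
    restart = np.zeros(n)
    restart[restart_idx] = 1.0

    scores = np.full(n, 1.0 / n)
    while True:
        new_scores = (1 - restart_prob) * (transition @ scores) + restart_prob * restart
        if np.abs(new_scores - scores).sum() < tol:
            return new_scores
        scores = new_scores

# Toy graph: nodes 0-1 are images, 2-4 are region features, 5-7 are caption words.
# Edges link images to their regions, and training images to their known words.
edges = [(0, 2), (0, 3), (0, 5), (0, 6),   # training image 0 with words 5 and 6
         (1, 3), (1, 4)]                   # new image 1: regions only, no words yet
A = np.zeros((8, 8))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# Restart at the new image; word nodes reachable through shared regions score higher.
scores = random_walk_with_restart(A, restart_idx=1)
word_nodes = [5, 6, 7]
print("word nodes ranked for image 1:", sorted(word_nodes, key=lambda w: scores[w], reverse=True))
```

Because each iteration touches only the graph's edges, this style of scoring scales roughly linearly with data set size, which is consistent with the abstract's claim that training and testing times are linear.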