Title: Leveraging large scale data for video retrieval
Author: Armağan, Anıl
Type: Thesis
Date issued: 2014
Date accessioned/available: 2016-01-08
URI: http://hdl.handle.net/11693/15992
Notes: Cataloged from PDF version of article. Includes bibliographical references (leaves 75-82).
Physical description: xiv, 82 leaves, illustrations, graphics
Language: English
Rights: info:eu-repo/semantics/openAccess
Keywords: Large Scale Video Retrieval; Multimedia Event Detection; Unusual Videos; Semantic Indexing
Subjects: Digital video; Multimedia systems
Call number: TK6680.5 .A75 2014
Record ID: B135346

Abstract:
The large amount of video data shared on the web has resulted in increased interest in retrieving videos using visual cues, since textual cues alone are not sufficient for satisfactory results. We address the problem of leveraging large scale image and video data to capture important characteristics of videos. We focus on three different problems, namely finding common patterns in unusual videos, large scale multimedia event detection, and semantic indexing of videos. Unusual events are important as possible indicators of undesired consequences. Discovery of unusual events in videos is generally attacked as a problem of finding usual patterns. With this challenging problem at hand, we propose a novel descriptor that encodes the rapid motions in videos using densely extracted trajectories. The proposed descriptor, trajectory snippet histograms, is used to distinguish unusual videos from usual videos, and is further exploited to discover the snapshots in which the unusualness happens. Next, we attack the Multimedia Event Detection (MED) task. We approach this problem by representing the videos in the form of prototypes, which correspond to models each describing a different visual characteristic of a video shot. Finally, we approach the Semantic Indexing (SIN) problem, and collect web images to train models for each concept.
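
The abstract mentions a trajectory snippet histogram descriptor built from densely extracted trajectories. The thesis text is not reproduced in this record, so the following Python sketch only illustrates the general idea of histogramming frame-to-frame displacement magnitudes over short snippets of a trajectory; the snippet length, bin count, and magnitude range are illustrative placeholders, not the descriptor's actual parameters.

```python
import numpy as np

def snippet_motion_histogram(trajectory, snippet_len=5, n_bins=8, max_disp=20.0):
    """Illustrative descriptor: histograms of per-frame displacement magnitudes
    computed over short snippets of a single dense trajectory.

    `trajectory` is an (T, 2) array of (x, y) point positions across T frames.
    All parameter values here are placeholders, not the thesis's settings.
    """
    traj = np.asarray(trajectory, dtype=float)
    disps = np.linalg.norm(np.diff(traj, axis=0), axis=1)  # frame-to-frame motion magnitude
    histograms = []
    for start in range(0, len(disps) - snippet_len + 1, snippet_len):
        snippet = disps[start:start + snippet_len]
        hist, _ = np.histogram(snippet, bins=n_bins, range=(0.0, max_disp))
        histograms.append(hist / (hist.sum() + 1e-8))       # L1-normalize each snippet
    return np.concatenate(histograms) if histograms else np.zeros(n_bins)

# Example: a synthetic trajectory with a sudden rapid motion in its second half.
t = np.linspace(0, 1, 30)
traj = np.stack([10 * t, np.where(t > 0.5, 15.0, 1.0) * t], axis=1)
print(snippet_motion_histogram(traj).shape)
```

Rapid motions show up as mass in the high-magnitude bins, which is what lets such a descriptor separate unusual from usual videos in the way the abstract describes.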
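
For the MED part, one common realization of "representing the videos in the form of prototypes" is a bag-of-prototypes histogram over shot-level features. The sketch below assumes shot descriptors are already extracted, uses k-means cluster centers as stand-in prototypes, and trains a linear SVM per event; these specific choices are assumptions, not details confirmed by the record.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

# Hypothetical shot-level features: each video is a variable-length list of
# per-shot descriptors (e.g., pooled appearance/motion features).
rng = np.random.default_rng(0)
videos = [rng.normal(size=(rng.integers(5, 15), 64)) for _ in range(40)]
labels = rng.integers(0, 2, size=40)            # 1 = event present, 0 = background

# Learn "prototypes" as cluster centers over all shots, then represent each
# video as a normalized histogram of its shots' nearest prototypes.
all_shots = np.vstack(videos)
kmeans = KMeans(n_clusters=16, n_init=10, random_state=0).fit(all_shots)

def prototype_histogram(shots):
    assignments = kmeans.predict(shots)
    hist = np.bincount(assignments, minlength=kmeans.n_clusters).astype(float)
    return hist / hist.sum()

X = np.vstack([prototype_histogram(v) for v in videos])
clf = LinearSVC(C=1.0).fit(X, labels)           # one such detector per event class
print(clf.score(X, labels))
```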
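
For semantic indexing, one straightforward reading of "collect web images to train models for each concept" is a per-concept binary classifier trained on features of web images retrieved for that concept versus images of other concepts, then applied to video keyframes. The sketch below assumes precomputed image features, hypothetical concept names, max-pooling over keyframes, and a logistic-regression classifier; all of these are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical setup: positives for a concept come from web images retrieved
# for that concept; negatives come from the other concepts' images.
rng = np.random.default_rng(1)
concepts = ["airplane", "beach", "dancing"]
web_feats = {c: rng.normal(loc=i, size=(100, 128)) for i, c in enumerate(concepts)}

concept_models = {}
for concept in concepts:
    pos = web_feats[concept]
    neg = np.vstack([f for c, f in web_feats.items() if c != concept])
    X = np.vstack([pos, neg])
    y = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
    concept_models[concept] = LogisticRegression(max_iter=1000).fit(X, y)

# Indexing a video: score each keyframe feature and max-pool per concept.
keyframes = rng.normal(loc=1.0, size=(20, 128))   # placeholder keyframe features
scores = {c: m.predict_proba(keyframes)[:, 1].max() for c, m in concept_models.items()}
print(scores)
```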