Show simple item record

dc.contributor.advisor: Aksoy, Selim
dc.contributor.author: Yalçınkaya, Özge
dc.date.accessioned: 2016-07-28T06:39:36Z
dc.date.available: 2016-07-28T06:39:36Z
dc.date.copyright: 2016-06
dc.date.issued: 2016-06
dc.date.submitted: 2016-07-25
dc.identifier.uri: http://hdl.handle.net/11693/30163
dc.description: Cataloged from PDF version of article. (en_US)
dc.description: Thesis (M.S.): İhsan Doğramacı Bilkent University, Department of Computer Engineering, 2016. (en_US)
dc.description: Includes bibliographical references (leaves 45-51). (en_US)
dc.description.abstract: Recognition of actions from videos is a widely studied problem, and many solutions have been introduced over the years. Labeling the training data required for classification has been an important bottleneck for the scalability of these methods. On the other hand, utilizing large amounts of weakly-labeled web data remains a challenge due to the noisy content of the videos. In this study, we tackle the problem of eliminating irrelevant videos by pruning the collection and discovering its most representative elements. Motivated by the success of methods that discover discriminative parts for image classification, we propose a novel video representation method based on selected distinctive exemplars. We call these discriminative exemplars "prototypes"; they are chosen from each action class separately to be representative of the class of interest. We then use these prototypes to describe the entire dataset. Following traditional supervised classification methods and utilizing available state-of-the-art low-level and deep features, we show that even with simple selection and representation methods, the use of prototypes can increase recognition performance. Moreover, by reducing the training data to the selected prototypes only, we show that a smaller number of carefully selected examples can achieve the performance of a larger training set. In addition to prototypes, we explore the effect of irrelevant-data elimination on action recognition and report experimental results that are comparable to or better than state-of-the-art studies on the benchmark video datasets UCF-101 and ActivityNet. (en_US)
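The abstract describes selecting representative "prototype" exemplars per action class and then describing every video by reference to those prototypes. A minimal sketch of that idea, assuming a simple nearest-to-class-mean selection rule and negative Euclidean distance as the similarity measure (the thesis's actual discriminative selection criterion is not specified in this record, so these choices are illustrative stand-ins):

```python
import numpy as np

def select_prototypes(features, labels, k):
    """Pick k prototypes per class. Here we take the k examples
    closest to the class mean; the thesis uses a discriminative
    selection instead, not detailed in this record."""
    prototypes = []
    for c in np.unique(labels):
        class_feats = features[labels == c]
        mean = class_feats.mean(axis=0)
        dists = np.linalg.norm(class_feats - mean, axis=1)
        nearest = np.argsort(dists)[:k]
        prototypes.append(class_feats[nearest])
    return np.vstack(prototypes)

def prototype_representation(features, prototypes):
    """Describe each video by its similarity to every prototype,
    using negative Euclidean distance as a simple similarity."""
    diffs = features[:, None, :] - prototypes[None, :, :]
    return -np.linalg.norm(diffs, axis=2)
```

With n videos and k prototypes per class over c classes, the representation is an n-by-(k*c) matrix that can be fed to any standard supervised classifier.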
dc.description.statementofresponsibility: by Özge Yalçınkaya. (en_US)
dc.format.extent: xiii, 51 leaves : charts. (en_US)
dc.language.iso: English (en_US)
dc.rights: info:eu-repo/semantics/openAccess (en_US)
dc.subject: Action recognition (en_US)
dc.subject: Weakly-labeled data (en_US)
dc.subject: Discriminative exemplars (en_US)
dc.subject: Video representation (en_US)
dc.subject: Iterative noisy data elimination (en_US)
dc.subject: Feature learning (en_US)
dc.title: Prototypes : exemplar based video representation (en_US)
dc.title.alternative: Prototipler : örnek tabanlı video temsili (en_US)
dc.type: Thesis (en_US)
dc.department: Department of Computer Engineering (en_US)
dc.publisher: Bilkent University (en_US)
dc.description.degree: M.S. (en_US)
dc.identifier.itemid: B153683

