Show simple item record

dc.contributor.advisor: Şahin, Pınar Duygulu
dc.contributor.author: İşcen, Ahmet
dc.date.accessioned: 2016-01-08T18:28:07Z
dc.date.available: 2016-01-08T18:28:07Z
dc.date.issued: 2014
dc.identifier.uri: http://hdl.handle.net/11693/15983
dc.description: Ankara : The Department of Computer Engineering and the Graduate School of Engineering and Science of Bilkent University, 2014. [en_US]
dc.description: Thesis (Master's) -- Bilkent University, 2014. [en_US]
dc.description: Includes bibliographical references (leaves 47-51). [en_US]
dc.description.abstract: Although understanding and analyzing human actions is a popular research topic in computer vision, most of the research has focused on recognizing "ordinary" actions, such as walking and jumping. Extending these methods to more specific domains, such as assistive technologies, is not a trivial task. In most cases, these applications contain more fine-grained activities with low inter-class variance and high intra-class variance. In this thesis, we propose to use motion information from snippets, or small video intervals, in order to recognize actions from daily activities. The proposed method encodes the motion by considering motion statistics, such as the variance and the length of trajectories. It also encodes position information by using a spatial grid. We show that such an approach is especially helpful for the domain of medical device usage, which contains actions with fast movements. Another contribution we propose is to model the sequential information of actions by the order in which they occur. This is especially useful for fine-grained activities, such as cooking activities, where the visual information may not be enough to distinguish between different actions. As for the visual perspective of the problem, we propose to combine multiple visual descriptors by weighting their confidence values. Our experiments show that the temporal sequence model and the fusion of multiple descriptors, when used together, significantly improve performance. [en_US]
dc.description.statementofresponsibility: İşcen, Ahmet [en_US]
dc.format.extent: xi, 51 leaves, illustrations [en_US]
dc.language.iso: English [en_US]
dc.rights: info:eu-repo/semantics/openAccess [en_US]
dc.subject: Assistive [en_US]
dc.subject: Living [en_US]
dc.subject: Systems [en_US]
dc.subject: Action [en_US]
dc.subject: Activity [en_US]
dc.subject: Recognition [en_US]
dc.subject.lcc: TK7882.P7 I83 2014 [en_US]
dc.subject.lcsh: Human activity recognition. [en_US]
dc.subject.lcsh: Image analysis. [en_US]
dc.subject.lcsh: Image processing. [en_US]
dc.subject.lcsh: Image processing--Digital techniques. [en_US]
dc.title: Activity analysis for assistive systems [en_US]
dc.type: Thesis [en_US]
dc.department: Department of Computer Engineering [en_US]
dc.publisher: Bilkent University [en_US]
dc.description.degree: M.S. [en_US]
dc.identifier.itemid: B147911

