Activity analysis for assistive systems
Author
İşcen, Ahmet
Advisor
Şahin, Pınar Duygulu
Date
2014
Publisher
Bilkent University
Language
English
Type
Thesis
Abstract
Although understanding and analyzing human actions is a popular research topic in
computer vision, most of the research has focused on recognizing "ordinary" actions,
such as walking and jumping. Extending these methods to more specific domains,
such as assistive technologies, is not a trivial task. In most cases, these applications
contain more fine-grained activities with low inter-class variance and high intra-class
variance.
In this thesis, we propose to use motion information from snippets, or small video
intervals, in order to recognize actions in daily activities. The proposed method encodes
motion by considering motion statistics, such as the variance and the length of
trajectories. It also encodes position information using a spatial grid. We show
that such an approach is especially helpful in the domain of medical device usage, which
contains actions with fast movements.
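As a rough illustration of this kind of snippet encoding, the sketch below computes trajectory-level motion statistics (variance and length of displacements) together with a coarse spatial-grid histogram of where the trajectories lie. The function name, the trajectory layout, and the grid size are illustrative assumptions, not the thesis' actual implementation.

import numpy as np

def encode_snippet(trajectories, frame_shape, grid=(4, 4)):
    """Encode the motion of one snippet (a short video interval).

    `trajectories` is assumed to be a list of (T, 2) arrays of tracked
    (x, y) point coordinates inside the snippet.
    """
    h, w = frame_shape
    cells = np.zeros(grid)                # spatial grid: where motion occurs
    variances, lengths = [], []

    for traj in trajectories:
        traj = np.asarray(traj, dtype=float)
        steps = np.diff(traj, axis=0)     # frame-to-frame displacements
        lengths.append(np.linalg.norm(steps, axis=1).sum())  # trajectory length
        variances.append(steps.var())                        # motion variance

        # accumulate the trajectory's mean position into the spatial grid
        x, y = traj.mean(axis=0)
        gy = min(int(y / h * grid[0]), grid[0] - 1)
        gx = min(int(x / w * grid[1]), grid[1] - 1)
        cells[gy, gx] += 1

    stats = (np.array([np.mean(variances), np.mean(lengths)])
             if trajectories else np.zeros(2))
    return np.concatenate([stats, cells.ravel() / max(cells.sum(), 1)])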
Another contribution is to model the sequential information of actions through
the order in which they occur. This is especially useful for fine-grained
activities, such as cooking, where the visual information alone may not be enough
to distinguish between different actions. On the visual side of the problem,
we propose to combine multiple visual descriptors by weighting their confidence values.
Our experiments show that the temporal sequence model and the fusion of multiple
descriptors significantly improve performance when used together.
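For the descriptor fusion, a minimal late-fusion sketch is given below: per-descriptor classifier confidences are combined with a weighted sum and the fused scores are used for the final decision. The descriptor names and weights are hypothetical placeholders, and this generic weighted sum is only one possible instantiation of the confidence-weighting idea described above.

import numpy as np

def fuse_confidences(scores, weights):
    """Late fusion of per-descriptor classifier confidences.

    `scores` maps a descriptor name to an (n_samples, n_classes) array of
    confidence values; `weights` maps the same names to scalar weights.
    """
    names = list(scores)
    fused = sum(weights[n] * np.asarray(scores[n], dtype=float) for n in names)
    fused /= sum(weights[n] for n in names)   # normalize by total weight
    return fused.argmax(axis=1)               # predicted class per sample

# Example with two hypothetical descriptors (motion and appearance):
# preds = fuse_confidences({"motion": s_motion, "appearance": s_app},
#                          {"motion": 0.6, "appearance": 0.4})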