Activity recognition invariant to position and orientation of wearable motion sensor units
Özaktaş, Billur Barshan
We propose techniques that achieve invariance to the placement of wearable motion sensor units in the context of human activity recognition. First, we focus on invariance to sensor unit orientation and develop three alternative transformations to remove from the raw sensor data the effect of the orientation at which the sensor unit is placed. The first two orientation-invariant transformations rely on the geometry of the measurements, whereas the third is based on estimating the orientations of the sensor units with respect to the Earth frame by exploiting the physical properties of the sensory data. We test them with multiple state-of-the-art machine-learning classifiers using five publicly available datasets (when applicable) containing various types of activities acquired with different sensor configurations. We show that the proposed methods achieve accuracy comparable to that of the reference system in which the units are correctly oriented, whereas the standard system cannot handle incorrectly oriented sensors. We also propose a novel non-iterative technique for estimating the orientations of the sensor units based on the physical and geometrical properties of the sensor data, which improves the accuracy of the third orientation-invariant transformation. All three transformations can be integrated into the pre-processing stage of existing wearable systems with little effort, since we make no assumptions about the sensor configuration, the body movements, or the classification methodology. Second, we develop techniques that achieve invariance to the positioning of the sensor units in three ways: (1) We propose transformations that are applied to the sensor data to allow each unit to be placed at any position within a pre-determined body part. (2) We propose a transformation technique that allows the units to be interchanged, so that the user does not need to distinguish between them before positioning.
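As a minimal illustration of the geometric idea behind the first two orientation-invariant transformations (a hedged sketch, not the thesis's exact method): quantities such as per-sample vector magnitudes and dot products between consecutive samples are unchanged by any fixed rotation of the sensor frame, since |Rv| = |v| and (Rv_t)·(Rv_{t+1}) = v_t·v_{t+1} for a rotation matrix R.

```python
import numpy as np

def orientation_invariant_features(samples):
    """Map a T x 3 sequence of 3-axis sensor vectors to features that are
    unchanged by any fixed rotation of the sensor frame.

    Per-sample magnitudes satisfy |R v| = |v|, and dot products between
    consecutive samples satisfy (R v_t) . (R v_{t+1}) = v_t . v_{t+1},
    so both are invariant to a constant rotation R of the unit.
    """
    samples = np.asarray(samples, dtype=float)
    magnitudes = np.linalg.norm(samples, axis=1)       # T values
    dots = np.sum(samples[:-1] * samples[1:], axis=1)  # T-1 values
    return magnitudes, dots
```

Any classifier fed such features sees identical inputs regardless of the orientation at which the unit was attached, which is the property the transformations above aim for.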
(3) We employ three different techniques to classify the activities based on a single sensor unit, whereas the training set may contain data acquired by multiple units placed at different positions. We combine (1) with (2) and also with (3) to achieve further robustness to sensor unit positioning. We evaluate our techniques on a publicly available dataset using seven state-of-the-art classifiers and show that the reduction in accuracy is acceptable, considering the flexibility, convenience, and unobtrusiveness gained in the positioning of the units. Finally, we combine the position- and orientation-invariant techniques to achieve both simultaneously. The accuracy values are much higher than those of random decision making, although some of them are significantly lower than those of the reference system with correctly placed units. The trade-off between flexibility in sensor unit placement and classification accuracy indicates that different approaches may be suitable for different applications.
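Item (3) can be illustrated with a toy, hypothetical sketch (not the thesis's actual classifiers): training windows from several unit positions are pooled, summarized by simple position-agnostic features, and a new window from a single unit of unknown position is classified against per-activity centroids.

```python
import numpy as np

def window_features(window):
    """Simple position-agnostic summary of one T x 3 window:
    mean and standard deviation of the per-sample magnitude."""
    mags = np.linalg.norm(np.asarray(window, dtype=float), axis=1)
    return np.array([mags.mean(), mags.std()])

def fit_centroids(windows, labels):
    """Nearest-centroid classifier: one centroid per activity label,
    computed over training windows pooled from all sensor positions."""
    feats = np.stack([window_features(w) for w in windows])
    labels = np.asarray(labels)
    return {c: feats[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict(centroids, window):
    """Assign a single-unit test window to the nearest activity centroid."""
    f = window_features(window)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))
```

The nearest-centroid rule stands in for the seven state-of-the-art classifiers mentioned above; the point of the sketch is only the pooling of multi-position training data against a single-unit test window.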
Human activity recognition
Embargo Lift Date: 2019-10-29
Showing items related by title, author, creator and subject.
Erden, F.; Bingol, A. S.; Cetin, A. E. (IEEE Computer Society, 2014) In this paper, a hand gesture detection and classification system using two differential Pyro-electric Infrared (PIR) sensors and a camera is introduced. Motion presence is investigated in the area of interest using two ...
Yazar, A.; Enis Çetin, A. (2013) Intelligent ambient assisted living systems for elderly and handicapped people become affordable with the recent advances in computer and sensor technologies. In this paper, a fall detection algorithm using multiple passive ...
Gholami, M. R.; Gezici, S.; Rydström, M.; Ström, E. G. (2010) The problem of positioning a target node is studied for wireless sensor networks with cooperative active and passive sensors. Two-way time-of-arrival and time-difference-of-arrival measurements made by both active and ...