Browsing by Subject "Wearable sensing"
Now showing 1 - 6 of 6
Item Open Access
Activity recognition invariant to position and orientation of wearable motion sensor units (2019-04)
Yurtman, Aras

We propose techniques that achieve invariance to the placement of wearable motion sensor units in the context of human activity recognition. First, we focus on invariance to sensor unit orientation and develop three alternative transformations that remove from the raw sensor data the effect of the orientation at which the sensor unit is placed. The first two orientation-invariant transformations rely on the geometry of the measurements, whereas the third estimates the orientations of the sensor units with respect to the Earth frame by exploiting the physical properties of the sensory data. We test them with multiple state-of-the-art machine-learning classifiers using five publicly available datasets (when applicable) containing various types of activities acquired with different sensor configurations. We show that the proposed methods achieve accuracy similar to that of a reference system in which the units are correctly oriented, whereas the standard system cannot handle incorrectly oriented sensors. We also propose a novel non-iterative technique for estimating the orientations of the sensor units, based on the physical and geometrical properties of the sensor data, to improve the accuracy of the third orientation-invariant transformation. All three transformations can be integrated into the pre-processing stage of existing wearable systems with little effort, since we make no assumptions about the sensor configuration, the body movements, or the classification methodology. Second, we develop techniques that achieve invariance to the positioning of the sensor units in three ways: (1) we propose transformations applied to the sensory data that allow each unit to be placed at any position within a pre-determined body part; (2) we propose a transformation technique that allows the units to be interchanged, so that the user does not need to distinguish between them before positioning; and (3) we employ three different techniques to classify the activities based on a single sensor unit, where the training set may contain data acquired by multiple units placed at different positions. We combine (1) with (2), and also with (3), to achieve further robustness to sensor unit positioning. We evaluate our techniques on a publicly available dataset using seven state-of-the-art classifiers and show that the reduction in accuracy is acceptable, considering the flexibility, convenience, and unobtrusiveness in the positioning of the units. Finally, we combine the position- and orientation-invariant techniques to achieve both simultaneously. The accuracy values are much higher than those of random decision making, although some are significantly lower than those of the reference system with correctly placed units. The trade-off between the flexibility in sensor unit placement and the classification accuracy indicates that different approaches may be suitable for different applications.
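The abstract above does not spell out the two geometric transformations, but the underlying idea can be sketched: per-sample vector magnitudes and the angles between consecutive sample vectors are unchanged by any fixed rotation of the sensor frame, which makes them natural orientation-invariant features. A minimal sketch under that assumption; the function name and the test are illustrative, not the thesis's actual implementation:

```python
import numpy as np

def orientation_invariant_features(seq):
    """Map a tri-axial sensor sequence (T x 3) to features unchanged by
    any fixed rotation of the sensor frame.

    If y_t = R x_t for a constant rotation R (R^T R = I), then
    ||y_t|| = ||x_t|| and <y_t, y_{t+1}> = <x_t, x_{t+1}>, so both the
    per-sample magnitudes and the angles between consecutive samples
    are orientation invariant.
    """
    norms = np.linalg.norm(seq, axis=1)                   # per-sample magnitude
    dots = np.einsum("ij,ij->i", seq[:-1], seq[1:])       # consecutive inner products
    denom = np.clip(norms[:-1] * norms[1:], 1e-12, None)  # guard against division by zero
    angles = np.arccos(np.clip(dots / denom, -1.0, 1.0))  # angle between consecutive samples
    return norms, angles

# Check: the features agree for a sequence and a rotated copy of it.
rng = np.random.default_rng(0)
x = rng.standard_normal((100, 3))
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
n1, a1 = orientation_invariant_features(x)
n2, a2 = orientation_invariant_features(x @ R.T)
assert np.allclose(n1, n2) and np.allclose(a1, a2)
```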
Item Open Access
Activity recognition invariant to sensor orientation with wearable motion sensors (MDPI AG, 2017)
Yurtman, A.; Barshan, B.

Most activity recognition studies that employ wearable sensors assume that the sensors are attached at pre-determined positions and orientations that do not change over time. Since this is not the case in practice, it is of interest to develop wearable systems that operate invariantly to sensor position and orientation. We focus on invariance to sensor orientation and develop two alternative transformations to remove the effect of absolute sensor orientation from the raw sensor data. We test the proposed methodology in activity recognition with four state-of-the-art classifiers using five publicly available datasets containing various types of human activities acquired by different sensor configurations. While the ordinary activity recognition system cannot handle incorrectly oriented sensors, the proposed transformations allow the sensors to be worn at any orientation at a given position on the body, and they achieve nearly the same activity recognition performance as the ordinary system, for which the sensor units are not rotatable. The proposed techniques can be applied to existing wearable systems without much effort, by simply transforming the time-domain sensor data at the pre-processing stage.

Item Open Access
Activity recognition invariant to wearable sensor unit orientation using differential rotational transformations represented by quaternions (MDPI AG, 2018)
Yurtman, Aras; Barshan, Billur; Fidan, B.

Wearable motion sensors are assumed to be correctly positioned and oriented in most of the existing studies. However, generic wireless sensor units, patient health and state monitoring sensors, and smartphones and watches that contain sensors can be oriented on the body in different ways. The vast majority of existing algorithms are not robust to sensor units being placed at variable orientations. We propose a method that transforms the recorded motion sensor sequences so that they are invariant to sensor unit orientation. The method is based on estimating the sensor unit orientation and representing the sensor data with respect to the Earth frame. We also calculate the sensor rotations between consecutive time samples and represent them by quaternions in the Earth frame. We incorporate our method into the pre-processing stage of the standard activity recognition scheme and provide a comparative evaluation against existing methods using seven state-of-the-art classifiers and a publicly available dataset. The standard system with fixed sensor unit orientations cannot handle incorrectly oriented sensors, resulting in an average accuracy reduction of 31.8%. Our method results in an accuracy drop of only 4.7% on average compared with the standard system, outperforming the existing approaches, which cause an accuracy degradation of between 8.4% and 18.8%. We also consider stationary and non-stationary activities separately and evaluate the performance of each method for these two groups of activities. All of the methods perform significantly better in distinguishing non-stationary activities; our method results in an accuracy drop of only 2.1% in this case. Our method clearly surpasses the remaining methods in classifying stationary activities, where some of the methods fail noticeably. The proposed method is applicable to a wide range of wearable systems to make them robust against variable sensor unit orientations by transforming the sensor data at the pre-processing stage.
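The differential-rotation idea in the quaternion-based abstract above lends itself to a short sketch: if the unit's orientation at each sample is estimated as a quaternion q_t (sensor frame to Earth frame), the rotation between consecutive samples, d_t = q_t * conj(q_{t-1}), is unaffected by any fixed mounting offset of the unit, because the offset cancels. A minimal sketch with illustrative function names; the papers' actual orientation-estimation step is not reproduced here:

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of quaternions given as [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_conj(q):
    """Quaternion conjugate (= inverse for unit quaternions)."""
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def differential_rotations(q_seq):
    """Earth-frame rotation between consecutive orientation estimates.

    d_t = q_t * conj(q_{t-1}). A fixed mounting offset p (so that
    q'_t = q_t * p) cancels: q'_t * conj(q'_{t-1}) = q_t * conj(q_{t-1}).
    """
    return np.array([quat_mul(q_seq[t], quat_conj(q_seq[t - 1]))
                     for t in range(1, len(q_seq))])

# Check: re-mounting the unit at a fixed offset p leaves d_t unchanged.
rng = np.random.default_rng(1)
q_seq = rng.standard_normal((50, 4))
q_seq /= np.linalg.norm(q_seq, axis=1, keepdims=True)  # unit quaternions
p = np.array([0.5, 0.5, 0.5, 0.5])                     # fixed unit-norm offset
q_off = np.array([quat_mul(q, p) for q in q_seq])
assert np.allclose(differential_rotations(q_seq), differential_rotations(q_off))
```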
Item Open Access
Classifying daily and sports activities invariantly to the positioning of wearable motion sensor units (IEEE, 2020)
Barshan, Billur; Yurtman, Aras

We propose techniques that achieve invariance to the positioning of wearable motion sensor units on the body for the recognition of daily and sports activities. Using two sequence sets based on the sensory data allows each unit to be placed at any position on a given rigid body part. As the unit is shifted from its ideal position with larger displacements, the activity recognition accuracy of the system that uses these sequence sets degrades slowly, whereas that of the reference system (which is not designed to achieve position invariance) drops rapidly. Thus, we observe a trade-off between the flexibility in sensor unit positioning and the classification accuracy. The reduction in accuracy is at an acceptable level, considering the convenience and flexibility provided to the user in the placement of the units. We compare the proposed approach with an existing technique for achieving position invariance and combine the former with our earlier methodology for achieving orientation invariance. We evaluate our proposed methodology on a publicly available dataset of daily and sports activities acquired by wearable motion sensor units. The proposed representations can be integrated into the pre-processing stage of existing wearable systems without significant effort.

Item Open Access
Investigating inter-subject and inter-activity variations in activity recognition using wearable motion sensors (Oxford University Press, 2016)
Barshan, B.; Yurtman, A.

This work investigates the inter-subject and inter-activity variability of a given activity dataset and provides some new definitions to quantify such variability. The definitions are sufficiently general and can be applied to a broad class of datasets that involve time sequences or features acquired using wearable sensors. The study is motivated by contradictory statements in the literature on the need for user-specific training in activity recognition. We employ our publicly available dataset that contains 19 daily and sports activities acquired from eight participants, each wearing five motion sensor units. We pre-process the recorded activity time sequences in three different ways and employ absolute, Euclidean, and dynamic time warping (DTW) distance measures to quantify the similarity of the recorded signal patterns. We define and calculate the average inter-subject and inter-activity distances with various methods based on the raw and pre-processed time-domain data, as well as on the raw and pre-processed feature vectors. These definitions allow us to identify the subject who performs the activities in the most representative way and to pinpoint the activities that show more variation among the subjects. We observe that the type of pre-processing used affects the results of the comparisons, whereas the choice of distance measure does not alter the comparison results as much. We check the consistency of our analysis and results by highlighting some of our activity recognition rates based on an exhaustive set of sensor unit, sensor type, and subject combinations. We expect the results to be useful for dynamic sensor unit/type selection, for deciding whether to perform user-specific training, and for designing more effective classifiers in activity recognition.
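The abstract above does not reproduce the paper's distance definitions, but the general recipe (a sequence distance such as DTW, averaged over subject pairs for a given activity) can be sketched. The helper names and the plain pairwise averaging below are assumptions for illustration, not the paper's exact definitions:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping between 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])                       # local matching cost
            D[i, j] = cost + min(D[i - 1, j],                     # insertion
                                 D[i, j - 1],                     # deletion
                                 D[i - 1, j - 1])                 # match
    return D[n, m]

def average_inter_subject_distance(recordings):
    """Mean pairwise DTW distance across subjects for one activity.

    `recordings` maps a subject id to that subject's 1-D signal for the
    same activity.
    """
    ids = list(recordings)
    dists = [dtw_distance(recordings[s], recordings[t])
             for i, s in enumerate(ids) for t in ids[i + 1:]]
    return float(np.mean(dists))

# Example with synthetic signals for three subjects performing one activity:
rng = np.random.default_rng(3)
recordings = {s: np.sin(np.linspace(0, 4 * np.pi, 120)) + 0.1 * rng.standard_normal(120)
              for s in ("subj1", "subj2", "subj3")}
print(average_inter_subject_distance(recordings))
```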
Item Open Access
Position invariance for wearables: interchangeability and single-unit usage via machine learning (IEEE, 2021)
Yurtman, Aras; Barshan, Billur; Redif, S.

We propose a new methodology to attain invariance to the positioning of body-worn motion-sensor units for recognizing everyday and sports activities. We first consider random interchangeability of the sensor units, so that the user does not need to distinguish between them before wearing. To this end, we propose to use the compact singular value decomposition (SVD), which significantly reduces the accuracy degradation caused by randomly interchanging the units. Second, we employ three variants of a generalized classifier that requires wearing only a single sensor unit on any one of the body parts to classify the activities. We combine both approaches with our previously developed methods to achieve invariance to both position and orientation, which ultimately allows the user significant flexibility in sensor-unit placement (position and orientation). We assess the performance of our proposed approach on a publicly available activity dataset recorded by body-worn motion-sensor units. Experimental results suggest that there is a tolerable reduction in accuracy, which is justified by the significant flexibility and convenience offered to users when placing the units.
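The abstract does not detail how the compact SVD is applied, but one reason an SVD-based representation can absorb random interchanging of units is easy to sketch: swapping identically configured units permutes (blocks of) columns of a data window, and for any permutation matrix P, the matrices X and XP have identical singular values. A minimal illustration under that assumption; the window layout and function name below are hypothetical:

```python
import numpy as np

def interchange_invariant_representation(window):
    """Singular values of a data window (time x channels).

    Interchanging sensor units permutes (blocks of) columns of the
    window, and singular values are invariant under any column
    permutation, so this representation does not change when units
    are randomly interchanged.
    """
    return np.linalg.svd(window, compute_uv=False)

# Example: two 3-axis units in a 128-sample window; swapping the units'
# column blocks leaves the representation unchanged.
rng = np.random.default_rng(2)
X = rng.standard_normal((128, 6))
X_swapped = X[:, [3, 4, 5, 0, 1, 2]]  # interchange the two units
assert np.allclose(interchange_invariant_representation(X),
                   interchange_invariant_representation(X_swapped))
```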