Browsing by Subject "Motion sensors"
Now showing 1 - 15 of 15
Item Embargo
A new CNN-LSTM architecture for activity recognition employing wearable motion sensor data: enabling diverse feature extraction (Elsevier, 2023-06-28) Koşar, Enes; Barshan, Billur
Extracting representative features to recognize human activities through the use of wearables is an area of ongoing research. While hand-crafted features and machine learning (ML) techniques have been thoroughly investigated in the past, the use of deep learning (DL) techniques is the current trend. Specifically, convolutional neural networks (CNNs), long short-term memory networks (LSTMs), and hybrid models have been investigated. We propose a novel hybrid network architecture to recognize human activities through the use of wearable motion sensors and DL techniques. The LSTM and 2D CNN branches of the model, which run in parallel, receive the raw signals and their spectrograms, respectively. We concatenate the features extracted at each branch and use them for activity recognition. We compare the classification performance of the proposed network with six commonly used network architectures, three single and three hybrid: 1D CNN, 2D CNN, LSTM, standard 1D CNN-LSTM, the 1D CNN-LSTM proposed by Ordóñez and Roggen, and an alternative 1D CNN-LSTM model. We tune the hyper-parameters of six of the models using Bayesian optimization and test the models on two publicly available datasets. The comparison between the seven networks is based on four performance metrics and complexity measures. Because of the stochastic nature of DL algorithms, we provide the average values and standard deviations of the performance metrics over ten repetitions of each experiment. The proposed 2D CNN-LSTM architecture achieves the highest average accuracies of 95.66% and 92.95% on the two datasets, which are, respectively, 2.45% and 3.18% above those of the 2D CNN model that ranks second.
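The two-branch input arrangement described in this abstract (raw sequences to the LSTM branch, their spectrograms to the 2D CNN branch, with the per-branch features concatenated before classification) can be illustrated with a minimal NumPy sketch. The window length, synthetic signal, and stand-in feature extractors below are placeholders, not the paper's actual layers:

```python
import numpy as np

def spectrogram(signal, win=64, hop=32):
    """Magnitude spectrogram of a 1D signal via a windowed FFT
    (the time-frequency image fed to the 2D CNN branch)."""
    frames = [signal[i:i + win] for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames) * np.hanning(win), axis=1))

# One synthetic accelerometer channel: 5 s sampled at 25 Hz.
raw = np.sin(2 * np.pi * 1.5 * np.arange(125) / 25.0)

# Branch inputs: the LSTM branch sees the raw sequence, the CNN branch
# sees its spectrogram.
lstm_input = raw                 # shape (125,)
cnn_input = spectrogram(raw)     # shape (frames, frequency bins)

# Stand-ins for the per-branch embeddings; in the real model these are
# the LSTM's final hidden state and the CNN's flattened feature map.
lstm_features = np.array([raw.mean(), raw.std()])
cnn_features = cnn_input.max(axis=0)

# The two feature vectors are concatenated before the classifier head.
fused = np.concatenate([lstm_features, cnn_features])
print(fused.shape)
```

The point of the concatenation step is that the classifier sees complementary time-domain and time-frequency features at once, which is the diversity the paper credits for its accuracy gain.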
This improvement results from the proposed model enabling the extraction of a broader range of complementary features that comprehensively represent human activities. We evaluate the complexities of the networks in terms of the total number of parameters, model size, training/testing time, and the number of floating-point operations (FLOPs). We also compare the results of the proposed network with those of recent related work that uses the same datasets.

Item Open Access
A novel heuristic fall-detection algorithm based on double thresholding, fuzzy logic, and wearable motion sensor data (Institute of Electrical and Electronics Engineers, 2023-05-25) Barshan, Billur; Turan, M. S.
We present a novel heuristic fall-detection algorithm that combines double thresholding of two simple features with fuzzy logic techniques. We extract the features from the acceleration and gyroscopic data recorded by a waist-worn motion sensor unit. We compare the proposed algorithm to 15 state-of-the-art heuristic fall-detection algorithms in terms of five performance metrics and runtime on a vast, publicly available benchmarking fall dataset. The dataset comprises recordings from 2880 short experiments (1600 fall and 1280 non-fall trials) with 16 participants. The proposed algorithm exhibits superior average accuracy (98.45%), sensitivity (98.31%), and F-measure (98.59%), with a runtime that allows real-time operation. Besides proposing a novel heuristic fall-detection algorithm, this work has comparative value in that it provides a fair comparison of the relative performances of a considerably large number of existing heuristic algorithms and the proposed one, based on the same dataset.
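The double-thresholding idea behind the fall-detection algorithm above can be sketched as follows. The threshold values, the soft "margin" score standing in for fuzzy membership, and the toy signals are all hypothetical illustrations, not the paper's tuned features or rule base:

```python
import numpy as np

# Hypothetical thresholds in m/s^2; the paper tunes its own values.
T_LOW, T_HIGH = 4.0, 25.0

def fall_score(accel_mag):
    """Double thresholding of acceleration magnitude: a dip below T_LOW
    suggests a free-fall phase, a peak above T_HIGH suggests an impact.
    A score in [0, 1] stands in for a fuzzy decision instead of a hard
    yes/no."""
    dip = accel_mag.min() < T_LOW      # weightlessness phase
    impact = accel_mag.max() > T_HIGH  # impact phase
    if not (dip and impact):
        return 0.0
    # Soften the crisp AND by how far the extremes pass the thresholds
    # (a simple stand-in for fuzzy membership functions).
    margin = min((T_LOW - accel_mag.min()) / T_LOW,
                 (accel_mag.max() - T_HIGH) / T_HIGH)
    return min(1.0, 0.5 + margin)

fall = np.array([9.8, 3.0, 1.0, 30.0, 9.8])    # dip, then impact
walk = np.array([9.8, 10.5, 9.2, 11.0, 9.6])   # no dip, no impact
print(fall_score(fall), fall_score(walk))
```

Requiring both threshold crossings, rather than a single peak test, is what lets heuristics of this family reject high-acceleration non-fall movements.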
The results of this research are encouraging for the development of fall-detection systems that can function reliably and rapidly in the real world.

Item Open Access
Activity recognition invariant to position and orientation of wearable motion sensor units (2019-04) Yurtman, Aras
We propose techniques that achieve invariance to the placement of wearable motion sensor units in the context of human activity recognition. First, we focus on invariance to sensor unit orientation and develop three alternative transformations to remove from the raw sensor data the effect of the orientation at which the sensor unit is placed. The first two orientation-invariant transformations rely on the geometry of the measurements, whereas the third is based on estimating the orientations of the sensor units with respect to the Earth frame by exploiting the physical properties of the sensory data. We test them with multiple state-of-the-art machine-learning classifiers using five publicly available datasets (when applicable) containing various types of activities acquired by different sensor configurations. We show that the proposed methods achieve accuracy similar to that of the reference system in which the units are correctly oriented, whereas the standard system cannot handle incorrectly oriented sensors. We also propose a novel non-iterative technique for estimating the orientations of the sensor units based on the physical and geometrical properties of the sensor data to improve the accuracy of the third orientation-invariant transformation. All three transformations can be integrated into the pre-processing stage of existing wearable systems without much effort, since we do not make any assumptions about the sensor configuration, the body movements, or the classification methodology.
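A geometry-based orientation-invariant transformation of the kind described above can be illustrated with a toy example: replacing tri-axial samples by their norms and the angles between consecutive sample vectors, both of which are unchanged by any fixed rotation of the unit. This is only an illustrative sketch of the principle, not the thesis's three transformations:

```python
import numpy as np

def orientation_invariant(seq):
    """Map a tri-axial sequence of shape (N, 3) to quantities unchanged
    by any fixed rotation of the sensor unit: per-sample norms and the
    angles between consecutive sample vectors."""
    norms = np.linalg.norm(seq, axis=1)
    dots = np.einsum('ij,ij->i', seq[:-1], seq[1:])
    cos_ang = dots / np.clip(norms[:-1] * norms[1:], 1e-12, None)
    return norms, np.arccos(np.clip(cos_ang, -1.0, 1.0))

rng = np.random.default_rng(0)
seq = rng.normal(size=(50, 3))

# Rotate the whole sequence by a fixed rotation (about z here) to mimic
# wearing the same unit at a different orientation.
a = 1.0
R = np.array([[np.cos(a), -np.sin(a), 0.0],
              [np.sin(a),  np.cos(a), 0.0],
              [0.0, 0.0, 1.0]])
n1, ang1 = orientation_invariant(seq)
n2, ang2 = orientation_invariant(seq @ R.T)
print(np.allclose(n1, n2), np.allclose(ang1, ang2))
```

Because rotations preserve norms and inner products, a classifier trained on such quantities never sees the arbitrary mounting orientation.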
Secondly, we develop techniques that achieve invariance to the positioning of the sensor units in three ways: (1) We propose transformations that are applied to the sensory data to allow each unit to be placed at any position within a pre-determined body part. (2) We propose a transformation technique to allow the units to be interchanged so that the user does not need to distinguish between them before positioning. (3) We employ three different techniques to classify the activities based on a single sensor unit, whereas the training set may contain data acquired by multiple units placed at different positions. We combine (1) with (2) and also with (3) to achieve further robustness to sensor unit positioning. We evaluate our techniques on a publicly available dataset using seven state-of-the-art classifiers and show that the reduction in accuracy is acceptable, considering the flexibility, convenience, and unobtrusiveness in the positioning of the units. Finally, we combine the position- and orientation-invariant techniques to achieve both simultaneously. The accuracy values are much higher than those of random decision making, although some of them are significantly lower than those of the reference system with correctly placed units. The trade-off between the flexibility in sensor unit placement and the classification accuracy indicates that different approaches may be suitable for different applications.

Item Open Access
Activity recognition invariant to sensor orientation with wearable motion sensors (MDPI AG, 2017) Yurtman, A.; Barshan, B.
Most activity recognition studies that employ wearable sensors assume that the sensors are attached at pre-determined positions and orientations that do not change over time. Since this is not the case in practice, it is of interest to develop wearable systems that operate invariantly to sensor position and orientation.
We focus on invariance to sensor orientation and develop two alternative transformations to remove the effect of absolute sensor orientation from the raw sensor data. We test the proposed methodology in activity recognition with four state-of-the-art classifiers using five publicly available datasets containing various types of human activities acquired by different sensor configurations. While the ordinary activity recognition system cannot handle incorrectly oriented sensors, the proposed transformations allow the sensors to be worn at any orientation at a given position on the body, and achieve nearly the same activity recognition performance as the ordinary system, for which the sensor units are not rotatable. The proposed techniques can be applied to existing wearable systems without much effort, by simply transforming the time-domain sensor data at the pre-processing stage.

Item Open Access
Activity recognition invariant to wearable sensor unit orientation using differential rotational transformations represented by quaternions (MDPI AG, 2018) Yurtman, Aras; Barshan, Billur; Fidan, B.
Wearable motion sensors are assumed to be correctly positioned and oriented in most existing studies. However, generic wireless sensor units, patient health and state monitoring sensors, and smartphones and watches that contain sensors can be oriented differently on the body. The vast majority of existing algorithms are not robust against placing the sensor units at variable orientations. We propose a method that transforms the recorded motion sensor sequences invariantly to sensor unit orientation. The method is based on estimating the sensor unit orientation and representing the sensor data with respect to the Earth frame. We also calculate the sensor rotations between consecutive time samples and represent them by quaternions in the Earth frame.
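The differential rotation between consecutive time samples mentioned above can be sketched with plain quaternion algebra: given orientation estimates q_t and q_{t+1}, the rotation taking one to the other is q_{t+1} multiplied by the conjugate of q_t. The orientation estimation itself and the Earth-frame representation are the paper's contribution and are not reproduced here; this is only the quaternion bookkeeping, with toy z-axis rotations:

```python
import numpy as np

def q_mul(q, r):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def q_conj(q):
    """Conjugate (inverse, for a unit quaternion)."""
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def differential_rotation(q_t, q_next):
    """Rotation between consecutive orientation estimates:
    q_diff such that q_next = q_diff * q_t."""
    return q_mul(q_next, q_conj(q_t))

def q_about_z(theta):
    """Unit quaternion for a rotation by theta about the z axis."""
    return np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])

# Consecutive orientations: 30 deg, then 50 deg, about the same axis.
q1, q2 = q_about_z(np.radians(30)), q_about_z(np.radians(50))
q_diff = differential_rotation(q1, q2)
print(np.allclose(q_diff, q_about_z(np.radians(20))))
```

The sequence of such differential quaternions captures how the body moves between samples while discarding the arbitrary absolute orientation of the unit.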
We incorporate our method into the pre-processing stage of the standard activity recognition scheme and provide a comparative evaluation with the existing methods based on seven state-of-the-art classifiers and a publicly available dataset. The standard system with fixed sensor unit orientations cannot handle incorrectly oriented sensors, resulting in an average accuracy reduction of 31.8%. Our method results in an accuracy drop of only 4.7% on average compared to the standard system, outperforming the existing approaches, which cause an accuracy degradation between 8.4% and 18.8%. We also consider stationary and non-stationary activities separately and evaluate the performance of each method for these two groups of activities. All of the methods perform significantly better in distinguishing non-stationary activities, our method resulting in an accuracy drop of only 2.1% in this case. Our method clearly surpasses the remaining methods in classifying stationary activities, where some of the methods fail noticeably. The proposed method is applicable to a wide range of wearable systems, making them robust against variable sensor unit orientations by transforming the sensor data at the pre-processing stage.

Item Open Access
Automated evaluation of physical therapy exercises using multi-template dynamic time warping on wearable sensor signals (Elsevier Ireland Ltd., 2014) Yurtman, A.; Barshan, B.
We develop an autonomous system to detect and evaluate physical therapy exercises using wearable motion sensors. We propose the multi-template multi-match dynamic time warping (MTMM-DTW) algorithm as a natural extension of DTW to detect multiple occurrences of more than one exercise type in the recording of a physical therapy session. While allowing some distortion (warping) in time, the algorithm provides a quantitative measure of similarity between an exercise execution and previously recorded templates, based on DTW distance.
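The DTW distance at the core of MTMM-DTW can be sketched with the classical dynamic-programming recurrence. The extension to multiple templates and multiple matches within a long session recording is the paper's contribution and is not shown here; the toy sequences below are illustrative:

```python
import numpy as np

def dtw_distance(a, b):
    """Classical DTW distance between two 1D sequences: the minimum
    cumulative cost of aligning a to b while allowing stretching
    (one-to-many matches) in time."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # stretch b
                                 D[i, j - 1],      # stretch a
                                 D[i - 1, j - 1])  # advance both
    return D[n, m]

template = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
slow = np.array([0.0, 0.0, 1.0, 1.0, 2.0, 2.0, 1.0, 0.0])  # time-warped copy
other = np.array([5.0, 5.0, 5.0, 5.0, 5.0])                # unrelated signal

print(dtw_distance(template, slow), dtw_distance(template, other))
```

A slowed-down execution of the same movement aligns to its template at zero cost, while a dissimilar signal accumulates a large distance, which is exactly what makes DTW suitable for matching exercise executions performed at varying speeds.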
It can detect and classify the exercise types, and count and evaluate the exercises as correctly/incorrectly performed, identifying the error type, if any. To evaluate the algorithm's performance, we record a dataset consisting of one reference template and 10 test executions of three execution types of eight exercises performed by five subjects. We thus record a total of 120 and 1200 exercise executions in the reference and test sets, respectively. The test sequences also contain idle time intervals. The accuracy of the proposed algorithm is 93.46% for exercise classification only and 88.65% for simultaneous exercise and execution type classification. The algorithm misses 8.58% of the exercise executions and demonstrates a false alarm rate of 4.91%, caused by some idle time intervals being incorrectly recognized as exercise executions. To test the robustness of the system to unknown exercises, we employ leave-one-exercise-out cross-validation. This results in a false alarm rate lower than 1%, demonstrating the robustness of the system to unknown movements. The proposed system can be used for assessing the effectiveness of a physical therapy session and for providing feedback to the patient.

Item Open Access
Classification of fall directions via wearable motion sensors (Academic Press, 2022-06-15) Turan, M. Ş.; Barshan, Billur
Effective fall-detection and classification systems are vital in mitigating the severe medical and economic consequences of falls for people in fall risk groups. One class of such systems is based on wearable sensors. While there is a vast amount of academic work on this class of systems, not much effort has been devoted to the investigation of effective and robust algorithms and the like-for-like comparison of state-of-the-art algorithms using a sufficiently large dataset.
In this article, fall-direction classification algorithms are presented and compared on an extensive dataset comprising a total of 1600 fall trials. Eight machine learning classifiers are implemented for fall-direction classification into four basic directions (forward, backward, right, and left). These are, namely, Bayesian decision making (BDM), the least squares method (LSM), the k-nearest neighbor classifier (k-NN), artificial neural networks (ANNs), support vector machines (SVMs), the decision tree classifier (DTC), random forest (RF), and adaptive boosting or AdaBoost (AB). BDM achieves perfect classification, followed by k-NN, SVM, and RF. Data acquired from only a single motion sensor unit, worn at the waist of the subject, are processed for experimental verification. Four of the classifiers (BDM, LSM, k-NN, and ANN) are modified to handle the presence of data from an unknown class and evaluated on the same dataset. In this robustness analysis, ANN and k-NN yield accuracies above 96.2%. The results obtained in this study are promising for developing real-world fall-classification systems, as they enable fast and reliable classification of fall directions.

Item Open Access
Detection and evaluation of physical therapy exercises by dynamic time warping using wearable motion sensor units (Springer, 2014) Yurtman, Aras; Barshan, Billur
We develop an autonomous system that detects and evaluates physical therapy exercises using wearable motion sensors. We propose an algorithm that detects all occurrences of one or more template signals (representing exercise movements) in a long signal acquired during a physical therapy session. In matching the signals, the algorithm allows some distortion in time, based on dynamic time warping (DTW). The algorithm classifies the executions into one of the exercises and evaluates them as correct/incorrect, giving the error type if there is any.
It also provides a quantitative measure of similarity between each matched execution and its template. To evaluate the performance of the algorithm in physical therapy, a dataset consisting of one template execution and ten test executions of each of the three execution types of eight exercises performed by five subjects is recorded, yielding a total of 120 and 1,200 exercise executions in the training and test sets, respectively, as well as many idle time intervals in the test signals. The proposed algorithm detects 1,125 executions in the whole test set. 8.58% of the 1,200 executions are missed and 4.91% of the idle time intervals are incorrectly detected as executions. The accuracy is 93.46% for exercise classification only and 88.65% for simultaneous exercise and execution type classification. The proposed system may be used for both estimating the intensity of the physical therapy session and evaluating the executions to provide feedback to the patient and the specialist.

Item Open Access
Fall detection and classification using wearable motion sensors (2017-08) Turan, Mustafa Şahin
Effective fall-detection systems are vital in mitigating the severe medical and economic consequences of falls for people in fall risk groups. One class of such systems is wearable sensor-based fall-detection systems. While there is a vast amount of academic work on this class of systems, the literature still lacks effective and robust algorithms and a comparative evaluation of state-of-the-art algorithms on a common basis, using an extensive dataset. In this thesis, fall-detection and fall-direction classification systems that use a motion sensor unit, worn at the waist of the subject, are presented. A comparison of a variety of fall-detection algorithms on an extensive dataset, comprising a total of 2880 trials, is undertaken.
A novel heuristic fall-detection algorithm (fuzzy-augmented double thresholding: FADoTh) using two simple features is proposed and compared to 15 state-of-the-art heuristic fall-detection algorithms, among which it displays the highest average accuracy (98.45%), sensitivity, and F-measure values. A learner version of the same algorithm (k-NN classifier-augmented tree: kAT) is developed and compared to eight machine learning (ML) classifiers based on the same dataset: Bayesian decision making (BDM), least squares method (LSM), k-nearest neighbor classifier (k-NN), artificial neural networks (ANN), support vector machines (SVM), decision tree classifier (DTC), random forest (RF), and adaptive boosting (AdaBoost). The kAT algorithm yields an average accuracy of 98.85% and performs on par with BDM, k-NN, ANN, SVM, DTC, RF, and AdaBoost, whereas LSM produces inferior results. Finally, the same eight ML classifiers are implemented for fall-direction classification into four basic directions (forward, backward, right, and left) and evaluated on a reduced version of the same dataset consisting of only fall trials. BDM achieves perfect classification, followed by k-NN, SVM, and RF. BDM, LSM, k-NN, and ANN are modified to work in the presence of data from an unknown class and evaluated on the reduced dataset. In this robustness analysis, ANN and k-NN yield accuracies above 96.2%. The results obtained in this study are promising for developing real-world fall-detection systems.

Item Open Access
Fizik tedavi egzersizlerinin giyilebilir hareket algılayıcıları işaretlerinden dinamik zaman bükmesiyle sezimi ve değerlendirilmesi [Detection and evaluation of physical therapy exercises from wearable motion sensor signals by dynamic time warping] (IEEE, 2014-04) Yurtman, Aras; Barshan, Billur
An autonomous system is developed to detect and evaluate physical therapy exercises by processing signals recorded from wearable motion sensors.
An algorithm based on the dynamic time warping (DTW) dissimilarity measure is developed to detect one or more exercise types in a physical therapy session. The algorithm evaluates whether the exercises are performed correctly or incorrectly and identifies the error type, if any. To evaluate the performance of the algorithm, a dataset is recorded consisting of one template and 10 test executions for each of the three execution types of eight exercise movements performed by five participants. Hence, the training and test sets contain 120 and 1,200 exercise executions, respectively. The test set also contains idle time intervals. The proposed algorithm misses 8.58% of the 1,200 executions in the test set and incorrectly detects 4.91% of the idle time intervals, detecting a total of 1,125 executions. The accuracy is 93.46% when only exercise classification is considered, and 88.65% for both exercise and execution type classification. To test the behavior of the system against unknown exercises, the algorithm is run for each exercise with that exercise's templates left out, and only 10 of the 1,200 executions are falsely detected. This result shows that the system is robust to unknown movements. The proposed system can be used both to estimate the intensity of a physical therapy session and to evaluate exercise movements in order to provide feedback to the patient and the physical therapy specialist.

Item Open Access
Investigating inter-subject and inter-activity variations in activity recognition using wearable motion sensors (Oxford University Press, 2016) Barshan, B.; Yurtman, A.
This work investigates the inter-subject and inter-activity variability of a given activity dataset and provides some new definitions to quantify such variability. The definitions are sufficiently general and can be applied to a broad class of datasets that involve time sequences or features acquired using wearable sensors.
The study is motivated by contradictory statements in the literature on the need for user-specific training in activity recognition. We employ our publicly available dataset that contains 19 daily and sports activities acquired from eight participants who each wear five motion sensor units. We pre-process the recorded activity time sequences in three different ways and employ absolute, Euclidean, and dynamic time warping distance measures to quantify the similarity of the recorded signal patterns. We define and calculate the average inter-subject and inter-activity distances with various methods based on the raw and pre-processed time-domain data as well as on the raw and pre-processed feature vectors. These definitions allow us to identify the subject who performs the activities in the most representative way and to pinpoint the activities that show more variation among the subjects. We observe that the type of pre-processing used affects the results of the comparisons, but that the different distance measures do not alter the comparison results as much. We check the consistency of our analysis and results by highlighting some of our activity recognition rates based on an exhaustive set of sensor unit, sensor type, and subject combinations. We expect the results to be useful for dynamic sensor unit/type selection, for deciding whether to perform user-specific training, and for designing more effective classifiers in activity recognition.

Item Open Access
Karşılıklı bilgi ölçütü kullanılarak giyilebilir hareket duyucu sinyallerinin aktivite tanıma amaçlı analizi [Analysis of wearable motion sensor signals for activity recognition using the mutual information criterion] (IEEE, 2014-04) Dobrucalı, Oğuzcan; Barshan, Billur
In detecting human activities with wearable motion sensors, selecting a suitable sensor configuration is an important issue. This problem involves determining the number and type of the sensors to be used, as well as the positions and orientations at which they are to be fixed.
In earlier studies on this topic, researchers have compared their own chosen sensor configurations with other possible configurations according to how well these configurations distinguish human activities. However, such recognition performances undeniably depend on the features and classifiers employed. In this study, sensor configurations are determined using the mutual information criterion, based on the time-domain distributions of the raw measurements recorded from the sensors. Among the measurement axes of the accelerometers, gyroscopes, and magnetometers located at different points on the body, those that provide the most information about the performed human activities are identified.

Item Open Access
A memory efficient novel deep learning architecture enabling diverse feature extraction on wearable motion sensor data (2022-09) Koşar, Enes
Extracting representative features to recognize human activities through the use of wearables is an area of ongoing research. We propose a novel hybrid network architecture to recognize human activities through the use of wearable motion sensors and deep learning techniques. The long short-term memory (LSTM) and the 2D convolutional neural network (CNN) branches of the model, which run in parallel, receive the raw signals and their spectrograms, respectively. We compare the classification performance of the proposed network with five commonly used network architectures: 1D CNN, 2D CNN, LSTM, standard 1D CNN-LSTM, and an alternative 1D CNN-LSTM model. We tune the hyper-parameters of all six models using Bayesian optimization and test the models on two publicly available datasets. The proposed 2D CNN-LSTM architecture achieves the highest average accuracies of 95.66% and 92.95% on the two datasets, which are, respectively, 2.45% and 3.18% above those of the 2D CNN model that ranks second. User identification is another problem that we address in this thesis.
Firstly, we use binary classifier models to detect activity signals that are useful for the user identity recognition task. Useful signals are transmitted to the next module and used by the proposed deep learning model for user identity recognition. Moreover, we investigate feature transfer between the human activity and user identity recognition tasks, which enables shortening the training processes by 8.7 to 17 times without a significant degradation in classification accuracy. Finally, we elaborate on reducing the model sizes of the proposed models for the human activity and user identity recognition problems. By using transfer learning, pooling layers, and eight-bit weight quantization, we reduce the model sizes by 17–116 times without a significant degradation in classification accuracy.

Item Open Access
Position invariance for wearables: interchangeability and single-unit usage via machine learning (IEEE, 2021) Yurtman, Aras; Barshan, Billur; Redif, S.
We propose a new methodology to attain invariance to the positioning of body-worn motion-sensor units for recognizing everyday and sports activities. We first consider random interchangeability of the sensor units so that the user does not need to distinguish between them before wearing. To this end, we propose to use the compact singular value decomposition (SVD), which significantly reduces the accuracy degradation caused by random interchanging of the units. Secondly, we employ three variants of a generalized classifier that requires wearing only a single sensor unit on any one of the body parts to classify the activities. We combine both approaches with our previously developed methods to achieve invariance to both position and orientation, which ultimately allows the user significant flexibility in sensor-unit placement (position and orientation). We assess the performance of our proposed approach on a publicly available activity dataset recorded by body-worn motion-sensor units.
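The paper's compact-SVD method is not reproduced here, but a toy example shows why an SVD-based representation is attractive when units may be randomly interchanged: swapping the units permutes the rows of the data matrix, and the singular values are unchanged by any such permutation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Rows: five sensor units; columns: time samples of one feature channel.
X = rng.normal(size=(5, 40))

# Randomly interchanging the units permutes the rows of X.
perm = rng.permutation(5)
X_swapped = X[perm]

# Singular values are invariant to row permutation, so features built
# from them do not depend on which unit is which.
s1 = np.linalg.svd(X, compute_uv=False)
s2 = np.linalg.svd(X_swapped, compute_uv=False)
print(np.allclose(s1, s2))
```

Features derived from such permutation-invariant quantities therefore tolerate the user wearing the units in any order.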
Experimental results suggest that there is a tolerable reduction in accuracy, which is justified by the significant flexibility and convenience offered to users when placing the units.

Item Open Access
Sensor-activity relevance in human activity recognition with wearable motion sensors and mutual information criterion (Springer, 2014) Dobrucalı, Oğuzcan; Barshan, Billur
Selecting a suitable sensor configuration is an important aspect of recognizing human activities with wearable motion sensors. This problem encompasses selecting the number and type of the sensors, configuring them on the human body, and identifying the most informative sensor axes. In earlier work, researchers have used customized sensor configurations and compared their activity recognition rates with those of others. However, the results of these comparisons are dependent on the feature sets and the classifiers employed. In this study, we propose a novel approach that utilizes the time-domain distributions of the raw sensor measurements. We determine the most informative sensor types (among accelerometers, gyroscopes, and magnetometers), sensor locations (among torso, arms, and legs), and measurement axes (among three perpendicular coordinate axes at each sensor) based on the mutual information criterion.
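The mutual information criterion used in this last item can be sketched with a simple histogram-based estimator: a sensor axis whose measurement distribution varies with the activity label carries high mutual information with it, while an uninformative axis carries little. The synthetic signals and bin count below are illustrative, not the paper's estimator:

```python
import numpy as np

def mutual_information(x, labels, bins=8):
    """I(X; Y) in bits between a discretized sensor axis x and integer
    activity labels, estimated from a joint histogram."""
    edges = np.histogram_bin_edges(x, bins=bins)[1:-1]  # interior edges
    xd = np.digitize(x, edges)                          # bin index 0..bins-1
    joint = np.zeros((bins, labels.max() + 1))
    for xi, yi in zip(xd, labels):
        joint[xi, yi] += 1
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(2)
labels = rng.integers(0, 2, size=2000)                  # two activities
informative = labels + 0.1 * rng.normal(size=2000)      # tracks the activity
noise = rng.normal(size=2000)                           # unrelated axis

print(mutual_information(informative, labels),
      mutual_information(noise, labels))
```

Ranking axes by this score (rather than by downstream recognition rate) is what makes the selection independent of any particular feature set or classifier, which is the point the abstract makes.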