Browsing by Subject "Human activity classification"
Now showing 1 - 2 of 2
Item (Open Access): Human activity classification with deep learning using FMCW radar (2022-09), Ege, Mert
Human Activity Recognition (HAR) has recently attracted academic research attention and is used in applications such as healthcare systems, surveillance-based security, sports activities, and entertainment. Deep learning is also frequently used in HAR, as it shows superior performance in fields such as computer vision and natural language processing. FMCW radar data is a good choice for HAR because it works better than cameras under challenging conditions such as rain and fog. However, work in this field has not progressed as dynamically as camera-based research, which can be attributed to radar-based models not yet performing as well as camera-based models. This thesis proposes four new models to improve HAR performance using FMCW radar data: CNN-based, LSTM-based, LSTM- and GRU-based, and Siamese-based. For feature extraction, the CNN-based model uses CNN blocks, the LSTM-based model uses LSTM blocks, and the LSTM- and GRU-based model uses LSTM and GRU blocks in parallel. Furthermore, the Siamese-based model is fed in parallel from three different radars (multi-input); owing to the nature of the Siamese network, the parallel paths share the same weights. After feature extraction, all models use dense layers to classify human motion. To the best of our knowledge, this is the first time a Siamese-based model has been used with multi-input data for the classification of human movement. By exploiting features from radars operating at different frequencies, this model outperforms state-of-the-art models in classification accuracy. All code and results can be found at "https://github.com/mertege/Thesis Experiments".

Item (Open Access): Recognizing daily and sports activities in two open source machine learning environments using body-worn sensor units (Oxford University Press, 2014-11), Barshan, B.; Yüksek, M. C.
This study provides a comparative assessment of different techniques for classifying human activities performed while wearing inertial and magnetic sensor units on the chest, arms and legs. The gyroscope, accelerometer and magnetometer in each unit are tri-axial. A naive Bayesian classifier, artificial neural networks (ANNs), a dissimilarity-based classifier, three types of decision trees, Gaussian mixture models (GMMs) and support vector machines (SVMs) are considered. A feature set extracted from the raw sensor data using principal component analysis is used for classification. Three different cross-validation techniques are employed to validate the classifiers. A performance comparison of the classifiers is provided in terms of their correct differentiation rates, confusion matrices and computational cost. The highest correct differentiation rates are achieved with ANNs (99.2%), SVMs (99.2%) and a GMM (99.1%). GMMs may be preferable because of their lower computational requirements. Regarding the position of sensor units on the body, those worn on the legs are the most informative. Comparing the different sensor modalities indicates that if only a single sensor type is used, the highest classification rates are achieved with magnetometers, followed by accelerometers and gyroscopes. The study also provides a comparison between two commonly used open source machine learning environments (WEKA and PRTools) in terms of their functionality, manageability, classifier performance and execution times.
© The British Computer Society 2013. All rights reserved.
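
The first record above describes a Siamese-style, multi-input architecture: a shared-weight feature extractor is applied to the data from three radars, and dense layers then classify the activity. The PyTorch sketch below illustrates that general structure only; the input shape, layer sizes and number of activity classes are placeholder assumptions, and the thesis's actual implementation is the code linked in its abstract.

# Minimal sketch (not the thesis code) of a Siamese-style, multi-input HAR
# classifier: one CNN feature extractor with shared weights processes the
# spectrogram from each of three radars, and dense layers classify the
# concatenated features. All shapes and sizes here are assumptions.
import torch
import torch.nn as nn


class SiameseRadarHAR(nn.Module):
    def __init__(self, num_classes: int = 6):
        super().__init__()
        # Shared branch: the same module (hence the same weights) is applied
        # to every radar input, as in a Siamese network.
        self.branch = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),
            nn.Flatten(),                      # 32 * 4 * 4 = 512 features per radar
        )
        # Dense classification head over the concatenated features.
        self.head = nn.Sequential(
            nn.Linear(3 * 512, 128), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(128, num_classes),
        )

    def forward(self, x1, x2, x3):
        # Each input: (batch, 1, doppler_bins, time_frames).
        feats = torch.cat([self.branch(x1), self.branch(x2), self.branch(x3)], dim=1)
        return self.head(feats)


# Example forward pass with dummy spectrograms from three radars.
model = SiameseRadarHAR(num_classes=6)
dummy = [torch.randn(8, 1, 64, 128) for _ in range(3)]
logits = model(*dummy)   # shape: (8, 6)

The second record compares classical classifiers operating on principal-component features extracted from body-worn sensor signals, validated with cross-validation in WEKA and PRTools. The scikit-learn sketch below mirrors that pipeline in outline only, with synthetic data; the feature dimensions, classifier settings and fold count are illustrative assumptions, not the study's configuration.

# Rough scikit-learn sketch (the study itself used WEKA and PRTools) of a
# PCA-plus-classifier comparison scored with cross-validation. The data are
# synthetic placeholders; dimensions and settings are illustrative only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 30))        # placeholder feature vectors per segment
y = rng.integers(0, 8, size=600)      # placeholder labels for 8 activity classes

classifiers = {
    "Naive Bayes": GaussianNB(),
    "ANN": MLPClassifier(hidden_layer_sizes=(30,), max_iter=500),
    "SVM": SVC(kernel="rbf"),
}

for name, clf in classifiers.items():
    # Standardize, reduce with PCA, then classify; score with 10-fold CV.
    pipe = make_pipeline(StandardScaler(), PCA(n_components=20), clf)
    scores = cross_val_score(pipe, X, y, cv=10)
    print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")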