Browsing by Subject "Activity recognition"
Now showing 1 - 6 of 6
Item Open Access: Investigating the Performance of Wearable Motion Sensors on recognizing falls and daily activities via machine learning (Academic Press, 2022-06-30)
Kavuncuoğlu, E.; Uzunhisarcıklı, E.; Barshan, Billur; Özdemir, A.T.
With sensor-based wearable technologies, high-precision monitoring and recognition of human physical activities in real time is becoming more critical to support the daily living requirements of the elderly. The use of sensor technologies, including accelerometers (A), gyroscopes (G), and magnetometers (M), is mostly encountered in work focused on assistive technology, ambient intelligence, context-aware systems, gait and motion analysis, sports science, and fall detection. The classification performance of four sensor type combinations is investigated through the use of four machine learning algorithms: support vector machines (SVMs), the Manhattan k-nearest neighbor classifier (M.k-NN), subspace linear discriminant analysis (SLDA), and the ensemble bagged decision tree (EBDT). In this context, a large dataset containing 2520 tests performed by 14 volunteers, covering 16 activities of daily living (ADLs) and 20 falls, was employed. In binary (fall vs. ADL) and multi-class (36-activity) recognition, the highest classification accuracy rates were obtained by the SVM (99.96%) and M.k-NN (95.27%) classifiers, respectively, with the AM sensor type combination in both cases.
We also made our dataset publicly available to lay the groundwork for new research.

Item Open Access: Investigation of personal variations in activity recognition using miniature inertial sensors and magnetometers (IEEE, 2012-04)
Yurtman, Aras; Barshan, Billur
In this paper, data acquired from five sensory units mounted on the human body, each containing a tri-axial accelerometer, gyroscope, and magnetometer, during 19 different human activities are used to calculate inter-subject and inter-activity variations with several methods, and the results are summarized in various forms. Absolute, Euclidean, and dynamic time-warping distances are used to assess the similarity of the signals. The comparisons are made using raw and normalized time-domain data as well as raw and normalized feature vectors. First, inter-subject distances are averaged per activity and per subject. Based on these values, the "best" subject is defined and identified according to his/her average distance to the others. Then, the averages and standard deviations of inter-activity distances are presented per subject, per unit, and per sensor. Moreover, the effects of removing the mean and of the different distance measures on the results are discussed. © 2012 IEEE.

Item Open Access: Karşılıklı bilgi ölçütü kullanılarak giyilebilir hareket duyucu sinyallerinin aktivite tanıma amaçlı analizi [Analysis of wearable motion sensor signals for activity recognition using the mutual information criterion] (IEEE, 2014-04)
Dobrucalı, Oğuzcan; Barshan, Billur
In detecting human activities with wearable motion sensors, selecting a suitable sensor configuration is an important issue. This issue encompasses determining the number and type of the sensors to be used and the positions and orientations at which they are fixed on the body. In earlier studies in the literature, researchers have compared their chosen sensor configurations with other possible configurations according to how well each configuration discriminates between human activities.
However, these discrimination performances undeniably depend on the features and classifiers employed. In this study, sensor configurations are determined using the mutual information criterion, based on the time-domain distributions of the raw measurements recorded from the sensors. Among the measurement axes of the accelerometers, gyroscopes, and magnetometers placed at different points on the body, those that provide the most information about the performed human activities are identified.

Item Open Access: Knives are picked before slices are cut: Recognition through activity sequence analysis (ACM, 2013-10)
İşcen, Ahmet; Duygulu, Pınar
In this paper, we introduce a model that classifies cooking activities using their visual and temporal coherence information. We fuse multiple feature descriptors for fine-grained activity recognition, since every detail is needed to catch even subtle differences between classes with low inter-class variance. Building on the observation that daily activities such as cooking tend to be performed in sequential patterns, we also model the temporal coherence of activities. By combining both aspects, we show that we can improve the overall accuracy of cooking-recognition tasks. © 2013 ACM.

Item Open Access: Sensor-activity relevance in human activity recognition with wearable motion sensors and mutual information criterion (Springer, 2014)
Dobrucalı, Oğuzcan; Barshan, Billur
Selecting a suitable sensor configuration is an important aspect of recognizing human activities with wearable motion sensors. This problem encompasses selecting the number and type of the sensors, configuring them on the human body, and identifying the most informative sensor axes. In earlier work, researchers have used customized sensor configurations and compared their activity recognition rates with those of others. However, the results of these comparisons are dependent on the feature sets and the classifiers employed.
In this study, we propose a novel approach that utilizes the time-domain distributions of the raw sensor measurements. We determine the most informative sensor types (among accelerometers, gyroscopes, and magnetometers), sensor locations (among torso, arms, and legs), and measurement axes (among the three perpendicular coordinate axes at each sensor) based on the mutual information criterion.

Item Open Access: Two-person interaction recognition via spatial multiple instance embedding (Academic Press Inc., 2015)
Sener, F.; Ikizler-Cinbis, N.
In this work, we look into the problem of recognizing two-person interactions in videos. Our method integrates multiple visual features in a weakly supervised manner by utilizing an embedding-based multiple instance learning framework. First, several visual features that capture the shape and motion of the interacting people are extracted from each detected person region in a video. Then, two-person visual descriptors are formed. Since the relative spatial locations of interacting people are likely to complement the visual descriptors, we propose spatial multiple instance embedding, which implicitly incorporates the distances between people into the multiple instance learning process. Experimental results on two benchmark datasets validate that using two-person visual descriptors together with spatial multiple instance learning offers an effective way of inferring the type of the interaction. © 2015 Elsevier Inc.
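Two of the entries above (Dobrucalı and Barshan, IEEE 2014 and Springer 2014) rank wearable-sensor measurement axes by the mutual information between the raw measurements and the activity label. A minimal sketch of that idea, assuming synthetic data and simple histogram binning rather than the authors' actual estimator; all variable names are hypothetical:

```python
import numpy as np

def mutual_information(x, labels, bins=16):
    """Estimate I(X; Y) in bits between one quantized sensor axis x
    and discrete activity labels, via a joint histogram."""
    edges = np.histogram_bin_edges(x, bins=bins)
    x_binned = np.digitize(x, edges)            # bin index per sample
    label_ids = {lab: i for i, lab in enumerate(sorted(set(labels)))}
    joint = np.zeros((bins + 2, len(label_ids)))
    for xb, lab in zip(x_binned, labels):
        joint[xb, label_ids[lab]] += 1
    joint /= joint.sum()                        # joint distribution p(x, y)
    px = joint.sum(axis=1, keepdims=True)       # marginal p(x)
    py = joint.sum(axis=0, keepdims=True)       # marginal p(y)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

# Rank two synthetic "axes": one tracks the activity, one is pure noise.
rng = np.random.default_rng(0)
labels = np.repeat([0, 1, 2], 200)              # three activity classes
informative = labels + 0.3 * rng.standard_normal(600)
noise = rng.standard_normal(600)
scores = {"acc_x": mutual_information(informative, labels),
          "gyro_y": mutual_information(noise, labels)}
print(sorted(scores, key=scores.get, reverse=True))  # informative axis ranks first
```

The same ranking, applied per axis of each accelerometer, gyroscope, and magnetometer, yields an ordering from which a reduced sensor configuration can be chosen without committing to a particular feature set or classifier.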