Browsing by Subject "Motion capture"
Now showing 1 - 13 of 13
Item Open Access
Automated evaluation of physical therapy exercises using multi-template dynamic time warping on wearable sensor signals (Elsevier Ireland Ltd., 2014)
Yurtman, A.; Barshan, B.
We develop an autonomous system to detect and evaluate physical therapy exercises using wearable motion sensors. We propose the multi-template multi-match dynamic time warping (MTMM-DTW) algorithm as a natural extension of DTW to detect multiple occurrences of more than one exercise type in the recording of a physical therapy session. While allowing some distortion (warping) in time, the algorithm provides a quantitative measure of similarity between an exercise execution and previously recorded templates, based on DTW distance. It can detect and classify the exercise types, and count and evaluate the exercises as correctly/incorrectly performed, identifying the error type, if any. To evaluate the algorithm's performance, we record a data set consisting of one reference template and 10 test executions of three execution types of eight exercises performed by five subjects. We thus record a total of 120 and 1200 exercise executions in the reference and test sets, respectively. The test sequences also contain idle time intervals. The accuracy of the proposed algorithm is 93.46% for exercise classification only and 88.65% for simultaneous exercise and execution type classification. The algorithm misses 8.58% of the exercise executions and demonstrates a false alarm rate of 4.91%, caused by some idle time intervals being incorrectly recognized as exercise executions. To test the robustness of the system to unknown exercises, we employ leave-one-exercise-out cross validation. This results in a false alarm rate lower than 1%, demonstrating the robustness of the system to unknown movements. The proposed system can be used for assessing the effectiveness of a physical therapy session and for providing feedback to the patient. © 2014 Elsevier Ireland Ltd.
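The MTMM-DTW entry above extends DTW to detect and count multiple exercise executions. For orientation only, the plain DTW distance it builds on can be sketched in a few lines; this is a textbook version over 1-D numpy signals, not the authors' implementation, and the nearest-template classifier is our illustrative addition.

```python
import numpy as np

def dtw_distance(template: np.ndarray, query: np.ndarray) -> float:
    """Dynamic time warping distance between two 1-D signals."""
    n, m = len(template), len(query)
    # cost[i, j]: minimal cumulative cost of aligning the first i samples
    # of the template with the first j samples of the query
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(template[i - 1] - query[j - 1])
            # each step may stretch one signal, the other, or match in lockstep
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return float(cost[n, m])

def classify(query: np.ndarray, templates: dict[str, np.ndarray]) -> str:
    """Assign the query to the exercise template with the smallest DTW distance."""
    return min(templates, key=lambda name: dtw_distance(templates[name], query))
```

A real detector, as in the paper, additionally locates multiple matches within a long session recording; the sketch only scores one segment against each template.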
Item Open Access
Classification of human motion based on affective state descriptors (John Wiley & Sons Ltd., 2013)
Cimen, G.; Ilhan, H.; Capin, T.; Gurcay, H.
Human body movements and postures carry emotion-specific information. On the basis of this motivation, the objective of this study is to analyze this information in the spatial and temporal structure of motion capture data and to extract features that are indicative of certain emotions in terms of affective state descriptors. Our contribution comprises identifying the descriptors directly or indirectly related to emotion classification in human motion and conducting a comprehensive analysis of these descriptors (features), which fall into three categories: posture descriptors, dynamic descriptors, and frequency-based descriptors, in order to measure their performance in predicting the affective state of an input motion. The classification results demonstrate that no single category is sufficient by itself; the best prediction performance is achieved when all categories are combined. Copyright © 2013 John Wiley & Sons, Ltd.

Item Open Access
Combined filtering and key-frame reduction of motion capture data with application to 3DTV (WSCG, 2006-01-02)
Önder, Onur; Erdem, Ç.; Erdem, T.; Güdükbay, Uğur; Özgüç, Bülent
A new method for combined filtering and key-frame reduction of motion capture data is proposed. Filtering of motion capture data is necessary to eliminate any jitter introduced by a motion capture system. Key-frame reduction, on the other hand, allows animators to easily edit motion data by representing animation curves with a significantly smaller number of key frames. The proposed technique achieves key-frame reduction and jitter removal simultaneously by fitting a Hermite curve to the motion capture data using dynamic programming. Copyright © UNION Agency - Science Press.

Item Open Access
Data-driven synthesis of realistic human motion using motion graphs (2014)
Dirican, Hüseyin
Realistic human motion is an essential part of a diverse range of media, such as feature films, video games, and virtual environments. Motion capture provides realistic human motion data using sensor technology. However, motion capture data is not flexible, and this drawback limits its utility in practice. In this thesis, we propose a two-stage approach that makes motion capture data reusable for synthesizing new motions in real time via motion graphs. Starting from a dataset of various motions, we construct a motion graph of similar motion segments and calculate the parameters, such as blending parameters, needed in the second stage. In the second stage, we synthesize a new human motion in real time using the selected blending technique. Three different blending techniques, namely linear blending, cubic blending, and anticipation-based blending, are provided to the user. In addition, a motion clip preference approach, applied to the motion search algorithm, enables users to control the types of motion clips in the resulting motion.
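The motion-graph thesis above connects similar motion segments into a graph and synthesizes new motion by traversing it. The sketch below illustrates only that core idea under strong simplifications of our own: poses are flat joint-angle vectors, clip compatibility is a plain Euclidean distance between boundary poses, and the blending step the thesis focuses on is omitted.

```python
import numpy as np

def build_motion_graph(clips, threshold):
    """clips[i]: (n_frames, n_dofs) array of poses; returns an adjacency list."""
    graph = {i: [] for i in range(len(clips))}
    for u, src in enumerate(clips):
        for v, dst in enumerate(clips):
            # connect u -> v when the last pose of clip u is close enough to
            # the first pose of clip v that a short transition is plausible
            if u != v and np.linalg.norm(src[-1] - dst[0]) < threshold:
                graph[u].append(v)
    return graph

def synthesize(clips, graph, start, steps, seed=None):
    """Random walk over the graph, concatenating clips (blending omitted)."""
    rng = np.random.default_rng(seed)
    path, node = [clips[start]], start
    for _ in range(steps):
        if not graph[node]:
            break  # dead end: no clip starts close enough to continue
        node = int(rng.choice(graph[node]))
        path.append(clips[node])
    return np.concatenate(path, axis=0)
```

In the thesis, the walk is not random but driven by user control and clip preferences, and each edge carries precomputed blending parameters.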
Item Open Access
Example-based retargeting of human motion to arbitrary mesh models (Blackwell Publishing Ltd, 2015)
Celikcan, U.; Yaz, I. O.; Capin, T.
We present a novel method for retargeting human motion to arbitrary 3D mesh models with as little user interaction as possible. Traditional motion-retargeting systems try to preserve the original motion while satisfying several motion constraints. Our method uses a few pose-to-pose examples provided by the user to extract the desired semantics behind the retargeting process while not limiting the transfer to being only literal. Thus, mesh models with structures and/or motion semantics different from humanoid skeletons become possible targets. Also, considering that most publicly available mesh models lack additional structure (e.g. a skeleton), our method dispenses with the need for such a structure by means of a built-in surface-based deformation system. As deformation for animation purposes may require non-rigid behaviour, we augment existing rigid deformation approaches to provide volume-preserving and squash-and-stretch deformations. We demonstrate our approach on well-known mesh models along with several publicly available motion-capture sequences. © 2014 The Eurographics Association and John Wiley & Sons Ltd.

Item Open Access
Investigating inter-subject and inter-activity variations in activity recognition using wearable motion sensors (Oxford University Press, 2016)
Barshan, B.; Yurtman, A.
This work investigates the inter-subject and inter-activity variability of a given activity dataset and provides some new definitions to quantify such variability. The definitions are sufficiently general and can be applied to a broad class of datasets that involve time sequences or features acquired using wearable sensors. The study is motivated by contradictory statements in the literature on the need for user-specific training in activity recognition. We employ our publicly available dataset that contains 19 daily and sports activities acquired from eight participants, each wearing five motion sensor units. We pre-process the recorded activity time sequences in three different ways and employ absolute, Euclidean, and dynamic time warping distance measures to quantify the similarity of the recorded signal patterns. We define and calculate the average inter-subject and inter-activity distances with various methods based on the raw and pre-processed time-domain data as well as on the raw and pre-processed feature vectors. These definitions allow us to identify the subject who performs the activities in the most representative way and to pinpoint the activities that show more variation among the subjects. We observe that the type of pre-processing used affects the results of the comparisons, whereas the different distance measures do not alter the comparison results as much. We check the consistency of our analysis and results by highlighting some of our activity recognition rates based on an exhaustive set of sensor unit, sensor type, and subject combinations. We expect the results to be useful for dynamic sensor unit/type selection, for deciding whether to perform user-specific training, and for designing more effective classifiers in activity recognition.

Item Open Access
Keyframe reduction techniques for motion capture data (IEEE, 2008-05)
Önder, Onur; Güdükbay, Uğur; Özgüç, Bülent; Erdem, T.; Erdem, Ç.; Özkan, M.
Two methods for keyframe reduction of motion capture data are presented. Keyframe reduction of motion capture data enables animators to easily edit motion data with a smaller number of keyframes. One approach achieves keyframe reduction and noise removal simultaneously by fitting a curve to the motion information using dynamic programming. The other approach applies curve simplification algorithms to the motion capture data until a predefined threshold on the number of keyframes is reached. Although the error rate varies for different motions, the results show that curve fitting with dynamic programming performs as well as the curve simplification methods. © 2008 IEEE.
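The two keyframe-reduction papers above either fit a Hermite curve via dynamic programming or run a curve simplification algorithm; neither abstract names a specific simplification method, so as a stand-in the sketch below uses the classic Ramer-Douglas-Peucker algorithm on a single degree of freedom, which is one common choice for this task.

```python
import numpy as np

def rdp_keyframes(curve: np.ndarray, tol: float) -> list[int]:
    """Keyframe indices for a 1-D motion curve sampled at uniform frame times."""
    def simplify(lo: int, hi: int) -> list[int]:
        # deviation of each sample in [lo, hi] from the straight chord lo->hi
        t = np.arange(lo, hi + 1)
        chord = np.interp(t, [lo, hi], [curve[lo], curve[hi]])
        dev = np.abs(curve[lo:hi + 1] - chord)
        k = int(np.argmax(dev))
        if dev[k] <= tol or hi - lo < 2:
            return [lo, hi]  # chord is close enough: keep only the endpoints
        mid = lo + k         # otherwise split at the worst frame and recurse
        left, right = simplify(lo, mid), simplify(mid, hi)
        return left[:-1] + right  # drop the split point duplicated in both halves
    return simplify(0, len(curve) - 1)
```

Lowering `tol` keeps more keyframes; a multi-joint clip would run this per animation curve and merge the resulting index sets.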
Item Open Access
Motion capture and human pose reconstruction from a single-view video sequence (Academic Press, 2013)
Güdükbay, Uğur; Demir, I.; Dedeoğlu, Y.
We propose a framework to reconstruct the 3D pose of a human for animation from a sequence of single-view video frames. The framework starts with background estimation, and the performer's silhouette is extracted using image subtraction for each frame. The body silhouettes are then automatically labeled using a model-based approach. Finally, the 3D pose is constructed from the labeled human silhouette by assuming orthographic projection. The proposed approach does not require camera calibration. It assumes that the input video has a static background, exhibits no significant perspective effects, and shows the performer in an upright position. The proposed approach requires minimal user interaction. © 2013 Elsevier Inc.

Item Unknown
Motion capture from single video sequence (2006)
Demir, İbrahim
3D human pose reconstruction is a popular research area, since it can be used in various applications. Currently, most methods work only in constrained environments, where multiple camera views are available and the camera calibration is known, or where a single camera view is available but intensive user effort is required. Most currently available data, however, do not satisfy these constraints and thus cannot be processed by these algorithms. In this thesis, a framework is proposed to reconstruct the 3D pose of a human for animation from a sequence of single-view video frames. The framework starts with background estimation. Once the image background is estimated, the body silhouette is extracted using image subtraction for each frame. The body silhouettes are then automatically labeled using a model-based approach. Finally, the 3D pose is constructed from the labeled human silhouette by assuming orthographic projection. The proposed approach does not require camera calibration. The framework assumes that the input video has a static background, exhibits no significant perspective effects, and shows the performer in an upright position.

Item Unknown
A multi scale motion saliency method for keyframe extraction from motion capture sequences (2010)
Halit, Cihan
Motion capture is an increasingly popular animation technique; however, the data acquired by motion capture can become substantial. This makes it difficult to use motion capture data in a number of applications, such as motion editing, motion understanding, automatic motion summarization, motion thumbnail generation, or motion database search and retrieval. To overcome this limitation, we propose an automatic approach to extract keyframes from a motion capture sequence. We treat the input sequence as motion curves and obtain the most salient parts of these curves using a newly proposed metric called 'motion saliency'. We select the curves to be analyzed using a dimension reduction technique, Principal Component Analysis. We then apply frame reduction techniques to extract the most important frames as keyframes of the motion. With this approach, around 8% of the frames are selected as keyframes for motion capture sequences. We have quantified our results both mathematically and through user tests.
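The motion-saliency thesis above (and its journal version, listed next) reduces the pose curves with PCA and picks the most salient frames as keyframes. The sketch below keeps only that skeleton of the pipeline: project the motion onto its leading principal components and score frames by the magnitude of the second temporal derivative, a crude stand-in of ours for the multiscale saliency metric the work actually defines.

```python
import numpy as np

def pca_saliency_keyframes(motion: np.ndarray, n_components: int = 3,
                           n_keys: int = 10) -> np.ndarray:
    """motion: (n_frames, n_dofs) pose data; returns sorted keyframe indices."""
    centered = motion - motion.mean(axis=0)
    # principal directions are the right singular vectors of the centered data
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    curves = centered @ vt[:n_components].T        # (n_frames, n_components)
    # stand-in saliency: magnitude of the second temporal derivative
    accel = np.gradient(np.gradient(curves, axis=0), axis=0)
    saliency = np.linalg.norm(accel, axis=1)       # one score per frame
    keys = np.argsort(saliency)[-n_keys:]          # most salient frames win
    return np.sort(keys)
```

Setting `n_keys` to roughly 8% of the frame count would match the keyframe budget the papers report.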
Item Unknown
Multiscale motion saliency for keyframe extraction from motion capture sequences (John Wiley & Sons Ltd., 2011)
Halit, C.; Capin, T.
Motion capture is an increasingly popular animation technique; however, the data acquired by motion capture can become substantial. This makes it difficult to use motion capture data in a number of applications, such as motion editing, motion understanding, automatic motion summarization, motion thumbnail generation, or motion database search and retrieval. To overcome this limitation, we propose an automatic approach to extract keyframes from a motion capture sequence. We treat the input sequence as motion curves and obtain the most salient parts of these curves using a newly proposed metric called 'motion saliency'. We select the curves to be analysed using a dimension reduction technique, Principal Component Analysis (PCA). We then apply frame reduction techniques to extract the most important frames as keyframes of the motion. With this approach, around 8% of the frames are selected as keyframes for motion capture sequences. © 2011 John Wiley & Sons, Ltd.

Item Open Access
Real-time virtual fitting with body measurement and motion smoothing (Pergamon Press, 2014)
Gültepe, U.; Güdükbay, Uğur
We present a novel virtual fitting room framework using a depth sensor, which provides a realistic fitting experience with customized motion filters, size adjustments, and physical simulation. The proposed scaling method adjusts the avatar and determines a standardized apparel size according to the user's measurements, and prepares the collision mesh and the physics simulation, with a total preprocessing time of 1 s. The real-time motion filters prevent unnatural artifacts due to noise from the depth sensor or self-occluded body parts. We apply bone splitting to realistically render the body parts near the joints. All components are integrated efficiently to keep the frame rate higher than that of previous works without sacrificing realism.

Item Open Access
Searching for complex human activities with no visual examples (2008)
Ikizler, N.; Forsyth, D. A.
We describe a method of representing human activities that allows a collection of motions to be queried without examples, using a simple and effective query language. Our approach is based on units of activity at segments of the body, which can be composed across space and across the body to produce complex queries. The presence of search units is inferred automatically by tracking the body, lifting the tracks to 3D, and comparing them to models trained using motion capture data. Our models of short-timescale limb behaviour are built using a labelled motion capture set. We show results for a large range of queries applied to a collection of complex motions and activities. We compare with discriminative methods applied to tracker data; our method offers significantly improved performance. We show experimental evidence that our method is robust to view direction and is unaffected by some important changes of clothing. © 2008 Springer Science+Business Media, LLC.
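The real-time virtual fitting entry above relies on motion filters to suppress depth-sensor noise in the tracked skeleton. The paper's customized filters are not spelled out in the abstract, so as a generic illustration only, a per-joint exponential smoother is about the simplest filter of this kind:

```python
import numpy as np

class JointSmoother:
    """Exponential smoothing of streamed 3-D joint positions, one pose at a time."""

    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha  # 0 < alpha <= 1; smaller means smoother but laggier
        self.state = None   # last filtered pose, shape (n_joints, 3)

    def update(self, joints: np.ndarray) -> np.ndarray:
        """joints: (n_joints, 3) noisy measurement; returns the filtered pose."""
        if self.state is None:
            self.state = joints.copy()
        else:
            # blend the new noisy measurement into the running estimate
            self.state = self.alpha * joints + (1 - self.alpha) * self.state
        return self.state
```

A production filter would also handle dropped or self-occluded joints, for instance by holding or extrapolating the previous estimate when a joint's confidence is low.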