Investigating inter-subject and inter-activity variations in activity recognition using wearable motion sensors
dc.citation.epage | 1362 | en_US |
dc.citation.issueNumber | 9 | en_US |
dc.citation.spage | 1345 | en_US |
dc.citation.volumeNumber | 59 | en_US |
dc.contributor.author | Barshan, B. | en_US |
dc.contributor.author | Yurtman, A. | en_US |
dc.date.accessioned | 2018-04-12T10:44:33Z | |
dc.date.available | 2018-04-12T10:44:33Z | |
dc.date.issued | 2016 | en_US |
dc.department | Department of Electrical and Electronics Engineering | en_US |
dc.description.abstract | This work investigates inter-subject and inter-activity variability of a given activity dataset and provides some new definitions to quantify such variability. The definitions are sufficiently general and can be applied to a broad class of datasets that involve time sequences or features acquired using wearable sensors. The study is motivated by contradictory statements in the literature on the need for user-specific training in activity recognition. We employ our publicly available dataset that contains 19 daily and sports activities acquired from eight participants who wear five motion sensor units each. We pre-process recorded activity time sequences in three different ways and employ absolute, Euclidean and dynamic time warping distance measures to quantify the similarity of the recorded signal patterns. We define and calculate the average inter-subject and inter-activity distances with various methods based on the raw and pre-processed time-domain data as well as on the raw and pre-processed feature vectors. These definitions allow us to identify the subject who performs the activities in the most representative way and pinpoint the activities that show more variation among the subjects. We observe that the type of pre-processing used affects the results of the comparisons but that the different distance measures do not alter the comparison results as much. We check the consistency of our analysis and results by highlighting some of our activity recognition rates based on an exhaustive set of sensor unit, sensor type and subject combinations. We expect the results to be useful for dynamic sensor unit/type selection, for deciding whether to perform user-specific training and for designing more effective classifiers in activity recognition. | en_US |
dc.description.provenance | Made available in DSpace on 2018-04-12T10:44:33Z (GMT). No. of bitstreams: 1 bilkent-research-paper.pdf: 179475 bytes, checksum: ea0bedeb05ac9ccfb983c327e155f0c2 (MD5) Previous issue date: 2016 | en |
dc.identifier.doi | 10.1093/comjnl/bxv093 | en_US |
dc.identifier.issn | 0010-4620 | |
dc.identifier.uri | http://hdl.handle.net/11693/36569 | |
dc.language.iso | English | en_US |
dc.publisher | Oxford University Press | en_US |
dc.relation.isversionof | http://dx.doi.org/10.1093/comjnl/bxv093 | en_US |
dc.source.title | Computer Journal | en_US |
dc.subject | Accelerometer | en_US |
dc.subject | Activity recognition and classification | en_US |
dc.subject | Dynamic time warping | en_US |
dc.subject | Feature extraction | en_US |
dc.subject | Feature reduction | en_US |
dc.subject | Gyroscope | en_US |
dc.subject | Inertial sensors | en_US |
dc.subject | Inter-activity variation | en_US |
dc.subject | Inter-subject variation | en_US |
dc.subject | Magnetometers | en_US |
dc.subject | Motion capture | en_US |
dc.subject | Motion sensors | en_US |
dc.subject | Wearable sensing | en_US |
dc.title | Investigating inter-subject and inter-activity variations in activity recognition using wearable motion sensors | en_US |
dc.type | Article | en_US |
Files
Original bundle
- Name: Investigating Inter-Subject and Inter-Activity Variations in Activity Recognition Using Wearable Motion Sensors.pdf
- Size: 384.13 KB
- Format: Adobe Portable Document Format
- Description: Full printable version
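
The abstract describes comparing recorded activity signals with absolute, Euclidean, and dynamic time warping (DTW) distance measures and then averaging these distances across subjects. The following is a minimal illustrative sketch of that kind of computation, not the authors' code: the data layout `sequences[subject][activity]` and the helper names are assumptions made for illustration only.

```python
# Hedged sketch: Euclidean and DTW distances between 1-D activity signals,
# and an average inter-subject distance for one activity.
# Data layout (assumed): sequences[subject][activity] -> 1-D signal array.
import numpy as np
from itertools import combinations


def euclidean_distance(x, y):
    """Euclidean distance between two equal-length 1-D sequences."""
    return float(np.linalg.norm(np.asarray(x, dtype=float) - np.asarray(y, dtype=float)))


def dtw_distance(x, y):
    """Classic O(len(x) * len(y)) dynamic time warping distance."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # Cheapest way to reach (i, j): insertion, deletion, or match.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])


def average_inter_subject_distance(sequences, activity, dist=dtw_distance):
    """Average pairwise distance between all subject pairs for one activity."""
    subjects = list(sequences)
    pairs = list(combinations(subjects, 2))
    return sum(dist(sequences[a][activity], sequences[b][activity])
               for a, b in pairs) / len(pairs)
```

Under these assumptions, calling `average_inter_subject_distance(sequences, "walking")` would give one entry of an inter-subject distance table; repeating the call over all activities, pre-processing variants, and distance measures yields comparisons of the kind the abstract reports.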