Wearables-based user identity recognition through image representation of motion sensor data sequences and pretrained vision models

Limited Access: this item is unavailable until 2026-02-11.

Date

2025-07

Advisor

Özaktaş, Billur Barshan

Abstract

The common methods employed in User Identity Recognition (UIR) and verification are often vulnerable to cyber attacks, requiring more robust solutions. Motion sensor data and biometric data are used in tackling both the UIR and Human Activity Recognition (HAR) tasks. These tasks are mostly accomplished by using Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM), and CNN-LSTM hybrid models. We propose a method that employs pretrained CNN and vision transformer-based models to achieve the UIR task by classifying image representations of sensor data. We conduct a comparative study by evaluating the performance of various pretrained networks in the image classification task by processing four activity datasets comprising raw data sequences. We construct a new hybrid architecture which combines the DeiT-B and DenseNet201 models in a parallel configuration. This study also compares two kinds of preprocessing methods, the spectrogram and the wavelet spectrogram, and introduces a novel approach that is fundamentally distinct from these methods. This technique fuses raw data, spectrogram, and wavelet spectrogram information. The DeiT-B model attains the highest accuracy, 99.76%, on the DSA dataset; our new hybrid architecture combining DeiT-B and DenseNet201 performs even better.
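The core preprocessing step described above, turning a 1-D motion-sensor sequence into a 2-D image that a pretrained vision model can consume, can be sketched as follows. This is a minimal illustration, not the thesis's actual pipeline: the sampling rate, STFT window parameters, and the `sensor_to_image` helper are all assumptions chosen for the example, and a real pipeline would use a proper image-resizing routine rather than the nearest-neighbour index trick shown here.

```python
import numpy as np
from scipy.signal import spectrogram

def sensor_to_image(seq, fs=25.0, nperseg=64, noverlap=48, size=224):
    """Convert a 1-D sensor sequence into a 3-channel spectrogram image.

    The short-time Fourier magnitude is log-compressed, min-max normalised
    to [0, 1], resized to size x size, and replicated across three channels
    so that it matches the RGB input expected by pretrained vision models.
    """
    _, _, Sxx = spectrogram(seq, fs=fs, nperseg=nperseg, noverlap=noverlap)
    Sxx = np.log1p(Sxx)                                   # compress dynamic range
    Sxx = (Sxx - Sxx.min()) / (Sxx.max() - Sxx.min() + 1e-12)
    # nearest-neighbour "resize" via index selection (illustrative only)
    rows = np.linspace(0, Sxx.shape[0] - 1, size).round().astype(int)
    cols = np.linspace(0, Sxx.shape[1] - 1, size).round().astype(int)
    img = Sxx[np.ix_(rows, cols)]
    return np.stack([img] * 3, axis=-1)                   # (size, size, 3)

# Synthetic stand-in for one accelerometer axis: 5 s sampled at 25 Hz.
rng = np.random.default_rng(0)
t = np.arange(0, 5, 1 / 25.0)
seq = np.sin(2 * np.pi * 1.5 * t) + 0.1 * rng.standard_normal(t.size)

img = sensor_to_image(seq)
print(img.shape)  # (224, 224, 3)
```

The resulting array can then be fed to any ImageNet-pretrained backbone (e.g. DenseNet201 or DeiT-B) after the model's usual channel normalisation; a wavelet spectrogram variant would replace the STFT with a continuous wavelet transform but keep the same image-forming steps.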

Degree Discipline

Electrical and Electronic Engineering

Degree Level

Master's

Degree Name

MS (Master of Science)

Language

English
