Browsing by Author "Ege, Mert"
Now showing 1 - 4 of 4
Item Restricted
20.yy sonlarında Kırkpınar Yağlı Güreşleri ve Başpehlivan Ahmet Taşçı (Bilkent University, 2016) Ege, Mert; Ceylan, Veysel Alperen; Sarı, Mustafa Said; Kalkan, Murat; Taşpınar, Buğrahan Şemun

Item Open Access
Human activity classification with deep learning using FMCW radar (Bilkent University, 2022-09) Ege, Mert
Human Activity Recognition (HAR) has recently attracted academic research attention and is used in areas such as healthcare systems, surveillance-based security, sports activities, and entertainment. Deep learning is also frequently used in HAR, as it shows superior performance in fields such as computer vision and natural language processing. FMCW radar data is a good choice for HAR because it works better than cameras under challenging conditions such as rain and fog. However, work in this field does not progress as dynamically as in the camera-based area, which can be attributed to radar-based models not performing as well as camera-based models. This thesis proposes four new models to improve HAR performance using FMCW radar data: CNN-based, LSTM-based, LSTM- and GRU-based, and Siamese-based. For feature extraction, the CNN-based model uses CNN blocks, the LSTM-based model uses LSTM blocks, and the LSTM- and GRU-based model uses LSTM and GRU blocks in parallel. Furthermore, the Siamese-based model is fed in parallel from three different radars (multi-input); due to the nature of Siamese networks, the parallel paths share the same weights. After feature extraction, all models use dense layers to classify human motion. To the best of our knowledge, this is the first time a Siamese-based model has been applied to multi-input data for the classification of human movement. This model outperforms state-of-the-art models in classification accuracy by using the various features of radars operating at different frequencies.
All code and results can be found at "https://github.com/mertege/Thesis Experiments".

Item Open Access
SiameseFuse: A computationally efficient and a not-so-deep network to fuse visible and infrared images (Elsevier BV, 2022-04-22) Özer, S.; Ege, Mert; Özkanoglu, Mehmet Akif
Recent developments in pattern analysis have motivated many researchers to focus on developing deep-learning-based solutions in various image processing applications. Fusing multi-modal images has been one such application area, where the interest is in combining different information coming from different modalities in a more visually meaningful and informative way. For that purpose, it is important to first extract salient features from each modality and then fuse them as efficiently and informatively as possible. Recent literature on fusing multi-modal images reports multiple deep solutions that combine both visible (RGB) and infrared (IR) images. In this paper, we study the performance of various deep solutions available in the literature while seeking an answer to the question: "Do we really need deeper networks to fuse multi-modal images?" To answer that question, we introduce a novel architecture based on Siamese networks to fuse RGB (visible) images with infrared (IR) images. We present an extensive analysis of increasing the number of layers in the architecture to see whether deeper networks (or additional layers) add significant performance to our proposed solution. We report state-of-the-art results on visually fusing given visible and IR image pairs across multiple performance metrics, while requiring the least number of trainable parameters.
Our experimental results suggest that shallow networks (as in our proposed solutions in this paper) can fuse visible and IR images as well as the deep networks previously proposed in the literature, while reducing the total number of trainable parameters by up to 96.5% (2,625 trainable parameters versus 74,193).

Item Open Access
SiameseHAR: siamese-based model for human activity classification with FMCW radars (Springer, 2023-06-03) Ege, Mert; Morgül, Ömer
Human Activity Recognition (HAR) has attracted considerable attention from academic researchers. Furthermore, HAR is used in many areas such as security, sports activities, health, and entertainment. Frequency Modulated Continuous Wave (FMCW) radar data is a suitable option for classifying human activities since it operates more robustly than a camera in difficult weather conditions such as fog and rain. Additionally, FMCW radars cost less than cameras. However, FMCW radars are less popular than camera-based HAR systems, mainly because the classification accuracy of FMCW radar data is lower than that of cameras. This article proposes the SiameseHAR model for the classification of human movement with FMCW radar data. In this model, we use LSTM and GRU blocks in parallel. In addition, we feed radar data operating at different frequencies (10 GHz, 24 GHz, 77 GHz) to the SiameseHAR model in parallel through the Siamese architecture, so the weights of the paths that take the different radar data as inputs are tied. As far as we know, this is the first time a multi-input Siamese architecture has been used for human activity classification. The proposed SiameseHAR model outperforms most state-of-the-art models.
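The multi-input Siamese idea described in the two HAR abstracts above — one weight-tied encoder applied in parallel to data from three radars (10 GHz, 24 GHz, 77 GHz), followed by dense classification layers — can be sketched in PyTorch. This is a minimal illustrative sketch, not the published SiameseHAR architecture: the encoder choice, feature sizes, and class count are all assumptions.

```python
import torch
import torch.nn as nn

class SiameseHARSketch(nn.Module):
    """Illustrative sketch: a single shared encoder processes sequences
    from three radars, so all three paths have tied weights (the Siamese
    property); features are concatenated and classified by dense layers.
    Sizes (64 features, 32 hidden units, 6 classes) are assumptions."""
    def __init__(self, n_features=64, hidden=32, n_classes=6):
        super().__init__()
        # One encoder instance => the three radar paths share weights.
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(3 * hidden, 64), nn.ReLU(), nn.Linear(64, n_classes))

    def forward(self, x10, x24, x77):
        feats = []
        for x in (x10, x24, x77):      # 10 GHz, 24 GHz, 77 GHz inputs
            _, h = self.encoder(x)     # final hidden state: (1, batch, hidden)
            feats.append(h.squeeze(0))
        return self.classifier(torch.cat(feats, dim=1))

model = SiameseHARSketch()
x = torch.randn(4, 20, 64)             # batch of 4, 20 time steps per radar
logits = model(x, x, x)
print(logits.shape)                    # torch.Size([4, 6])
```

Because the encoder is a single module reused for every input, a gradient step updates one set of weights using information from all three radars, which is the weight-tying behavior the abstracts describe.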
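The thesis abstract also mentions an LSTM- and GRU-based model whose feature extractor runs LSTM and GRU blocks in parallel before the dense classifier. A hedged sketch of that parallel arrangement, with all layer sizes chosen purely for illustration:

```python
import torch
import torch.nn as nn

class LSTMGRUParallel(nn.Module):
    """Illustrative sketch of parallel LSTM and GRU feature extraction:
    both recurrent blocks read the same radar sequence, and their final
    hidden states are concatenated for dense classification. All sizes
    are assumptions, not the thesis configuration."""
    def __init__(self, n_features=64, hidden=32, n_classes=6):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):
        _, (h_lstm, _) = self.lstm(x)   # final LSTM hidden state
        _, h_gru = self.gru(x)          # final GRU hidden state
        merged = torch.cat([h_lstm.squeeze(0), h_gru.squeeze(0)], dim=1)
        return self.classifier(merged)

model = LSTMGRUParallel()
out = model(torch.randn(8, 30, 64))     # 8 sequences, 30 time steps each
print(out.shape)                        # torch.Size([8, 6])
```

Running the two recurrent blocks side by side lets the classifier draw on both gating styles; the concatenated feature vector is simply twice the hidden size.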
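The SiameseFuse abstract argues that a shallow, weight-tied network suffices to fuse visible and IR images. The sketch below conveys only that idea — a tiny shared conv encoder per modality plus a single fusion layer — and is not the published SiameseFuse network; channel counts and layer choices are assumptions.

```python
import torch
import torch.nn as nn

class ShallowSiameseFusion(nn.Module):
    """Illustrative sketch: one shared (Siamese) conv encoder extracts
    features from each modality; the concatenated features pass through a
    single fusion conv to produce the fused image. Layer sizes are
    assumptions, not the published SiameseFuse configuration."""
    def __init__(self):
        super().__init__()
        # Shared encoder => tied weights across the visible and IR paths.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU())
        # Deliberately few layers: the paper's point is that shallow works.
        self.fuse = nn.Conv2d(16, 1, kernel_size=3, padding=1)

    def forward(self, visible, infrared):
        f_vis = self.encoder(visible)   # same weights for both modalities
        f_ir = self.encoder(infrared)
        return torch.sigmoid(self.fuse(torch.cat([f_vis, f_ir], dim=1)))

net = ShallowSiameseFusion()
n_params = sum(p.numel() for p in net.parameters())
fused = net(torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64))
print(fused.shape, n_params)            # output matches input resolution
```

Even this toy version has only a few hundred trainable parameters, which gives a feel for how the paper's reported 2,625-parameter network can stay so small.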