Browsing by Author "Özer, Sedat"
Now showing 1 - 8 of 8
Item Open Access
Deep receiver design for multi-carrier waveforms using CNNs (IEEE, 2020)
Yıldırım, Y.; Özer, Sedat; Çırpan, H. A.
In this paper, a deep learning based receiver is proposed for a collection of multi-carrier waveforms covering both current and next-generation wireless communication systems. In particular, we propose to use a convolutional neural network (CNN) for joint detection and demodulation of the received signal at the receiver in wireless environments. We compare our proposed architecture to classical methods and demonstrate that our CNN-based architecture can perform better on different multi-carrier forms, including OFDM and GFDM, in various simulations. Furthermore, we compare the total number of parameters required by each network to assess memory requirements.

Item Open Access
GRJointNET: synergistic completion and part segmentation for incomplete 3D point clouds (IEEE, 2021-07-19)
Gürses, Yiğit; Taşpınar, Melisa; Yurt, Mahmut; Özer, Sedat
Performing segmentation on three-dimensional (3D) point clouds is an important and necessary task for autonomous systems. The success of segmentation algorithms depends on the quality (resolution, completeness, etc.) of the point clouds they process. Consequently, missing points in a point cloud reduce the success of point-cloud-based applications. A recent work on this topic, GRNet, is a successful algorithm that focuses on completing incomplete point clouds, but it has no segmentation capability. In this work, we present GRJointNet, a deep-learning-based algorithm that we developed on top of GRNet. GRJointNet both completes the missing points in a point cloud and also performs the part segmentation that GRNet cannot. These two tasks use the data they extract to support each other. Our experiments on the ShapeNet-Part dataset show that GRJointNet outperforms GRNet in point cloud completion.
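The joint completion-plus-segmentation output structure described above can be sketched with a toy, non-learned stand-in. The reflection heuristic and the threshold labeling below are illustrative placeholders, not GRJointNet's actual learned branches; only the two-output, shared-feature interface mirrors the paper's idea:

```python
def joint_complete_and_segment(points):
    """Toy joint model: one shared statistic (the centroid) feeds both
    a 'completion' branch and a 'part segmentation' branch."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    # Completion branch (placeholder): reflect each point about the centroid.
    completed = list(points)
    for (x, y, z) in points:
        mirrored = (2 * cx - x, 2 * cy - y, 2 * cz - z)
        if mirrored not in completed:
            completed.append(mirrored)
    # Segmentation branch (placeholder): label each point by its side of the centroid.
    labels = [0 if z < cz else 1 for (_, _, z) in completed]
    return completed, labels
```

In the actual paper both branches are deep networks trained so that completion features help segmentation and vice versa; the sketch keeps only the structural point that one shared representation produces both outputs.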
At the same time, while GRNet cannot perform segmentation, GRJointNet gains this capability as well. Therefore, this work is promising for increasing the usefulness of point clouds in 3D computer vision.

Item Open Access
Offloading deep learning empowered image segmentation from UAV to edge server (IEEE, 2021-08-30)
İlhan, Hüseyin Enes; Özer, Sedat; Kurt, Güneş Karabulut; Çırpan, Hakan Ali
Image and video analysis in unmanned aerial vehicle (UAV) systems has attracted recent interest in many applications, since the images taken by UAV systems can provide useful information in many domains, including maintenance, surveillance and entertainment. However, UAVs are constrained by limited battery power, while recent developments in the artificial intelligence (AI) domain encourage many applications to run computationally heavy algorithms on the captured UAV images. Such applications drain the on-board battery rapidly while requiring strong computational resources. An alternative approach is offloading heavy tasks such as object segmentation to a remote (edge) server and performing the heavy computation on that server. However, the communication system and the channel used introduce noise on the transferred data, and the effect of this noise, arising from the use of an LTE communication system, on pre-trained deep networks has not been previously studied in the literature. In this paper, we study one such setting, where the images taken by UAVs are transferred to an edge server via an LTE communication system under different scenarios. In our case, the edge server runs an off-the-shelf pre-trained deep learning algorithm to segment the transmitted image.
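The core experiment above — how channel-induced errors degrade a segmentation produced at the edge — can be mocked up end to end. Here a random bit-flip model stands in for the LTE channel and a fixed intensity threshold stands in for the pretrained deep segmenter; both are placeholder choices, not the paper's actual channel model or network:

```python
import random

def transmit(image, ber, seed=0):
    """Flip each bit of every 8-bit pixel with probability `ber`
    (a crude stand-in for channel noise)."""
    rng = random.Random(seed)
    out = []
    for row in image:
        new_row = []
        for px in row:
            for bit in range(8):
                if rng.random() < ber:
                    px ^= 1 << bit
            new_row.append(px)
        out.append(new_row)
    return out

def segment(image, threshold=128):
    """Placeholder 'segmenter': foreground = bright pixels."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]

def mask_iou(a, b):
    """Overlap between two binary masks (IoU)."""
    inter = sum(x & y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    union = sum(x | y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    return inter / union if union else 1.0
```

Comparing `segment(img)` against `segment(transmit(img, ber))` for increasing `ber` values reproduces, in miniature, the degradation analysis the paper carries out with real LTE scenarios and a real deep segmenter.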
We provide an analysis of the effect of the wireless channel and the communication system on the final segmentation of the transmitted image in such a scenario.

Item Open Access
Simultaneous prediction of remaining-useful-life and failure-likelihood with GRU-based deep networks for predictive maintenance analysis (IEEE, 2021-08-30)
Kaleli, Ali Yücel; Ünal, Aras Fırat; Özer, Sedat
With the advent of Industry 4.0, predictive maintenance (PdM) has become an integral part of large industrial sites, which collect data from multiple sensors to reduce maintenance effort and costs. Two of the major problems in PdM at large industrial sites are: (i) the prediction of remaining useful life (RUL); and (ii) the prediction of the likelihood of failure within a predefined time period. While data-oriented maintenance prediction previously focused heavily on classical techniques for such problems, recent interest has shifted towards AI-based solutions due to the better generalization capabilities of deep models. Among time-sequence-based deep networks, RNN, GRU and LSTM based networks are the most frequently used solutions. GRUs have already demonstrated faster learning with near or better prediction performance on certain tasks. However, predicting multiple PdM tasks, including both RUL and failure detection, simultaneously within the same network in an end-to-end manner with GRUs has not been studied much in the literature before. In this paper, we introduce a GRU-based solution that predicts those two tasks simultaneously within the same network.
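The shared-trunk, two-head structure described above — one recurrent state feeding both an RUL regression head and a failure-likelihood classification head — can be sketched with a single scalar GRU cell. All weights below are arbitrary, untrained values chosen purely for illustration:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, w):
    """One step of a scalar GRU cell; `w` holds the gate weights."""
    z = sigmoid(w["wz"] * x + w["uz"] * h)            # update gate
    r = sigmoid(w["wr"] * x + w["ur"] * h)            # reset gate
    h_cand = math.tanh(w["wh"] * x + w["uh"] * (r * h))
    return (1.0 - z) * h + z * h_cand

def predict_rul_and_failure(sensor_seq, w):
    """Run the shared GRU trunk over the sequence, then apply two heads."""
    h = 0.0
    for x in sensor_seq:
        h = gru_step(x, h, w)
    rul = w["a"] * h + w["b"]                          # regression head
    p_fail = sigmoid(w["c"] * h + w["d"])              # classification head
    return rul, p_fail
```

In the paper both heads are trained jointly, end to end, on multivariate sensor data; the sketch keeps only the structural point that both predictions come from one shared recurrent state.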
In our experiments, we compare the performance of GRU layers to LSTM and RNN layers and report their performance on a NASA dataset.

Item Open Access
SyNet: an ensemble network for object detection in UAV images (IEEE, 2021-05-05)
Albaba, Berat Mert; Özer, Sedat
Recent advances in camera-equipped drone applications and their widespread use have increased the demand for vision-based object detection algorithms for aerial images. Object detection is inherently a challenging task as a generic computer vision problem; however, since the use of object detection algorithms on UAVs (or drones) is a relatively new area, detecting objects in aerial images remains an even more challenging problem. There are several reasons for this, including: (i) the lack of large drone datasets with large object variance, (ii) the larger orientation and scale variance in drone images when compared to ground images, and (iii) the difference in texture and shape features between ground and aerial images. Deep learning based object detection algorithms can be classified into two main categories: (a) single-stage detectors and (b) multi-stage detectors. Both single-stage and multi-stage solutions have their advantages and disadvantages over each other. However, a technique that combines the strengths of both could yield a solution stronger than either individually. In this paper, we propose an ensemble network, SyNet, that combines a multi-stage method with a single-stage one, with the motivation of decreasing the high false negative rate of multi-stage detectors and increasing the quality of the single-stage detector's proposals. As building blocks, CenterNet and Cascade R-CNN with pretrained feature extractors are utilized along with an ensembling strategy.
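A box-level view of the ensembling idea above — pooling candidates from a single-stage and a multi-stage detector and keeping the most confident non-overlapping ones — can be sketched with greedy NMS-style fusion. This is a simplified stand-in, not SyNet's actual ensembling strategy:

```python
def box_iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def ensemble_detections(dets_single, dets_multi, iou_thr=0.5):
    """Merge (box, score) candidates from two detectors, greedily keeping
    the highest-scoring box and suppressing overlapping duplicates."""
    pool = sorted(dets_single + dets_multi, key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in pool:
        if all(box_iou(box, k[0]) < iou_thr for k in kept):
            kept.append((box, score))
    return kept
```

When both detectors fire on the same object, the more confident candidate survives; objects found by only one detector are still kept, which is the intuition behind combining the two detector families.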
We report state-of-the-art results obtained by our proposed solution on two different datasets, MS-COCO and VisDrone: 52.1% mAP (IoU=0.75) on the MS-COCO val2017 dataset and 26.2% mAP (IoU=0.75) on the VisDrone test set. Our code is available at: https://github.com/mertalbaba/SyNet.

Item Open Access
Visual object tracking in drone images with deep reinforcement learning (IEEE, 2021-05-05)
Gözen, Derya; Özer, Sedat
There is an increasing demand for utilizing camera-equipped drones and their applications in many domains, varying from agriculture to entertainment and from sports events to surveillance. In such drone applications, an essential and common task is visually tracking an object of interest. Drone (or UAV) images have different properties compared to ground-taken (natural) images, and those differences introduce additional complexities for existing object trackers to be applied directly in drone applications. Important differences include (i) the smaller sizes of the objects to be tracked and (ii) different orientations and viewing angles, which yield different texture and features. Therefore, new algorithms trained on drone images are needed for drone-based applications. In this paper, we introduce a deep reinforcement learning (RL) based single-object tracker that tracks an object of interest in drone images by estimating a series of actions to find the location of the object in the next frame. This is the first work introducing a single-object tracker using a deep RL-based technique for drone images. Our proposed solution introduces a novel reward function that aims to reduce the total number of actions taken to estimate the object's location in the next frame, and also introduces a different backbone network to be used on low-resolution images. Additionally, we introduce a set of new actions into the action library to better deal with the above-mentioned complexities.
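The action-sequence formulation of tracking described above can be mocked up with a tiny environment: a box is moved by discrete actions until no action improves overlap with the target. A greedy oracle stands in for the learned RL policy here, and the per-step penalty mirrors the paper's motivation of rewarding shorter action sequences (the action set and step size are illustrative, not the paper's actual action library):

```python
def iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    u = area(a) + area(b) - inter
    return inter / u if u else 0.0

ACTIONS = {                       # discrete moves, in pixels
    "left": (-2, 0), "right": (2, 0),
    "up":   (0, -2), "down":  (0, 2),
}

def apply(box, action):
    dx, dy = ACTIONS[action]
    return (box[0] + dx, box[1] + dy, box[2] + dx, box[3] + dy)

def track_step_sequence(box, target, step_penalty=0.01, max_steps=50):
    """Greedy oracle policy: take the action with the best IoU gain,
    minus a per-step penalty; stop when no move is worth the cost."""
    actions = []
    for _ in range(max_steps):
        cur = iou(box, target)
        best = max(ACTIONS, key=lambda a: iou(apply(box, a), target))
        if iou(apply(box, best), target) - cur <= step_penalty:
            break                 # implicit "stop" action
        box = apply(box, best)
        actions.append(best)
    return box, actions
```

In the paper the action choice comes from a trained deep RL policy and the reward is shaped to minimize the action count; the sketch only illustrates the "sequence of box adjustments per frame" mechanic.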
We compare our proposed solutions to a state-of-the-art tracking algorithm from the recent literature and demonstrate up to 3.87% improvement in precision and 3.6% improvement in IoU values on the VisDrone2019 dataset. We also provide additional results on the OTB-100 dataset and show up to 3.15% improvement in precision on OTB-100 when compared to the same previous state-of-the-art algorithm. Lastly, we analyze the ability of our proposed solutions to handle some of the challenges faced during tracking, including but not limited to occlusion, deformation, and scale variation.

Item Open Access
YOLODrone+: improved YOLO architecture for object detection in UAV images (IEEE, 2022-08-29)
Şahin, Öykü; Özer, Sedat
The performance of object detection algorithms running on images taken from unmanned aerial vehicles (UAVs) remains limited when compared to object detection algorithms running on ground-taken images. Due to their various features, YOLO-based models, as one-stage object detectors, are preferred in many UAV-based applications. In this paper, we propose novel architectural improvements to the YOLOv5 architecture. Our improvements include: (i) increasing the number of detection layers and (ii) the use of transformers in the model. To train and test the performance of our proposed model, we used the VisDrone and SkyData datasets. Our test results suggest that our proposed solutions can improve detection accuracy.

Item Open Access
YOLODrone: improved YOLO architecture for object detection in drone images (IEEE, 2021-08-30)
Şahin, Öykü; Özer, Sedat
Recent advances in the robotics and computer vision fields have yielded emerging new applications for camera-equipped drones. One such application is aerial object detection. However, despite the recent advances in the relevant literature, object detection remains a challenging task in computer vision.
Existing object detection algorithms demonstrate even lower performance on drone (or aerial) images, since object detection is a more challenging problem in aerial images than in ground-taken images. There are many reasons for this, including: (i) the lack of large drone datasets with large object variance, (ii) the larger variance in both scale and orientation in drone images, and (iii) the difference in shape and texture features between ground and aerial images. In this paper, we introduce an improved YOLO algorithm, YOLODrone, for detecting objects in drone images. We evaluate our algorithm on the VisDrone2019 dataset and report improved results when compared to the YOLOv3 algorithm.
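The "more detection layers" direction pursued in the YOLODrone entries above can be illustrated by how detection-grid resolution limits small-object detection: each YOLO detection layer predicts on a grid whose cell size equals its stride, so adding a finer-stride layer gives tiny aerial objects their own cells. The stride values below are typical YOLO defaults used only for illustration, not the papers' exact configurations:

```python
def detection_grids(img_size, strides):
    """For each detection layer, return (cells per side, cell size in pixels)."""
    return [(img_size // s, s) for s in strides]

# A standard three-layer YOLO head vs. one with an extra fine-stride layer.
baseline = detection_grids(640, [8, 16, 32])
extended = detection_grids(640, [4, 8, 16, 32])
```

For a 640-pixel input, the baseline's finest grid has 80x80 cells of 8 pixels each; the extra layer adds a 160x160 grid with 4-pixel cells, which is better matched to the very small objects common in aerial imagery.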