Browsing by Subject "Autoencoder"
Now showing 1 - 5 of 5
Item (Open Access): Deep learning for radar signal detection in electronic warfare systems (IEEE, 2020)
Nuhoglu, M. A.; Alp, Y. K.; Akyön, Fatih Çağatay
Detection of radar signals is the initial step for passive systems. Since these systems have no prior information about the received signal, matched filters and generalized likelihood ratio tests are infeasible. In this paper, we propose a new method for detecting received pulses automatically, with no restriction on intentional modulation or pulse-on-pulse situations. Our method utilizes a cognitive detector incorporating bidirectional long short-term memory (LSTM) based deep denoising autoencoders. Moreover, a novel loss function for detection is developed. The performance of the proposed method is compared to two well-known detectors: the energy detector and the time-frequency domain detector. Qualitative experiments show that the proposed method detects the presence of a signal with a low probability of false alarm and outperforms the other methods at all signal-to-noise ratios.

Item (Open Access): Deep learning in electronic warfare systems: automatic pulse detection and intra-pulse modulation recognition (2020-12)
Akyon, Fatih Cagatay
Detection and classification of radar systems based on modulation analysis of the pulses they transmit is an important application in electronic warfare systems. Many existing works focus on classifying modulations, assuming signal detection has been done beforehand, without providing any detection method. In this work, we propose two novel deep-learning-based techniques for automatic pulse detection and intra-pulse modulation recognition of radar signals. As the first technique, an LSTM-based multi-task learning model is proposed for end-to-end pulse detection and modulation classification. As the second technique, the reassigned spectrogram of the measured radar signal and the detected outliers of its instantaneous phase, filtered by a special function, are used to train multiple convolutional neural networks. Features automatically extracted from the networks are fused to distinguish frequency- and phase-modulated signals. Another major issue in this area is the training and evaluation of supervised neural-network-based models. To overcome this issue, we have developed an Intentional Modulation on Pulse (IMOP) measurement simulator which can generate over 15 main phase and frequency modulations with realistic pulses and noise. Simulation results show that the proposed FFCNN and MODNET techniques outperform current state-of-the-art alternatives and scale easily across a broad range of modulation types.

Item (Open Access): Improving image synthesis quality in multi-contrast MRI using transfer learning via autoencoders (IEEE, 2022-08-29)
Selçuk, Şahan Yoruç; Dalmaz, Onat; Ul Hassan Dar, Salman; Çukur, Tolga
The capacity of magnetic resonance imaging (MRI) to capture several contrasts within a session enables it to obtain increased diagnostic information. However, such multi-contrast MRI exams require long scan times, so often only a subset of the essential contrasts is acquired. Synthetic multi-contrast MRI has the potential to improve radiological observations and subsequent image-analysis tasks. Because of their ability to generate realistic results, generative adversarial networks (GANs) have recently been the most popular choice for medical image synthesis. This paper proposes a novel generative adversarial framework to improve image synthesis quality in multi-contrast MRI. Our method uses transfer learning to adapt pre-trained autoencoder networks to the synthesis task and enhances image synthesis quality by initializing the training process with better network parameters. We demonstrate that the proposed method outperforms competing synthesis models by 0.95 dB on average on a well-known multi-contrast MRI dataset.

Item (Open Access): Relevance feedback and sparsity handling methods for temporal data (2018-07)
Eravcı, Bahaeddin
Data with temporal ordering arises in many natural and digital processes, with growing importance and a vast number of applications. This study provides solutions to data mining problems in analyzing time series in both standalone and sparse networked cases. We initially develop a methodology for browsing time series repositories by forming new time series queries based on user annotations. The result set for each query is formed using diverse selection methods to increase the effectiveness of the relevance feedback (RF) mechanism. In addition to RF, a unique aspect of time series data is considered, and representation feedback methods are proposed to converge to the best-performing representation type among various transformations based on user annotations, as opposed to manual selection. These methods are based on partitioning the result set according to representation performance and on a weighting approach that amplifies different features from multiple representations. We subsequently propose using autoencoders to summarize time series into a data-aware sparse representation, both to decrease the computational load and to increase accuracy. Experiments on a large variety of real data sets show that the proposed methods improve accuracy significantly, and that data-aware representations achieve similar performance while reducing the data and computational load. As a more demanding case, the time series dataset may be incomplete, requiring interpolation approaches before data mining techniques can be applied. In this regard, we analyze sparse time series data with an underlying time-varying network. We develop a methodology to generate a road-network time series dataset using noisy and sparse vehicle trajectories, and evaluate the result using time-varying shortest-path solutions.

Item (Open Access): Spatio-temporal assessment of pain intensity through facial transformation-based representation learning (2021-09)
Erekat, Diyala Nabeel Ata
The nature of pain makes it difficult to assess due to its subjectivity and multidimensional characteristics, which include intensity, duration, and location. However, the ability to assess pain in an objective and reliable manner is crucial for adequate pain-management intervention as well as for diagnosing the underlying medical cause. To this end, in this thesis, we propose a video-based approach for the automatic measurement of self-reported pain. The proposed method aims to learn an efficient facial representation by exploiting the transformation of one subject's facial expression into that of another subject within a similar pain group. We also explore the effect of leveraging self-reported pain scales, i.e., the Visual Analog Scale (VAS), the Sensory Scale (SEN), and the Affective Motivational Scale (AFF), as well as the Observer Pain Intensity (OPI), on the reliable assessment of pain intensity. To this end, a convolutional autoencoder network is proposed to learn the facial transformation between subjects. The autoencoder's optimized weights are then used to initialize the spatio-temporal network architecture, which is further optimized by minimizing the mean absolute error of the estimates on each of these scales while maximizing the consistency between them. The reliability of the proposed method is evaluated on the benchmark database for pain measurement from videos, namely the UNBC-McMaster Pain Archive. Despite the challenging nature of this problem, the obtained results show that the proposed method improves on the state of the art, and that automated assessment of pain severity is feasible and applicable as a supportive tool providing a quantitative assessment of pain in clinical settings.
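The common building block across the items on this page is an autoencoder: a network trained to reconstruct its input (or a clean version of it) from a compressed code. As a minimal, self-contained sketch of that idea only — not an implementation of any of the methods listed above — the following trains a one-hidden-layer denoising autoencoder on toy noisy sinusoids; the architecture, data, and hyperparameters are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_batch(n=64, length=32, noise=0.3):
    """Noisy sinusoids with random phase; the clean signal is the target."""
    t = np.linspace(0.0, 2.0 * np.pi, length)
    phase = rng.uniform(0.0, 2.0 * np.pi, size=(n, 1))
    clean = np.sin(t[None, :] + phase)
    noisy = clean + noise * rng.standard_normal((n, length))
    return noisy, clean

# One hidden layer: sigmoid encoder, linear decoder (sizes are illustrative).
d, h = 32, 8
W1 = 0.1 * rng.standard_normal((d, h)); b1 = np.zeros(h)
W2 = 0.1 * rng.standard_normal((h, d)); b2 = np.zeros(d)

def forward(x):
    z = 1.0 / (1.0 + np.exp(-(x @ W1 + b1)))   # code (bottleneck)
    return z, z @ W2 + b2                      # reconstruction

lr, losses = 0.05, []
for step in range(2000):
    x_noisy, x_clean = make_batch()
    z, x_hat = forward(x_noisy)
    err = x_hat - x_clean                      # denoising objective
    losses.append(float(np.mean(err ** 2)))
    n = x_noisy.shape[0]
    # Backpropagation through decoder, then encoder.
    dW2 = z.T @ err / n
    db2 = err.mean(axis=0)
    dz = (err @ W2.T) * z * (1.0 - z)
    dW1 = x_noisy.T @ dz / n
    db1 = dz.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(f"first-batch MSE {losses[0]:.3f} -> last-batch MSE {losses[-1]:.3f}")
```

The denoising variant (as in the radar-detection item above) reconstructs a clean target from a corrupted input; dropping the noise term recovers a plain autoencoder, and the learned code is the kind of compact, data-aware representation the relevance-feedback item exploits.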