Browsing by Subject "GRU"
Now showing 1 - 3 of 3
Item Open Access
A deep learning based decoder for concatenated coding over deletion channels (IEEE, 2024)
Kargı, Eksal Uras; Duman, Tolga Mete
In this paper, we introduce a deep learning-based decoder designed for concatenated coding schemes over a deletion/substitution channel. Specifically, we focus on serially concatenated codes, where the outer code is either a convolutional or a low-density parity-check (LDPC) code, and the inner code is a marker code. We utilize bidirectional gated recurrent units (BI-GRUs) as log-likelihood ratio (LLR) estimators and outer code decoders for estimating the message bits. Our results indicate that decoders powered by BI-GRUs perform comparably, in terms of error rates, with MAP detection of the marker code. We also find that a single network can work well over a wide range of channel parameters. In addition, it is possible to use a single BI-GRU based network to estimate the message bits via one-shot decoding when the outer code is a convolutional code. Code is available at https://github.com/Bilkent-CTAR-Lab/DNN-for-Deletion-Channel

Item Open Access
Online additive updates with FFT-IFFT operator on the GRU neural networks (IEEE, 2018)
Mirza, Ali H.
In this paper, we derive the online additive updates of the gated recurrent unit (GRU) network using the fast Fourier transform-inverse fast Fourier transform (FFT-IFFT) operator. In the gating process of the GRU network, we work in the frequency domain and execute all the linear operations there. For the non-linear functions in the gating process, we first shift back to the time domain and then apply the non-linear GRU gating functions. Furthermore, in order to reduce the computational complexity and speed up the training process, we apply weight matrix factorization (WMF) to the FFT-IFFT variant of the GRU network. We then compute the online additive updates of the FFT-WMF based GRU networks using the stochastic gradient descent (SGD) algorithm. We also use long short-term memory (LSTM) networks in place of the GRU networks. Through an extensive set of experiments, we illustrate that our proposed algorithm achieves a significant increase in performance with a decrease in computational complexity.

Item Open Access
Variants of combinations of additive and multiplicative updates for GRU neural networks (IEEE, 2018)
Mirza, Ali H.
In this paper, we formulate several variants of a mixture of additive and multiplicative updates using the stochastic gradient descent (SGD) and exponential gradient (EG) algorithms, respectively. We employ these updates on gated recurrent unit (GRU) networks and derive the gradient-based updates for the parameters of the GRU networks. We propose four different update variants for the GRU network: mean, minimum, even-odd, and balanced. Through an extensive set of experiments, we demonstrate that these update variants perform better than plain SGD and EG updates. Overall, we observe that the GRU-Mean update achieves the lowest cumulative and steady-state error. We also run the same set of experiments on long short-term memory (LSTM) networks.
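For the first item above, the following is a minimal PyTorch sketch of the core idea, a bidirectional GRU that reads a received (possibly deletion-corrupted) sequence and emits per-position LLR estimates for an outer decoder to consume. This is not the authors' released code (linked in the abstract); the module name, layer sizes, and input shape are illustrative assumptions.

```python
# Hypothetical sketch: BI-GRU as an LLR estimator for an inner marker code.
import torch
import torch.nn as nn

class BiGRULLREstimator(nn.Module):
    def __init__(self, input_size=1, hidden_size=64):
        super().__init__()
        self.rnn = nn.GRU(input_size, hidden_size,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_size, 1)  # one LLR per time step

    def forward(self, y):                 # y: (batch, seq_len, input_size)
        h, _ = self.rnn(y)                # h: (batch, seq_len, 2*hidden_size)
        return self.head(h).squeeze(-1)   # LLRs: (batch, seq_len)

# The estimated LLRs would then be passed to an outer LDPC/convolutional decoder,
# or a second BI-GRU could map them directly to message bits (one-shot decoding).
llrs = BiGRULLREstimator()(torch.randn(8, 128, 1))
```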
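For the second item, a minimal sketch of the FFT-IFFT idea as described in the abstract: the gate's linear pre-activation is computed in the frequency domain, and the signal is brought back to the time domain before the non-linear gating function is applied. The function name, element-wise frequency-domain weighting, and shapes are assumptions for illustration, not the paper's exact formulation.

```python
# Hypothetical sketch: one GRU gate with its linear part computed via FFT-IFFT.
import torch

def fft_gate(x, h, w_x, w_h, b):
    """x, h: (batch, n) real vectors; w_x, w_h: (n,) complex frequency-domain weights."""
    Xf = torch.fft.fft(x, dim=-1)          # linear operations carried out in frequency domain
    Hf = torch.fft.fft(h, dim=-1)
    Zf = w_x * Xf + w_h * Hf               # element-wise (circular-convolution) weighting
    z = torch.fft.ifft(Zf, dim=-1).real    # shift back to the time domain
    return torch.sigmoid(z + b)            # non-linear GRU gating applied in time domain

n = 16
gate = fft_gate(torch.randn(4, n), torch.randn(4, n),
                torch.randn(n, dtype=torch.cfloat),
                torch.randn(n, dtype=torch.cfloat),
                torch.zeros(n))
```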
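For the third item, a rough sketch of what a "mean" mixture of an additive (SGD) and a multiplicative (EG-style) step could look like for a single non-negative weight vector. The function name, learning rates, and renormalization are assumptions; the paper's exact update variants may differ.

```python
# Hypothetical sketch: averaging an SGD step and an EG-style step on one weight vector.
import numpy as np

def mixed_mean_update(w, grad, lr_sgd=0.01, lr_eg=0.01):
    w_sgd = w - lr_sgd * grad             # additive (SGD) update
    w_eg = w * np.exp(-lr_eg * grad)      # multiplicative (EG-style) update
    w_eg *= w.sum() / w_eg.sum()          # renormalize to keep the total weight mass fixed
    return 0.5 * (w_sgd + w_eg)           # "mean" variant: average of the two updates

w = np.full(4, 0.25)
w = mixed_mean_update(w, np.array([0.1, -0.2, 0.05, 0.0]))
```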