Online distributed nonlinear regression via neural networks
Kozat, S. S.
2017 25th Signal Processing and Communications Applications Conference, SIU 2017
Institute of Electrical and Electronics Engineers Inc.
In this paper, we study the nonlinear regression problem in a network of nodes and introduce long short-term memory (LSTM) based algorithms. In order to learn the parameters of the LSTM architecture in an online manner, we put the LSTM equations into a nonlinear state-space form and then introduce our distributed particle filtering (DPF) based training algorithm. Our training algorithm asymptotically achieves the optimal training performance. In our simulations, we illustrate the performance improvement achieved by the introduced algorithm with respect to conventional methods. © 2017 IEEE.
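The abstract's core idea — casting the LSTM weights as the hidden state of a nonlinear state-space model and tracking them online with a particle filter — can be sketched in a few lines of NumPy. This is an illustrative single-node sketch, not the paper's algorithm: the distributed aspect is omitted, and the hidden size, particle count, random-walk variance `q`, and observation variance `r` are all assumed constants chosen for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def unpack(theta, nh, nx):
    """Split a flat parameter vector into four gate matrices and output weights."""
    nz = nh + nx
    W = theta[:4 * nh * nz].reshape(4, nh, nz)   # forget, input, output, candidate
    w_out = theta[4 * nh * nz:]                   # linear read-out weights
    return W, w_out

def lstm_predict(theta, h, c, x, nh, nx):
    """One LSTM cell step followed by a linear read-out."""
    W, w_out = unpack(theta, nh, nx)
    z = np.concatenate([h, x])
    f, i, o = sigmoid(W[0] @ z), sigmoid(W[1] @ z), sigmoid(W[2] @ z)
    g = np.tanh(W[3] @ z)
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new, float(w_out @ h_new)

def particle_filter_train(y, x, nh=4, n_particles=200, q=0.02, r=0.1, seed=0):
    """Treat the LSTM weights as a random-walk state and filter them online."""
    rng = np.random.default_rng(seed)
    nx = x.shape[1]
    dim = 4 * nh * (nh + nx) + nh
    theta = rng.normal(0.0, 0.3, size=(n_particles, dim))  # particles over weights
    h = np.zeros((n_particles, nh))
    c = np.zeros((n_particles, nh))
    preds = []
    for t in range(len(y)):
        theta += rng.normal(0.0, q, size=theta.shape)       # random-walk transition
        out = np.empty(n_particles)
        for p in range(n_particles):
            h[p], c[p], out[p] = lstm_predict(theta[p], h[p], c[p], x[t], nh, nx)
        w = np.exp(-0.5 * (y[t] - out) ** 2 / r)            # Gaussian likelihood
        w /= w.sum()
        preds.append(float(w @ out))                        # weighted prediction
        idx = rng.choice(n_particles, n_particles, p=w)     # multinomial resampling
        theta, h, c = theta[idx], h[idx], c[idx]
    return np.array(preds)
```

A typical usage would feed the previous observation as the regressor, e.g. `x[t] = y[t-1]`, and compare the one-step-ahead predictions `preds` against `y`.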
Keywords: Distributed particle filtering; Long short-term memory network; Monte Carlo methods; Signal filtering and prediction; State-space methods; Nonlinear regression problems; Nonlinear state space; Short-term memory
Permalink (please cite this version): http://hdl.handle.net/11693/37600
Related items (by title, author, creator and subject):
Vanli, N. D.; Sayin, M. O.; Ergüt, S.; Kozat, S. S. (European Signal Processing Conference, EUSIPCO, 2014) — We investigate the problem of adaptive nonlinear regression and introduce tree-based piecewise linear regression algorithms that are highly efficient and provide significantly improved performance with guaranteed upper ...
Vanli, N. D.; Kozat, S. S. (IEEE Computer Society, 2014) — In this paper, we consider the problem of sequential nonlinear regression and introduce an efficient learning algorithm using context trees. Specifically, the regressor space is partitioned and the resulting regions are ...
Vanli, N. D.; Kozat, S. S. (IEEE, 2014) — In this paper, we investigate adaptive nonlinear regression and introduce tree-based piecewise linear regression algorithms that are highly efficient and provide significantly improved performance with guaranteed upper ...