Recurrent neural networks based online learning algorithms for distributed systems
Abstract
In this paper, we investigate online parameter learning for Long Short-Term Memory (LSTM) architectures in distributed networks. We first introduce an LSTM-based structure for regression and express its equations in a state-space form at each node of the network. Using this form, we learn the parameters via our Distributed Particle Filtering (DPF) based training method. Our training method asymptotically converges to the optimal parameter set provided that certain mild requirements are satisfied, while incurring a computational load comparable to that of efficient first-order gradient-based training methods. Through real-life experiments, we demonstrate substantial performance gains over conventional methods.
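The particle-filtering training idea can be sketched as follows for a single node: the LSTM parameters are treated as the latent state of a particle filter with artificial random-walk dynamics, and each incoming observation reweights (and occasionally resamples) the particles. The single-unit LSTM, Gaussian observation likelihood, jitter scale, and effective-sample-size resampling rule below are illustrative assumptions for this sketch, not the paper's exact DPF algorithm, which additionally handles communication between nodes.

```python
import numpy as np

def lstm_step(P, x, h, c):
    """Vectorized single-unit LSTM step (scalar input and hidden state).

    P: (n, 12) matrix, one 12-dimensional parameter vector per particle.
    h, c: (n,) per-particle hidden and cell states.
    """
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    wi, ui, wf, uf, wo, uo, wg, ug, bi, bf, bo, bg = P.T
    i = sig(wi * x + ui * h + bi)       # input gate
    f = sig(wf * x + uf * h + bf)       # forget gate
    o = sig(wo * x + uo * h + bo)       # output gate
    g = np.tanh(wg * x + ug * h + bg)   # candidate cell update
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def particle_filter_train(xs, ys, n_particles=200, jitter=0.02,
                          obs_std=0.1, seed=0):
    """Online parameter estimation: parameters as particle-filter state."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 0.5, size=(n_particles, 12))
    weights = np.full(n_particles, 1.0 / n_particles)
    h = np.zeros(n_particles)
    c = np.zeros(n_particles)
    for x, y in zip(xs, ys):
        # Artificial parameter dynamics: small random-walk jitter.
        particles = particles + rng.normal(0.0, jitter, size=particles.shape)
        # Each particle's LSTM output h is its prediction of y.
        h, c = lstm_step(particles, x, h, c)
        # Reweight by a Gaussian likelihood of the observed target.
        loglik = -0.5 * ((y - h) / obs_std) ** 2
        weights = weights * np.exp(loglik - loglik.max())
        weights /= weights.sum()
        # Resample when the effective sample size drops below half.
        if 1.0 / np.sum(weights ** 2) < n_particles / 2:
            idx = rng.choice(n_particles, size=n_particles, p=weights)
            particles, h, c = particles[idx], h[idx], c[idx]
            weights = np.full(n_particles, 1.0 / n_particles)
    # Posterior-mean parameter estimate.
    return weights @ particles
```

Note that, consistent with the computational claim in the abstract, each incoming sample costs a constant amount of work per particle, since the per-particle states `(h, c)` are carried forward rather than recomputed from the start of the sequence.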