Authors: Ergen, Tolga; Şahin, S. Onur; Kozat, S. Serdar
Date available: 2019-02-21
Date of issue: 2018
ISBN: 9781538615010
URI: http://hdl.handle.net/11693/50241
Date of Conference: 2-5 May 2018

Abstract: In this paper, we investigate online parameter learning for Long Short-Term Memory (LSTM) architectures in distributed networks. We first introduce an LSTM-based structure for regression, and then express this structure in state-space form for each node in our network. Using this form, we learn the parameters via our Distributed Particle Filtering (DPF) based training method. Our training method asymptotically converges to the optimal parameter set provided that certain mild requirements are satisfied, while incurring a computational load comparable to that of efficient first-order gradient-based training methods. Through real-life experiments, we demonstrate substantial performance gains over conventional methods.

Language: Turkish
Keywords: Distributed systems; Long short-term memory networks; Online training; Sequential regression
Title: Recurrent neural networks based online learning algorithms for distributed systems
Title (Turkish): Dağıtılmış sistemler için tekrarlanan sinir ağları merkezli çevrimiçi öğrenim algoritmaları
Type: Conference Paper
DOI: 10.1109/SIU.2018.8404806
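The abstract describes treating the LSTM parameters as the latent state of a state-space model and estimating them online with particle filtering. The sketch below is a minimal, hypothetical single-node illustration of that idea, not the authors' DPF algorithm: it assumes a one-unit scalar-input LSTM, Gaussian random-walk dynamics on the weight vector, and a Gaussian observation likelihood for the regression target; the distributed (multi-node) aspect of the paper's method is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_predict(theta, x_seq):
    """One-unit LSTM regressor; theta packs the 9 scalar weights (hypothetical layout)."""
    wi, wf, wo, wg, ui, uf, uo, ug, wy = theta
    h = c = 0.0
    for x in x_seq:
        i = sigmoid(wi * x + ui * h)   # input gate
        f = sigmoid(wf * x + uf * h)   # forget gate
        o = sigmoid(wo * x + uo * h)   # output gate
        g = np.tanh(wg * x + ug * h)   # candidate cell update
        c = f * c + i * g
        h = o * np.tanh(c)
    return wy * h                      # linear readout of final hidden state

N = 200                                   # number of particles (assumed)
theta = rng.normal(0.0, 0.5, (N, 9))      # particles over the 9 LSTM weights
w = np.full(N, 1.0 / N)                   # importance weights
sigma_v, sigma_e = 0.01, 0.1              # assumed process / observation noise std

def pf_step(theta, w, x_seq, y):
    """One particle-filter update: diffuse, weight by likelihood, resample if degenerate."""
    theta = theta + rng.normal(0.0, sigma_v, theta.shape)        # random-walk dynamics
    preds = np.array([lstm_predict(t, x_seq) for t in theta])
    w = w * np.exp(-0.5 * ((y - preds) / sigma_e) ** 2)          # Gaussian likelihood
    w /= w.sum()
    if 1.0 / np.sum(w ** 2) < N / 2:                             # effective sample size low
        idx = rng.choice(N, N, p=w)
        theta, w = theta[idx], np.full(N, 1.0 / N)
    return theta, w

# toy sequential regression stream (synthetic data for illustration only)
for t in range(30):
    x_seq = rng.normal(size=5)
    y = 0.5 * x_seq[-1]                   # toy target depending on the last input
    theta, w = pf_step(theta, w, x_seq, y)

theta_hat = w @ theta                     # posterior-mean parameter estimate, shape (9,)
```

The per-step cost is dominated by the N forward passes, so it scales linearly in the particle count, which is one way a filtering approach can stay in the same cost regime as first-order gradient training for small models.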