Author: Vural, Nuri Mert
Date accessioned: 2021-01-27
Date available: 2021-01-27
Date copyrighted: 2020-12
Date issued: 2020-12
Date submitted: 2021-01-26
URI: http://hdl.handle.net/11693/54922
Note: Cataloged from PDF version of article.
Description: Thesis (Master's): Bilkent University, Department of Electrical and Electronics Engineering, İhsan Doğramacı Bilkent University, 2020. Includes bibliographical references (leaves 49-53).

Abstract: Recurrent neural networks (RNNs) are widely used for online regression due to their ability to learn nonlinear temporal dependencies. Among RNN models, long short-term memory networks (LSTMs) are commonly preferred in practice, since these networks are capable of learning long-term dependencies while avoiding the exploding gradient problem. On the other hand, the performance improvement of LSTMs usually comes at the price of their large parameter size, which makes their training significantly demanding in terms of computational and data requirements. In this thesis, we address the computational challenges of LSTM training. We introduce two training algorithms designed to attain the online regression performance of LSTMs with lower computational requirements than the state of the art. The introduced algorithms are truly online, i.e., they assume no underlying data-generating process and no future information, except that the dataset is bounded. We discuss theoretical guarantees of the introduced algorithms, along with their asymptotic convergence behavior. Finally, we demonstrate their performance through extensive numerical studies on real and synthetic datasets, and show that they achieve the regression performance of LSTMs with significantly shorter training times.

Physical description: xii, 74 leaves ; 30 cm.
Language: English
Rights: info:eu-repo/semantics/openAccess
Keywords: Long short-term memory; Recurrent neural networks; Online optimization; Kalman filtering; Sequential learning
Title: Efficient online training algorithms for recurrent neural networks
Alternative title (Turkish): Yineleyici sinir ağları için verimli çevrimici eğitim algoritmaları
Type: Thesis
Identifier: B150715
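The abstract's "truly online" setting (no assumed data-generating process, no future information, only a bounded data stream) corresponds to the test-then-train protocol: at each step the learner predicts, then observes the true target and updates. As a minimal sketch of that protocol only, the loop below trains a simple linear model with online SGD on a synthetic bounded stream; it does not reproduce the thesis's LSTM-based algorithms, and the model, step size, and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic bounded data stream: y_t = w_true . x_t + small noise.
# (Illustrative stand-in; the thesis works with real and synthetic datasets.)
d = 5
w_true = rng.normal(size=d)
T = 2000

w = np.zeros(d)   # parameters, updated one sample at a time
lr = 0.05         # step size (assumed, not taken from the thesis)
cum_loss = 0.0

for t in range(T):
    x = rng.uniform(-1.0, 1.0, size=d)        # bounded regressors
    y_hat = w @ x                             # 1) predict before seeing y_t
    y = w_true @ x + 0.01 * rng.normal()      # 2) then observe the target
    err = y_hat - y
    cum_loss += err ** 2                      # prequential (test-then-train) loss
    w -= lr * 2.0 * err * x                   # 3) SGD update on squared error

print(f"average prequential loss: {cum_loss / T:.4f}")
```

The per-step cost here is O(d); the thesis's contribution is obtaining comparable online regression performance for LSTMs, whose per-step update cost is much larger, at reduced computational expense.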