A highly efficient recurrent neural network architecture for data regression
dc.contributor.author | Ergen, Tolga | en_US |
dc.contributor.author | Ceyani, Emir | en_US |
dc.coverage.spatial | Izmir, Turkey | en_US |
dc.date.accessioned | 2019-02-21T16:05:10Z | |
dc.date.available | 2019-02-21T16:05:10Z | |
dc.date.issued | 2018 | en_US |
dc.department | Department of Electrical and Electronics Engineering | en_US |
dc.description | Date of Conference: 2-5 May 2018 | en_US |
dc.description.abstract | In this paper, we study online nonlinear data regression and propose a highly efficient long short-term memory (LSTM) network based architecture. We also introduce online training algorithms to learn the parameters of the introduced architecture. We first propose an LSTM-based architecture for data regression. To reduce the complexity of this architecture, we use an energy-efficient operator (ef-operator) instead of the multiplication operation. We then factorize the matrices of the LSTM network to reduce the total number of parameters to be learned. To train the parameters of this structure, we introduce online learning methods based on the exponentiated gradient (EG) and stochastic gradient descent (SGD) algorithms. Experimental results demonstrate considerable performance and efficiency improvements provided by the introduced architecture. | |
dc.identifier.doi | 10.1109/SIU.2018.8404708 | |
dc.identifier.isbn | 9781538615010 | |
dc.identifier.uri | http://hdl.handle.net/11693/50236 | |
dc.language.iso | Turkish | |
dc.publisher | IEEE | |
dc.relation.isversionof | https://doi.org/10.1109/SIU.2018.8404708 | |
dc.source.title | 2018 26th Signal Processing and Communications Applications Conference (SIU) | en_US |
dc.subject | Ef-operator | en_US |
dc.subject | Exponentiated gradient | en_US |
dc.subject | Gradient descent | en_US |
dc.subject | Long short term memory network | en_US |
dc.subject | Matrix factorization | en_US |
dc.title | A highly efficient recurrent neural network architecture for data regression | en_US |
dc.title.alternative | Veri bağlanımı için yüksek verimli yinelemeli sinir ağı yapısı | en_US |
dc.type | Conference Paper | en_US |
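The abstract above combines three ideas: replacing multiplications inside the LSTM with an energy-efficient (ef) operator, factorizing the weight matrices to shrink the parameter count, and training the result online with EG or SGD. The record itself does not define these components, so the following Python sketch is only an illustrative approximation: the operator form sign(x)*y + sign(y)*x, the helper ef_matvec, the low-rank factors A and B, the simplified (non-gated) recurrence, the toy regression target, and the SGD update on the output layer alone are all assumptions rather than the paper's actual construction.

```python
import numpy as np

# Hedged sketch only: the ef-operator is assumed here to be the
# multiplication-free surrogate f(x, y) = sign(x)*y + sign(y)*x;
# the paper's exact definition and gate structure may differ.
def ef_op(x, y):
    return np.sign(x) * y + np.sign(y) * x

def ef_matvec(W, v):
    # Multiplication-free analogue of W @ v: apply ef_op elementwise
    # (broadcasting v across the rows of W) and sum over columns.
    return ef_op(W, v[np.newaxis, :]).sum(axis=1)

rng = np.random.default_rng(0)
n_h, n_x, rank = 8, 4, 2           # hidden size, input size, factorization rank

# Factorized recurrent weight W ~= A @ B, so the number of learned
# parameters scales with the rank instead of n_h * (n_h + n_x).
A = rng.normal(scale=0.1, size=(n_h, rank))
B = rng.normal(scale=0.1, size=(rank, n_h + n_x))
w_out = rng.normal(scale=0.1, size=n_h)

def step(h, x):
    # Simplified recurrent update (not the full LSTM gating of the paper).
    z = np.concatenate([h, x])
    return np.tanh(ef_matvec(A, ef_matvec(B, z)))

# Online SGD on the squared regression error; only the output layer is
# updated here to keep the sketch short (the paper also derives EG updates).
lr, h = 1e-2, np.zeros(n_h)
for t in range(200):
    x_t = rng.normal(size=n_x)
    d_t = x_t.sum()                # toy regression target
    h = step(h, x_t)
    y_t = w_out @ h
    w_out -= lr * (y_t - d_t) * h  # SGD step: gradient of 0.5 * (y - d)^2
```

With the rank much smaller than the hidden size, the factorized form stores n_h*rank + rank*(n_h + n_x) weights instead of n_h*(n_h + n_x), which is where the parameter reduction mentioned in the abstract comes from.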
Files
Original bundle
- Name: A_highly_efficient_recurrent_neural_network_architecture_for_data_regression.pdf
- Size: 499.74 KB
- Format: Adobe Portable Document Format
- Description: Full printable version