Sequential nonlinear learning

Date

2015

Advisor

Kozat, S. Serdar

Publisher

Bilkent University

Language

English

Abstract

We study sequential nonlinear learning in an individual sequence manner, where we provide results that are guaranteed to hold without any statistical assumptions. We address the convergence and undertraining issues of conventional nonlinear regression methods and introduce algorithms that elegantly mitigate these issues using nested tree structures. To this end, in the second chapter, we introduce algorithms that adapt not only their regression functions but also the complete tree structure while achieving the performance of the best linear mixture of a doubly exponential number of partitions, with a computational complexity only polynomial in the number of nodes of the tree. In the third chapter, we propose an incremental decision tree structure and, using this model, we introduce an online regression algorithm that partitions the regressor space in a data-driven manner. We prove that the proposed algorithm sequentially and asymptotically achieves the performance of the optimal twice differentiable regression function for any data sequence of unknown and arbitrary length. The computational complexity of the introduced algorithm is only logarithmic in the data length under certain regularity conditions. In the fourth chapter, we construct an online finite state (FS) predictor over hierarchical structures, whose computational complexity is only linear in the hierarchy level. We prove that the introduced algorithm asymptotically achieves the performance of the best linear combination of all FS predictors defined over the hierarchical model in a deterministic manner, and in a mean square error sense in the steady state for certain nonstationary models. In the fifth chapter, we introduce a distributed subgradient-based extreme learning machine algorithm to train single hidden layer feedforward neural networks (SLFNs). We show that, using the proposed algorithm, each of the individual SLFNs asymptotically achieves the performance of the optimal centralized batch SLFN in a strong deterministic sense.
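
The mixture guarantee described in the second chapter follows the general pattern of exponentially weighted sequential mixtures. Below is a minimal, hypothetical Python sketch of that pattern, not the thesis implementation: the expert pool, the learning rate eta, and the squared-error loss are illustrative assumptions, whereas the thesis applies such a mixture efficiently to the doubly exponential family of partitions defined by a tree.

```python
# Minimal sketch (illustrative assumptions, not the thesis algorithm):
# an exponentially weighted mixture over a fixed pool of sequential
# predictors, the building block behind "best mixture" guarantees.
import numpy as np

def ew_mixture(x, y, experts, eta=0.5):
    """Sequentially combine expert predictions on (x_t, y_t) pairs.

    experts: list of callables, each mapping x_t to a prediction.
    Returns the mixture prediction made at each time step.
    """
    w = np.ones(len(experts)) / len(experts)  # uniform prior weights
    preds = []
    for x_t, y_t in zip(x, y):
        p = np.array([f(x_t) for f in experts])  # expert predictions
        preds.append(w @ p)                      # weighted mixture output
        loss = (p - y_t) ** 2                    # per-expert squared loss
        w *= np.exp(-eta * loss)                 # exponential reweighting
        w /= w.sum()                             # renormalize weights
    return np.array(preds)

# Hypothetical usage: two simple experts on a scalar sequence.
x = np.linspace(0.0, 1.0, 100)
y = 2.0 * x + 0.1 * np.random.randn(100)
experts = [lambda v: 2.0 * v, lambda v: 0.0 * v]
yhat = ew_mixture(x, y, experts)
```

In this style of analysis, the mixture's cumulative loss is within an additive regret term of the best expert's loss for every individual sequence, with no statistical assumptions on the data; the thesis extends this to tree-structured expert classes without enumerating them explicitly.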
