Sequential nonlinear learning

buir.advisor: Kozat, S. Serdar
dc.contributor.author: Vanlı, Nuri Denizcan
dc.date.accessioned: 2016-07-01T11:11:03Z
dc.date.available: 2016-07-01T11:11:03Z
dc.date.issued: 2015
dc.department: Department of Electrical and Electronics Engineering
dc.description: Cataloged from PDF version of article.
dc.description.abstract: We study sequential nonlinear learning in an individual-sequence manner, where we provide results that are guaranteed to hold without any statistical assumptions. We address the convergence and undertraining issues of conventional nonlinear regression methods and introduce algorithms that elegantly mitigate these issues using nested tree structures. To this end, in the second chapter, we introduce algorithms that adapt not only their regression functions but also the complete tree structure, while achieving the performance of the best linear mixture of a doubly exponential number of partitions with a computational complexity only polynomial in the number of nodes of the tree. In the third chapter, we propose an incremental decision tree structure and, using this model, introduce an online regression algorithm that partitions the regressor space in a data-driven manner. We prove that the proposed algorithm sequentially and asymptotically achieves the performance of the optimal twice-differentiable regression function for any data sequence of unknown and arbitrary length. Under certain regularity conditions, the computational complexity of the introduced algorithm is only logarithmic in the data length. In the fourth chapter, we construct an online finite state (FS) predictor over hierarchical structures whose computational complexity is only linear in the hierarchy level. We prove that the introduced algorithm asymptotically achieves the performance of the best linear combination of all FS predictors defined over the hierarchical model, in a deterministic manner and in a steady-state mean square error sense for certain nonstationary models. In the fifth chapter, we introduce a distributed subgradient-based extreme learning machine algorithm to train single hidden layer feedforward neural networks (SLFNs). We show that, using the proposed algorithm, each individual SLFN asymptotically achieves the performance of the optimal centralized batch SLFN in a strong deterministic sense.
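The second chapter's central idea, running per-node online learners over the regions induced by a tree and mixing the resulting partitions with exponential weighting, can be illustrated with a minimal sketch. The Python below is our own illustration, not the thesis's algorithm: the names OnlineLinear and TreeMixtureRegressor, the depth-1 binary tree, and the split at zero on the first coordinate are all assumptions made for brevity. It mixes the coarse partition (root only) with the fine partition (two leaves), each region fit by recursive least squares; the thesis's algorithms extend this mixture to the doubly exponential number of partitions of a depth-d tree with only polynomial complexity.

import numpy as np

# Minimal illustrative sketch (assumed names, NOT the thesis algorithm):
# a depth-1 binary tree over the regressor space, each node running its
# own online least-squares learner, and an exponentially weighted mixture
# over the two partitions the tree defines (coarse = root, fine = leaves).

class OnlineLinear:
    """Online ridge-regularized least squares in recursive (RLS) form."""
    def __init__(self, dim, reg=1.0):
        self.P = np.eye(dim) / reg   # inverse correlation estimate
        self.w = np.zeros(dim)

    def predict(self, x):
        return float(self.w @ x)

    def update(self, x, y):
        # Sherman-Morrison rank-one update of the inverse correlation matrix
        Px = self.P @ x
        k = Px / (1.0 + x @ Px)
        self.w += k * (y - self.w @ x)
        self.P -= np.outer(k, Px)

class TreeMixtureRegressor:
    """Exponentially weighted mixture of the partitions of a depth-1 tree."""
    def __init__(self, dim, lr=0.5):
        self.root = OnlineLinear(dim)
        self.left = OnlineLinear(dim)
        self.right = OnlineLinear(dim)
        self.log_w = np.zeros(2)     # log-weights: [coarse, fine]
        self.lr = lr

    def predict(self, x):
        leaf = self.left if x[0] < 0 else self.right
        self.preds = np.array([self.root.predict(x), leaf.predict(x)])
        w = np.exp(self.log_w - self.log_w.max())
        w /= w.sum()
        return float(w @ self.preds)

    def update(self, x, y):
        # exponential weighting on the squared losses of both partitions
        self.log_w -= self.lr * (self.preds - y) ** 2
        self.root.update(x, y)
        (self.left if x[0] < 0 else self.right).update(x, y)

# Usage: sequentially regress y_t = sin(pi * x_t) without any
# statistical assumptions on how the inputs arrive.
rng = np.random.default_rng(0)
model = TreeMixtureRegressor(dim=2)
loss = 0.0
for t in range(1000):
    x = np.array([rng.uniform(-1, 1), 1.0])   # input plus bias term
    y = np.sin(np.pi * x[0])
    y_hat = model.predict(x)
    loss += (y - y_hat) ** 2
    model.update(x, y)
print(f"average squared loss: {loss / 1000:.4f}")

The mixture weights adapt per sequence, so the combined predictor tracks whichever partition (coarse or fine) accumulates the smaller loss on the particular data at hand, which is the individual-sequence flavor of the guarantees described in the abstract.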
dc.description.degree: M.S.
dc.description.statementofresponsibility: Vanlı, Nuri Denizcan
dc.format.extent: xv, 151 leaves, charts
dc.identifier.itemid: B150954
dc.identifier.uri: http://hdl.handle.net/11693/30037
dc.language.iso: English
dc.publisher: Bilkent University
dc.rights: info:eu-repo/semantics/openAccess
dc.subject: Sequential learning
dc.subject: Nonlinear models
dc.subject: Big data
dc.subject.lcc: B150954
dc.title: Sequential nonlinear learning
dc.type: Thesis

Files

Original bundle
Name: 0006960.pdf
Size: 2.12 MB
Format: Adobe Portable Document Format
Description: Full printable version