Highly efficient nonlinear regression for big data with lexicographical splitting

Date

2017

Authors

Neyshabouri, M. M.
Demir, O.
Delibalta, I.
Kozat, S. S.

Abstract

This paper considers the problem of online piecewise linear regression for big data applications. We introduce an algorithm that sequentially achieves the performance of the best piecewise linear (affine) model with the optimal partition of the space of the regressor vectors in an individual sequence manner. To this end, our algorithm constructs a class of 2^D sequential piecewise linear models over a set of partitions of the regressor space and efficiently combines them in the mixture-of-experts setting. We show that the algorithm is highly efficient, with a computational complexity of only O(mD^2), where m is the dimension of the regressor vectors. This efficiency is achieved by compactly representing all of the 2^D models using a “lexicographical splitting graph.” We analyze the performance of our algorithm without any statistical assumptions, i.e., our results are guaranteed to hold. Furthermore, we demonstrate the effectiveness of our algorithm on well-known data sets from the machine learning literature at a fraction of the computational complexity of the state of the art.
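Although the full construction relies on the lexicographical splitting graph described in the paper, the following minimal Python sketch illustrates only the general mixture-of-experts idea mentioned above: one online affine model per region of each candidate partition, with the partition predictions combined by exponentially weighted averaging. The hand-chosen partitions, the normalized-LMS expert updates, and the step sizes are illustrative assumptions and not the authors' method; in particular, the O(mD^2) weight sharing over all 2^D models is not reproduced here.

import numpy as np


class AffineExpert:
    """Online affine model trained with a normalized LMS update."""

    def __init__(self, dim, mu=0.5):
        self.w = np.zeros(dim + 1)  # linear weights plus a bias term
        self.mu = mu

    def predict(self, x):
        return float(self.w @ np.append(x, 1.0))

    def update(self, x, err):
        xa = np.append(x, 1.0)
        self.w += self.mu * err * xa / (1.0 + xa @ xa)


class MixturePiecewiseRegressor:
    """One affine expert per region of each candidate partition; partition
    predictions are combined by exponentially weighted averaging."""

    def __init__(self, partitions, dim, eta=0.5):
        # Each partition is a list of indicator functions covering the space.
        self.partitions = partitions
        self.experts = [[AffineExpert(dim) for _ in p] for p in partitions]
        self.log_w = np.zeros(len(partitions))  # log mixture weights
        self.eta = eta

    def _region(self, p_idx, x):
        for r_idx, indicator in enumerate(self.partitions[p_idx]):
            if indicator(x):
                return r_idx
        raise ValueError("regressor not covered by partition %d" % p_idx)

    def predict_and_update(self, x, y):
        regions = [self._region(p, x) for p in range(len(self.partitions))]
        preds = np.array([self.experts[p][r].predict(x)
                          for p, r in enumerate(regions)])
        # Normalized exponential weights over the candidate partitions.
        w = np.exp(self.log_w - self.log_w.max())
        w /= w.sum()
        y_hat = float(w @ preds)
        # Exponentially weighted update of the mixture, then of each expert.
        self.log_w -= self.eta * (preds - y) ** 2
        for p, r in enumerate(regions):
            self.experts[p][r].update(x, y - preds[p])
        return y_hat


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two illustrative partitions of a one-dimensional regressor space.
    partitions = [
        [lambda x: True],                               # trivial partition
        [lambda x: x[0] < 0.0, lambda x: x[0] >= 0.0],  # split at the origin
    ]
    model = MixturePiecewiseRegressor(partitions, dim=1)
    se = 0.0
    for t in range(2000):
        x = rng.uniform(-1.0, 1.0, size=1)
        y = abs(x[0]) + 0.05 * rng.standard_normal()    # piecewise linear target
        se += (model.predict_and_update(x, y) - y) ** 2
    print("mean squared error:", se / 2000)

In this toy setting the mixture weight of the partition split at the origin grows over time, since its affine experts fit the piecewise linear target better than the single global affine model.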

Source Title

Signal, Image and Video Processing

Publisher

Springer London

Language

English