Title: Highly efficient nonlinear regression for big data with lexicographical splitting
Authors: Neyshabouri, M. M.; Demir, O.; Delibalta, I.; Kozat, S. S.
Date issued: 2017
Date available: 2018-04-12
ISSN: 1863-1703
Handle: http://hdl.handle.net/11693/37451
Language: English
Keywords: Lexicographical splitting; Nonlinear regression; Online learning; Piecewise linear
Type: Article
DOI: 10.1007/s11760-016-0972-8

Abstract: This paper considers the problem of online piecewise linear regression for big data applications. We introduce an algorithm that sequentially achieves the performance of the best piecewise linear (affine) model with the optimal partition of the space of the regressor vectors, in an individual-sequence manner. To this end, our algorithm constructs a class of 2^D sequential piecewise linear models over a set of partitions of the regressor space and efficiently combines them in the mixture-of-experts setting. We show that the algorithm is highly efficient, with a computational complexity of only O(mD^2), where m is the dimension of the regressor vectors. This efficiency is achieved by compactly representing all 2^D models using a "lexicographical splitting graph." We analyze the performance of our algorithm without any statistical assumptions, i.e., our results are guaranteed to hold. Furthermore, we demonstrate the effectiveness of our algorithm on well-known data sets from the machine learning literature, at a fraction of the computational complexity of the state of the art.
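To illustrate the mixture-of-experts construction the abstract describes, below is a minimal sketch that combines piecewise affine experts, each defined on a fixed partition of the regressor space, via exponential weighting on squared loss. This is not the paper's algorithm: the lexicographical splitting graph and its O(mD^2) bookkeeping over 2^D models are not reproduced here, and the specific partitions, the per-region gradient-descent updates, and the learning rate `eta` are all illustrative assumptions.

```python
# Illustrative sketch only: exponentially weighted mixture of piecewise
# affine experts over fixed partitions. The paper's lexicographical
# splitting graph (which represents all 2^D partition models implicitly)
# is NOT implemented here; partitions and step sizes are assumptions.
import numpy as np

class AffineExpert:
    """One affine model per region of a fixed partition of x[0],
    trained by online gradient descent on squared loss."""
    def __init__(self, dim, boundaries, step=0.01):
        self.boundaries = boundaries            # sorted split points on x[0]
        n_regions = len(boundaries) + 1
        self.W = np.zeros((n_regions, dim))     # per-region weight vectors
        self.b = np.zeros(n_regions)            # per-region intercepts
        self.step = step

    def _region(self, x):
        return int(np.searchsorted(self.boundaries, x[0]))

    def predict(self, x):
        r = self._region(x)
        return self.W[r] @ x + self.b[r]

    def update(self, x, y):
        r = self._region(x)
        err = self.W[r] @ x + self.b[r] - y
        self.W[r] -= self.step * err * x        # gradient of 0.5 * err^2
        self.b[r] -= self.step * err

class ExpWeightedMixture:
    """Combine experts with exponential weights on cumulative squared loss."""
    def __init__(self, experts, eta=0.5):
        self.experts = experts
        self.log_w = np.zeros(len(experts))     # log-weights for stability
        self.eta = eta

    def predict(self, x):
        w = np.exp(self.log_w - self.log_w.max())
        w /= w.sum()
        return sum(wi * e.predict(x) for wi, e in zip(w, self.experts))

    def update(self, x, y):
        for i, e in enumerate(self.experts):
            self.log_w[i] -= self.eta * (e.predict(x) - y) ** 2
            e.update(x, y)

# Usage: sequentially regress y = |x| with experts of increasing resolution;
# the weights concentrate on the partition best matched to the data.
rng = np.random.default_rng(0)
experts = [AffineExpert(1, b) for b in ([], [0.0], [-0.5, 0.0, 0.5])]
mix = ExpWeightedMixture(experts)
for _ in range(5000):
    x = rng.uniform(-1, 1, size=1)
    y = abs(x[0]) + 0.05 * rng.standard_normal()
    y_hat = mix.predict(x)                      # predict before observing y
    mix.update(x, y)
```

In this toy setup the mixture inherits the performance of the best fixed partition without knowing it in advance, which mirrors the individual-sequence guarantee the abstract claims; the paper's contribution is doing this over all 2^D partitions at O(mD^2) cost rather than enumerating the experts explicitly as done above.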