Authors: Civek, B. C.; Kozat, S. S.
Date available: 2018-04-12
Date issued: 2017
ISSN: 1041-4347
Handle: http://hdl.handle.net/11693/37105

Abstract: We investigate the problem of sequential linear data prediction for real-life big data applications. Second-order algorithms, i.e., Newton-Raphson methods, asymptotically achieve the performance of the 'best' possible linear data predictor much faster than first-order algorithms, e.g., Online Gradient Descent. However, implementing these second-order methods incurs a computational complexity of $O(M^2)$ per update for an $M$-dimensional feature vector, whereas first-order methods require only $O(M)$. This extremely high computational cost makes their use in real-life big data applications prohibitive. To this end, in order to enjoy the outstanding performance of the second-order methods, we introduce a highly efficient implementation that reduces their computational complexity from $O(M^2)$ to $O(M)$. The presented algorithm provides the well-known merits of the second-order methods while offering a computational complexity similar to that of the first-order methods. We do not rely on any statistical assumptions; hence, both the regular and the fast implementations achieve the same performance in terms of mean square error. We demonstrate the efficiency of our algorithm on several sequential big datasets and also illustrate the numerical stability of the presented algorithm. © 1989-2012 IEEE.

Language: English
Keywords: Big data; Highly efficient; Newton-Raphson; Sequential data prediction
Title: Efficient implementation of Newton-Raphson methods for sequential data prediction
Type: Article
DOI: 10.1109/TKDE.2017.2754380
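To make the complexity gap the abstract refers to concrete, the sketch below contrasts a textbook second-order (Newton-Raphson / recursive-least-squares-style) update, whose rank-one inverse-correlation-matrix update costs $O(M^2)$ per step, with an $O(M)$ Online Gradient Descent step. This is a minimal illustrative sketch under standard assumptions; it is not the paper's reduced-complexity $O(M)$ second-order algorithm, and the function names and parameters (lam, mu) are my own.

# Hypothetical sketch (NOT the paper's O(M) method): standard second-order
# recursive update vs. first-order online gradient descent for linear prediction.
import numpy as np

def second_order_step(w, P, x, y, lam=1.0):
    """One Newton-Raphson / RLS-style update.

    w : (M,) current weight vector
    P : (M, M) running inverse of the regularized correlation matrix
    x : (M,) feature vector, y : scalar target
    The rank-one update of P is the O(M^2) bottleneck per step.
    """
    Px = P @ x                          # O(M^2)
    k = Px / (lam + x @ Px)             # gain vector, O(M)
    e = y - w @ x                       # prediction error, O(M)
    w = w + k * e                       # O(M)
    P = (P - np.outer(k, Px)) / lam     # O(M^2) rank-one downdate
    return w, P

def first_order_step(w, x, y, mu=0.01):
    """Online Gradient Descent step: every operation is O(M)."""
    e = y - w @ x
    return w + mu * e * x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M = 5
    w_true = rng.standard_normal(M)
    w2, P = np.zeros(M), 100.0 * np.eye(M)   # large initial P ~ weak prior
    w1 = np.zeros(M)
    for _ in range(2000):
        x = rng.standard_normal(M)
        y = w_true @ x + 0.01 * rng.standard_normal()
        w2, P = second_order_step(w2, P, x, y)
        w1 = first_order_step(w1, x, y)
    print("second-order weight error:", np.linalg.norm(w2 - w_true))
    print("first-order  weight error:", np.linalg.norm(w1 - w_true))

In this sketch the second-order recursion typically converges toward the best linear predictor in far fewer samples than the gradient step, which is the behavior the paper's efficient implementation aims to retain at $O(M)$ cost.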