Comprehensive lower bounds on sequential prediction
Vanlı, N. Denizcan
Sayın, Muhammed O.
Kozat, Süleyman S.
European Signal Processing Conference
1193 - 1196
We study the problem of sequentially predicting real-valued sequences under the squared error loss function. Refraining from any statistical or structural assumptions on the underlying sequence, we take a competitive approach to this problem and compare the performance of a sequential algorithm against the large and continuous class of parametric predictors. We define the performance difference between a sequential algorithm and the best parametric predictor as the regret, and introduce guaranteed worst-case lower bounds on this relative performance measure. In particular, we prove that for any sequential algorithm, there always exists a sequence for which this regret is lower bounded by zero. We then extend this result by showing that the prediction problem can be transformed into a parameter estimation problem if the class of parametric predictors satisfies a certain property, and provide a comprehensive lower bound for this case.
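To make the regret notion concrete, the following is a minimal illustrative sketch (not taken from the paper): it accumulates the squared error loss of a simple sequential predictor (here, a hypothetical running-mean predictor) and compares it against the loss of the best fixed constant predictor chosen in hindsight, which for the squared error loss is the sample mean. The difference between the two cumulative losses is the regret.

```python
def regret(sequence):
    """Regret of a running-mean sequential predictor against the best
    fixed constant predictor in hindsight, under squared error loss.
    (Illustrative example; the paper's predictor class is more general.)"""
    seq_loss = 0.0
    total = 0.0
    for t, x in enumerate(sequence):
        # Predict with the mean of all past samples (0.0 before any data).
        pred = total / t if t > 0 else 0.0
        seq_loss += (x - pred) ** 2
        total += x
    # The best fixed constant c minimizing sum((x_t - c)^2) is the mean.
    mean = total / len(sequence)
    best_loss = sum((x - mean) ** 2 for x in sequence)
    return seq_loss - best_loss

print(regret([1.0, 2.0, 3.0, 4.0]))  # prints 3.25
```

On this sample sequence the regret is positive, consistent with the paper's result that no sequential algorithm can guarantee negative regret on every sequence.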
Parameter estimation problems
Squared error loss functions
Published Version (Please cite this version): https://ieeexplore.ieee.org/document/6952418
Showing items related by title, author, creator and subject.
Vanli, N. D.; Kozat, S. S. (Institute of Electrical and Electronics Engineers Inc., 2015)We study sequential prediction of real-valued, arbitrary, and unknown sequences under the squared error loss as well as the best parametric predictor out of a large, continuous class of predictors. Inspired by recent results ...
Dayanik, S. (Institute for Operations Research and the Management Sciences (I N F O R M S), 2010)Suppose that a Wiener process gains a known drift rate at some unobservable disorder time with some zero-modified exponential distribution. The process is observed only at known fixed discrete time epochs, which may not ...
Vanli, N. D.; Kozat, S. S. (IEEE Computer Society, 2014)In this paper, we consider the problem of sequential nonlinear regression and introduce an efficient learning algorithm using context trees. Specifically, the regressor space is partitioned and the resulting regions are ...