
Browsing by Subject "Sequential prediction"

Now showing 1 - 5 of 5

    Open Access
    Comprehensive lower bounds on sequential prediction
    (IEEE, 2014-09) Vanlı, N. Denizcan; Sayın, Muhammed O.; Ergüt, S.; Kozat, Süleyman S.
    We study the problem of sequential prediction of real-valued sequences under the squared error loss function. While refraining from any statistical or structural assumptions on the underlying sequence, we introduce a competitive approach to this problem and compare the performance of a sequential algorithm with respect to a large and continuous class of parametric predictors. We define the performance difference between a sequential algorithm and the best parametric predictor as regret, and introduce guaranteed worst-case lower bounds on this relative performance measure. In particular, we prove that for any sequential algorithm, there always exists a sequence for which this regret is lower bounded by zero. We then extend this result by showing that the prediction problem can be transformed into a parameter estimation problem if the class of parametric predictors satisfies a certain property, and provide a comprehensive lower bound for this case.
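As a quick illustration of the regret measure described in this abstract, the sketch below compares the accumulated squared error of a simple sequential predictor with that of the best parametric predictor chosen in hindsight. The sequence, the online rule, and the parametric class (affine predictors of the previous sample) are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(200).cumsum()        # an arbitrary real-valued sequence

# A simple sequential predictor: repeat the previous observation.
x_hat = np.concatenate(([0.0], x[:-1]))
seq_loss = np.sum((x - x_hat) ** 2)

# Best affine predictor of x[t] from x[t-1], fit in hindsight (non-causally).
A = np.column_stack((x_hat, np.ones_like(x)))
w, *_ = np.linalg.lstsq(A, x, rcond=None)
param_loss = np.sum((x - A @ w) ** 2)

regret = seq_loss - param_loss               # the relative performance measure (regret)
print(f"regret over n = 200 samples: {regret:.2f}")
```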

    Open Access
    Energy consumption forecasting via order preserving pattern matching
    (IEEE, 2014-12) Vanlı, N. Denizcan; Sayın, Muhammed O.; Yıldız, Hikmet; Göze, Tolga; Kozat, Süleyman S.
    We study sequential prediction of the energy consumption of actual users under a generic loss/utility function. In particular, we try to determine whether the energy usage of a consumer will increase or decrease in the future, which can subsequently be used to optimize energy consumption. To this end, we use the energy consumption history of the users and define finite state (FS) predictors according to the relative ordering patterns of these past observations. To alleviate overfitting problems, we generate equivalence classes by tying several states in a nested manner. Using the resulting equivalence classes, we obtain a doubly exponential number of different FS predictors, one of which achieves the smallest accumulated loss and is hence optimal for the prediction task. We then introduce an algorithm that achieves the performance of this FS predictor, among the doubly exponential number of FS predictors, with significantly reduced computational complexity. Our approach is generic in the sense that different tying configurations and loss functions can be incorporated into our framework in a straightforward manner. We illustrate the merits of the proposed algorithm using real-life energy usage data. © 2014 IEEE.
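A minimal sketch of the basic idea, keeping one predictor per order preserving (relative ordering) pattern of the recent history, is shown below. The window length of 3, the synthetic consumption sequence, and the Laplace-smoothed up/down counters per state are illustrative assumptions; the state-tying (equivalence class) scheme of the paper is not reproduced here.

```python
import numpy as np
from itertools import permutations

def ordering_pattern(window):
    """Return the relative ordering of the window as a tuple of ranks."""
    return tuple(int(r) for r in np.argsort(np.argsort(window)))

h = 3                                          # pattern (state) length
rng = np.random.default_rng(1)
x = rng.standard_normal(500).cumsum()          # stand-in for a consumption history

counts = {p: [1, 1] for p in permutations(range(h))}   # smoothed [down, up] counts per state
correct = 0
for t in range(h, len(x)):
    state = ordering_pattern(x[t - h:t])       # pattern of the h most recent observations
    up_prob = counts[state][1] / sum(counts[state])
    predict_up = up_prob >= 0.5                # predict whether usage will increase at time t
    went_up = x[t] > x[t - 1]
    correct += (predict_up == went_up)
    counts[state][int(went_up)] += 1           # sequential update of this state's statistics
print(f"up/down accuracy: {correct / (len(x) - h):.3f}")
```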

    Open Access
    Sequential prediction over hierarchical structures
    (Institute of Electrical and Electronics Engineers Inc., 2016) Vanli, N. D.; Gokcesu, K.; Sayin, M. O.; Yildiz, H.; Kozat, S. S.
    We study sequential compound decision problems in the context of sequential prediction of real-valued sequences. In particular, we consider finite state (FS) predictors that are constructed based on a hierarchical structure, such as the order preserving patterns of the sequence history. We define hierarchical equivalence classes by tying certain models at a hierarchy level in a recursive manner in order to mitigate undertraining problems. These equivalence classes defined on a hierarchical structure are then used to construct a super exponential number of sequential FS predictors based on their combinations and permutations. We then introduce truly sequential algorithms with computational complexity only linear in the pattern length that 1) asymptotically achieve the performance of the best FS predictor or the best linear combination of all the FS predictors in an individual sequence manner, without any stochastic assumptions, over any data length n under a wide range of loss functions; 2) achieve the mean square error of the best linear combination of all FS filters or predictors in the steady state for certain nonstationary models. We illustrate the superior convergence and tracking capabilities of our algorithm with respect to several state-of-the-art methods in the literature through simulations over synthetic and real benchmark data. © 1991-2012 IEEE.
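The generic mechanism behind "achieving the performance of the best predictor" in a pool can be illustrated with a standard exponentially weighted mixture, as sketched below. The three toy experts and the learning rate are illustrative assumptions; the paper's contribution is doing this over a super exponential pool of hierarchical FS predictors with complexity only linear in the pattern length, which this sketch does not attempt.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.sin(np.arange(400) / 10.0) + 0.1 * rng.standard_normal(400)

def experts(history):
    """Three toy predictors of the next sample from the history so far."""
    last = history[-1]
    mean2 = np.mean(history[-2:]) if len(history) >= 2 else last
    return np.array([last, mean2, 0.0])          # last value, 2-sample mean, constant zero

eta = 0.5                                        # learning rate for the exponential weights
w = np.ones(3) / 3
mix_loss, exp_loss = 0.0, np.zeros(3)
for t in range(1, len(x)):
    preds = experts(x[:t])
    mix = float(w @ preds)                       # weighted combination of the experts
    losses = (x[t] - preds) ** 2
    mix_loss += float((x[t] - mix) ** 2)
    exp_loss += losses
    w *= np.exp(-eta * losses)                   # exponential weight update
    w /= w.sum()
print(f"mixture loss {mix_loss:.1f} vs best expert loss {exp_loss.min():.1f}")
```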

    Open Access
    Sequential regression techniques with second order methods
    (2017-07) Civek, Burak Cevat
    The sequential regression problem is one of the most widely investigated topics in the machine learning and signal processing literatures. In order to adequately model the underlying structure of real-life data sequences, many regression methods employ nonlinear modeling approaches. In this context, in the first chapter, we introduce highly efficient sequential nonlinear regression algorithms that are suitable for real-life applications. We process the data in a truly online manner such that no storage is needed. For nonlinear modeling, we use a hierarchical piecewise linear approach based on the notion of decision trees, where the space of the regressor vectors is adaptively partitioned. For the first time in the literature, we learn both the piecewise linear partitioning of the regressor space and the linear models in each region using highly effective second-order methods, i.e., Newton-Raphson methods. Hence, we avoid the well-known overfitting issues by using piecewise linear models and achieve substantial performance gains compared to the state of the art. In the second chapter, we investigate the problem of sequential prediction for real-life big data applications. Second-order Newton-Raphson methods asymptotically achieve the performance of the "best" possible predictor much faster than first-order algorithms. However, their use in real-life big data applications is impractical because of their extremely high computational needs. To this end, in order to enjoy the outstanding performance of the second-order methods, we introduce a highly efficient implementation in which the computational complexity is reduced from quadratic to linear. For both chapters, we demonstrate our gains on well-known benchmark and real-life data sets and provide performance results in an individual sequence manner, guaranteed to hold without any statistical assumptions.
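For a plain linear model, the second-order online update described here can be sketched as a recursive least squares style Newton step, with the inverse Hessian maintained by a rank-one (Sherman-Morrison) update so no matrix inversion is needed per step. The data, dimension, and regularization below are illustrative assumptions; the thesis's actual algorithms operate on adaptively partitioned piecewise linear models.

```python
import numpy as np

rng = np.random.default_rng(3)
d, n = 4, 1000
w_true = rng.standard_normal(d)
X = rng.standard_normal((n, d))
y = X @ w_true + 0.1 * rng.standard_normal(n)

w = np.zeros(d)
P = np.eye(d) / 1e-2                             # inverse of the regularized Hessian
total_loss = 0.0
for t in range(n):
    x_t = X[t]
    err = y[t] - w @ x_t                         # prediction error before updating
    total_loss += err ** 2
    Px = P @ x_t
    P -= np.outer(Px, Px) / (1.0 + x_t @ Px)     # Sherman-Morrison rank-one update
    w += (P @ x_t) * err                         # second-order step: inverse Hessian times gradient
print(f"average squared error: {total_loss / n:.4f}, ||w - w_true|| = {np.linalg.norm(w - w_true):.3f}")
```

Each step costs O(d^2) for the rank-one update of P, which is what diagonal or structured approximations of the inverse Hessian reduce further toward linear cost.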

    Open Access
    A unified approach to universal prediction: Generalized upper and lower bounds
    (Institute of Electrical and Electronics Engineers Inc., 2015) Vanli, N. D.; Kozat, S. S.
    We study sequential prediction of real-valued, arbitrary, and unknown sequences under the squared error loss, measured with respect to the best parametric predictor out of a large, continuous class of predictors. Inspired by recent results from computational learning theory, we refrain from any statistical assumptions and define the performance with respect to the class of general parametric predictors. In particular, we present generic lower and upper bounds on this relative performance by transforming the prediction task into a parameter learning problem. We first introduce the lower bounds on this relative performance in the mixture of experts framework, where we show that for any sequential algorithm, there always exists a sequence for which the performance of the sequential algorithm is lower bounded by zero. We then introduce a sequential learning algorithm to predict such arbitrary and unknown sequences, and calculate upper bounds on its total squared prediction error for every bounded sequence. We further show that in some scenarios we achieve matching lower and upper bounds, demonstrating that our algorithms are optimal in a strong minimax sense such that their performances cannot be improved further. As an interesting result, we also prove that for the worst-case scenario, the performance of randomized output algorithms can be achieved by sequential algorithms, so that randomized output algorithms do not improve the performance. © 2012 IEEE.
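The closing claim, that randomization cannot help under squared error, is consistent with Jensen's inequality: the expected loss of a randomized prediction is never smaller than the loss of its mean. A small numerical check under assumed (illustrative) choices of sequence and randomized predictor, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(-1.0, 1.0, size=300)                 # an arbitrary bounded sequence

rand_loss, det_loss = 0.0, 0.0
for t in range(1, len(x)):
    center = x[t - 1]                                # both predictors are built around the last sample
    draws = center + 0.3 * rng.standard_normal(50)   # randomized predictor: noisy predictions
    rand_loss += float(np.mean((x[t] - draws) ** 2)) # expected loss of the randomized predictor
    det_loss += float((x[t] - np.mean(draws)) ** 2)  # loss of its deterministic mean
print(f"randomized: {rand_loss:.2f}  deterministic mean: {det_loss:.2f}")
# det_loss <= rand_loss holds term by term, since E[(x - P)^2] = (x - E[P])^2 + Var(P).
```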
