A unified approach to universal prediction: Generalized upper and lower bounds
dc.citation.epage | 651 | en_US |
dc.citation.issueNumber | 3 | en_US |
dc.citation.spage | 646 | en_US |
dc.citation.volumeNumber | 26 | en_US |
dc.contributor.author | Vanli, N. D. | en_US |
dc.contributor.author | Kozat, S. S. | en_US |
dc.date.accessioned | 2016-02-08T09:58:40Z | |
dc.date.available | 2016-02-08T09:58:40Z | |
dc.date.issued | 2015 | en_US |
dc.department | Department of Electrical and Electronics Engineering | en_US |
dc.description.abstract | We study sequential prediction of real-valued, arbitrary, and unknown sequences under the squared error loss with respect to the best parametric predictor out of a large, continuous class of predictors. Inspired by recent results from computational learning theory, we refrain from any statistical assumptions and define the performance with respect to the class of general parametric predictors. In particular, we present generic lower and upper bounds on this relative performance by transforming the prediction task into a parameter learning problem. We first introduce the lower bounds on this relative performance in the mixture of experts framework, where we show that for any sequential algorithm, there always exists a sequence for which the performance of the sequential algorithm is lower bounded by zero. We then introduce a sequential learning algorithm to predict such arbitrary and unknown sequences, and calculate upper bounds on its total squared prediction error for every bounded sequence. We further show that in some scenarios we achieve matching lower and upper bounds, demonstrating that our algorithms are optimal in a strong minimax sense such that their performances cannot be improved further. As an interesting result, we also prove that for the worst case scenario, the performance of randomized output algorithms can be achieved by sequential algorithms, so that randomized output algorithms do not improve the performance. © 2015 IEEE. | en_US |
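The abstract describes sequential prediction of an arbitrary bounded sequence under squared error loss, competing against the best fixed parametric predictor. As an illustration only (this is not the paper's algorithm, and all names and parameters below are assumptions), a minimal sketch of that setting is an online regularized least-squares predictor that updates a linear model after each revealed sample and accumulates squared prediction error against the sequence:

```python
# Illustrative sketch, NOT the algorithm from the paper: an online
# recursive-least-squares (RLS) predictor for a scalar sequence,
# competing against the best fixed linear predictor of order m.
import numpy as np

def sequential_predict(x, m=2, delta=1.0):
    """Predict x[t] from the previous m samples, updating the
    linear weights online via RLS after each sample is revealed."""
    n = len(x)
    w = np.zeros(m)                # current weight vector
    P = np.eye(m) / delta          # inverse-correlation estimate
    preds = np.zeros(n)
    for t in range(m, n):
        u = x[t - m:t][::-1]       # most recent m samples, newest first
        preds[t] = w @ u           # sequential prediction of x[t]
        # RLS update using the newly revealed sample x[t]
        Pu = P @ u
        k = Pu / (1.0 + u @ Pu)    # gain vector
        w = w + k * (x[t] - preds[t])
        P = P - np.outer(k, Pu)
    return preds

# Accumulated squared prediction error on a bounded test sequence
rng = np.random.default_rng(0)
x = np.sin(0.3 * np.arange(200)) + 0.1 * rng.standard_normal(200)
err = np.sum((x[2:] - sequential_predict(x, m=2)[2:]) ** 2)
```

The quantity `err` plays the role of the total squared prediction error that the paper's upper bounds control for every bounded sequence; the paper's analysis covers general parametric predictor classes, not just this linear case.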
dc.description.provenance | Made available in DSpace on 2016-02-08T09:58:40Z (GMT). No. of bitstreams: 1 bilkent-research-paper.pdf: 70227 bytes, checksum: 26e812c6f5156f83f0e77b261a471b5a (MD5) Previous issue date: 2015 | en |
dc.identifier.doi | 10.1109/TNNLS.2014.2317552 | en_US |
dc.identifier.issn | 2162-237X | |
dc.identifier.uri | http://hdl.handle.net/11693/22325 | |
dc.language.iso | eng | en_US |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | en_US |
dc.relation.isversionof | http://dx.doi.org/10.1109/TNNLS.2014.2317552 | en_US |
dc.source.title | IEEE Transactions on Neural Networks and Learning Systems | en_US |
dc.subject | Online learning | en_US |
dc.subject | Computation theory | en_US |
dc.subject | Forecasting | en_US |
dc.subject | Sequential switching | en_US |
dc.subject | Computational learning theory | en_US |
dc.subject | Lower and upper bounds | en_US |
dc.subject | Sequential learning algorithm | en_US |
dc.subject | Sequential prediction | en_US |
dc.subject | Squared prediction errors | en_US |
dc.subject | Upper and lower bounds | en_US |
dc.subject | Worst-case performance | en_US |
dc.subject | Algorithms | en_US |
dc.title | A unified approach to universal prediction: Generalized upper and lower bounds | en_US |
dc.type | Article | en_US |
Files
Original bundle
1 - 1 of 1
- Name: A unified approach to universal prediction Generalized upper and lower bounds.pdf
- Size: 199.88 KB
- Format: Adobe Portable Document Format
- Description: Full printable version