Big-data streaming applications scheduling based on staged multi-armed bandits
Van Der Schaar, M.
IEEE Transactions on Computers
Institute of Electrical and Electronics Engineers
pp. 3591-3605
Several techniques have recently been proposed to adapt Big-Data streaming applications to existing many-core platforms. Among them, online reinforcement learning methods learn how to adapt, at run-time, the throughput and the resources allocated to the various streaming tasks as the data stream characteristics and the desired application performance (e.g., accuracy) change dynamically. However, most state-of-the-art techniques consider only a single input stream in their application model and assume that the system already knows how many resources each task needs to achieve a desired performance. To address these limitations, this paper proposes a new systematic and efficient methodology, with associated algorithms, for the online learning and energy-efficient scheduling of Big-Data streaming applications with multiple streams on many-core systems under resource constraints. We formalize multi-stream scheduling as a staged decision problem in which the performance obtained for the various resource allocations is unknown. The proposed scheduling methodology uses a novel class of online adaptive learning techniques which we refer to as staged multi-armed bandits (S-MAB). The scheduler learns online which processing method to assign to each stream and how to allocate its resources over time, maximizing performance on the fly, at run-time, without access to any offline information. Applied to a face detection streaming application, and without using any offline information, the proposed scheduler achieves performance similar to that of an optimal semi-online solution with full knowledge of the input stream: the differences in throughput, observed quality, resource usage, and energy efficiency are less than 1, 0.3, 0.2, and 4 percent, respectively.
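The staged multi-armed bandit formulation itself is not reproduced in this record, but the core bandit idea the abstract relies on — learning online which processing method to assign to a stream when the reward of each method is initially unknown — can be sketched with a standard UCB1 bandit. This is a minimal illustrative sketch, not the paper's S-MAB algorithm: the three "processing methods" and their fixed rewards are invented for the example.

```python
import math

class UCB1:
    """Upper-confidence-bound bandit: one arm per candidate processing method."""

    def __init__(self, n_arms):
        self.counts = [0] * n_arms      # pulls per arm
        self.values = [0.0] * n_arms    # running mean reward per arm
        self.t = 0                      # total rounds played

    def select(self):
        self.t += 1
        # Play each arm once before relying on confidence bounds.
        for arm, c in enumerate(self.counts):
            if c == 0:
                return arm
        # UCB score: empirical mean plus an exploration bonus that
        # shrinks as an arm is sampled more often.
        ucb = [v + math.sqrt(2 * math.log(self.t) / c)
               for v, c in zip(self.values, self.counts)]
        return max(range(len(ucb)), key=ucb.__getitem__)

    def update(self, arm, reward):
        self.counts[arm] += 1
        n = self.counts[arm]
        # Incremental mean update.
        self.values[arm] += (reward - self.values[arm]) / n

# Toy run: three hypothetical processing methods with fixed rewards
# (e.g., observed quality of the stream's output under each method).
method_reward = [0.2, 0.5, 0.8]
bandit = UCB1(len(method_reward))
for _ in range(2000):
    arm = bandit.select()
    bandit.update(arm, method_reward[arm])
# After enough rounds, the bandit concentrates its pulls on the
# highest-reward method while still occasionally exploring the others.
```

In the paper's setting the decision is staged rather than flat (method selection is followed by resource allocation for the chosen method), which is precisely what the S-MAB extension addresses; the sketch above shows only the single-stage building block.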
Multiple-stream processing
Reinforcement learning
Published Version (Please cite this version): http://dx.doi.org/10.1109/TC.2016.2550454
Showing items related by title, author, creator and subject.
Tekin, Cem; Yoon, J.; Van Der Schaar, M. (AAAI Press, 2016)With the advances in the field of medical informatics, automated clinical decision support systems are becoming the de facto standard in personalized diagnosis. In order to establish high accuracy and confidence in ...
Ozcelik, E.; Cagiltay, N. E.; Ozcelik, N. S. (Pergamon Press, 2013)Considering the role of games for educational purposes, there has been an increase in interest among educators in applying strategies used in popular games to create more engaging learning environments. Learning is more fun and ...
Vanli, N. D.; Kozat, S. S. (Institute of Electrical and Electronics Engineers Inc., 2015)We study sequential prediction of real-valued, arbitrary, and unknown sequences under the squared error loss as well as the best parametric predictor out of a large, continuous class of predictors. Inspired by recent results ...