Reinforcement learning as a means of dynamic aggregate QoS provisioning
Abstract
Dynamic capacity management (or dynamic provisioning) is the process of dynamically changing the capacity allocation (reservation) of a virtual path (or pseudo-wire) established between two network end points. This process is driven by criteria such as the instantaneous traffic load of the pseudo-wire, overall network utilization, hour of day, or day of week. Frequent capacity adjustment poses a scalability problem, since it generates a significant amount of message distribution and processing (i.e., signaling) in the network elements involved in the capacity update process; we therefore use the term "signaling rate" for the number of capacity updates per unit time. On the other hand, if the capacity is adjusted only once and sized for the most heavily loaded traffic conditions, a significant amount of bandwidth may be wasted, depending on the actual traffic load. There is thus a need for dynamic capacity management that takes into account the tradeoff between signaling scalability and bandwidth efficiency. In this paper, we introduce a Markov decision framework for an optimal capacity management scheme. Moreover, for large problem instances, and for cases in which the desired signaling rate is imposed as a constraint, we provide suboptimal schemes using reinforcement learning. Our numerical results demonstrate that the proposed reinforcement learning schemes achieve significantly better bandwidth efficiency than a static allocation policy without violating the signaling rate requirements of the underlying network.
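To make the signaling-versus-bandwidth tradeoff concrete, the following is a minimal tabular Q-learning sketch of the capacity-update decision problem in Python. Everything here is an illustrative assumption rather than the paper's actual scheme: the toy birth-death traffic model, the cost constants (SIGNALING_COST, OVERFLOW_COST), and the learning parameters (ALPHA, GAMMA, EPS) are invented for the example, and the signaling cost is simply folded into the reward as a penalty, whereas the paper treats the desired signaling rate as an explicit constraint.

    # Minimal, self-contained Q-learning sketch of the capacity-update problem.
    # All names, constants, and the traffic model are illustrative assumptions,
    # not the paper's formulation (which imposes the signaling rate as a constraint).
    import random
    from collections import defaultdict

    LEVELS = 10            # discrete capacity levels the pseudo-wire may reserve
    ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1  # assumed learning rate / discount / exploration
    SIGNALING_COST = 2.0   # assumed penalty charged per capacity update (signaling)
    OVERFLOW_COST = 20.0   # assumed penalty per unit of load exceeding the reservation

    Q = defaultdict(float)  # Q[(state, action)] -> estimated long-run value

    def step_load(load):
        """Toy birth-death traffic model: load drifts up or down by one level."""
        return min(LEVELS - 1, max(0, load + random.choice((-1, 0, 1))))

    def reward(load, cap, updated):
        """Negative cost: wasted bandwidth + overflow penalty + signaling penalty."""
        r = -max(0, cap - load)                  # reserved but unused capacity is wasted
        r -= OVERFLOW_COST * max(0, load - cap)  # under-provisioning penalty
        if updated:
            r -= SIGNALING_COST                  # discourages frequent capacity updates
        return r

    def choose(state, actions):
        """Epsilon-greedy selection over candidate capacity levels."""
        if random.random() < EPS:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(state, a)])

    load, cap = 0, 0
    actions = list(range(LEVELS))  # an action is the new capacity level to reserve
    for _ in range(200_000):
        state = (load, cap)
        new_cap = choose(state, actions)
        next_load = step_load(load)
        r = reward(next_load, new_cap, updated=(new_cap != cap))
        next_state = (next_load, new_cap)
        best_next = max(Q[(next_state, a)] for a in actions)
        # Standard one-step Q-learning update toward the observed reward
        # plus the discounted value of the best next action.
        Q[(state, new_cap)] += ALPHA * (r + GAMMA * best_next - Q[(state, new_cap)])
        load, cap = next_load, new_cap

After training, the greedy policy derived from Q updates the reservation only when the signaling penalty is outweighed by the expected bandwidth savings, which is the qualitative behavior the abstract describes; raising SIGNALING_COST in this sketch plays the role that tightening the signaling rate constraint plays in the paper.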