Reinforcement learning as a means of dynamic aggregate QoS provisioning

dc.citation.epage: 114 (en_US)
dc.citation.spage: 100 (en_US)
dc.contributor.author: Akar, Nail (en_US)
dc.contributor.author: Şahin, Cem (en_US)
dc.coverage.spatial: Warsaw, Poland
dc.date.accessioned: 2019-01-29T08:33:13Z
dc.date.available: 2019-01-29T08:33:13Z
dc.date.issued: 2003-03 (en_US)
dc.department: Department of Electrical and Electronics Engineering (en_US)
dc.description: Date of Conference: 24-25 March, 2003
dc.description: Conference name: International Workshop on Architectures for Quality of Service in the Internet (Art-QoS 2003)
dc.description.abstract: Dynamic capacity management (or dynamic provisioning) is the process of dynamically changing the capacity allocation (reservation) of a virtual path (or a pseudo-wire) established between two network end points. This process is based on certain criteria, including the instantaneous traffic load for the pseudo-wire, network utilization, hour of day, or day of week. Frequent adjustment of the capacity creates a scalability issue in the form of a significant amount of message distribution and processing (i.e., signaling) in the network elements involved in the capacity update process; we therefore use the term “signaling rate” for the number of capacity updates per unit time. On the other hand, if the capacity is adjusted once, for the highest-load traffic conditions, a significant amount of bandwidth may be wasted depending on the actual traffic load. There is therefore a need for dynamic capacity management that takes into account the tradeoff between signaling scalability and bandwidth efficiency. In this paper, we introduce a Markov decision framework for an optimal capacity management scheme. Moreover, for large problem instances and for cases in which the desired signaling rate is imposed as a constraint, we provide suboptimal schemes using reinforcement learning. Our numerical results demonstrate that the proposed reinforcement learning schemes provide significantly better bandwidth efficiency than a static allocation policy, without violating the signaling rate requirements of the underlying network. (en_US)
dc.description.provenance: Submitted by Ebru Kaya (ebrukaya@bilkent.edu.tr) on 2019-01-29T08:33:13Z. No. of bitstreams: 1. Reinforcement Learning as a Means of Dynamic Aggregate QoS Provisioning.pdf: 645367 bytes, checksum: fa42af3567abe0760c99b5543b543b75 (MD5) (en)
dc.description.provenance: Made available in DSpace on 2019-01-29T08:33:13Z (GMT). Previous issue date: 2003 (en)
dc.description.sponsorship: This work is supported by The Scientific and Technical Research Council of Turkey (TUBITAK) under grant EEEAG-101E048 (en_US)
dc.identifier.doi: 10.1007/3-540-45020-3_8 (en_US)
dc.identifier.uri: http://hdl.handle.net/11693/48467
dc.language.iso: English (en_US)
dc.publisher: Springer (en_US)
dc.relation.isversionof: https://doi.org/10.1007/3-540-45020-3_8 (en_US)
dc.source.title: Architectures for Quality of Service in the Internet: International Workshop, Art-QoS 2003 (en_US)
dc.subject: Reinforcement learning (en_US)
dc.subject: Average cost (en_US)
dc.subject: Voice call (en_US)
dc.subject: Label switch path (en_US)
dc.subject: Decision epoch (en_US)
dc.title: Reinforcement learning as a means of dynamic aggregate QoS provisioning (en_US)
dc.type: Conference Paper (en_US)
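
The abstract above frames capacity management as a Markov decision problem in which each decision epoch trades wasted bandwidth against the signaling cost of a capacity update, with reinforcement learning used for large, signaling-constrained instances. The sketch below is a minimal, hypothetical tabular Q-learning illustration of that general tradeoff, not the scheme of the paper: the load model, capacity levels, cost weights, and the two-action set are all invented purely for illustration.

```python
# Hypothetical toy model: at each decision epoch the agent either keeps the
# current reservation or resizes it to the observed load, paying a fixed
# signaling cost per update. Rewards penalize over-provisioned bandwidth,
# unserved load, and signaling. All parameters are assumptions for this sketch.
import random

CAPACITY_LEVELS = list(range(0, 11))   # discretized reservation levels
LOAD_LEVELS = list(range(0, 11))       # discretized offered load
ACTIONS = [0, 1]                       # 0 = keep reservation, 1 = resize to load
SIGNALING_COST = 2.0                   # penalty per capacity update
OVERFLOW_COST = 5.0                    # penalty per unit of unserved load
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

# Q[(load, capacity)][action]
Q = {(l, c): [0.0, 0.0] for l in LOAD_LEVELS for c in CAPACITY_LEVELS}

def step(load, capacity, action):
    """Apply the action, draw the next load, and return (next_state, reward)."""
    signaling = 0.0
    if action == 1:
        capacity = load                          # resize reservation to current load
        signaling = SIGNALING_COST
    wasted = max(capacity - load, 0)             # over-provisioned bandwidth
    overflow = max(load - capacity, 0)           # unserved (blocked) load
    reward = -(wasted + OVERFLOW_COST * overflow + signaling)
    next_load = max(0, min(10, load + random.choice([-1, 0, 1])))  # toy load walk
    return (next_load, capacity), reward

state = (5, 5)
for _ in range(200_000):
    load, cap = state
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)          # explore
    else:
        action = Q[state].index(max(Q[state]))   # exploit
    next_state, reward = step(load, cap, action)
    # standard one-step Q-learning update
    Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
    state = next_state

print(Q[(8, 3)], Q[(5, 5)])  # inspect learned action values for two sample states
```

With these toy costs, the learned policy tends to resize only when the load-capacity mismatch outweighs the per-update signaling penalty and to keep the reservation otherwise, which is exactly the signaling-versus-bandwidth tradeoff the abstract describes.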

Files

Original bundle

Name: Reinforcement Learning as a Means of Dynamic Aggregate QoS Provisioning.pdf
Size: 630.24 KB
Format: Adobe Portable Document Format
Description: Full printable version

License bundle

Name: license.txt
Size: 1.71 KB
Format: Item-specific license agreed to upon submission
Description: