A comparative study of deep learning architectures for multivariate cloud workload prediction
Abstract
Cloud computing and cloud data centers are in high demand due to the benefits they offer customers, including but not limited to low cost, high availability, reliability, robustness, and scalability. Cloud service providers are obliged to fulfill service level agreements that promise high quality of service to their customers. This brings out the need for effective and efficient utilization of data center resources, especially the resources of the compute servers. Accurate prediction of workloads in cloud computing environments plays a critical role in achieving proactive and effective resource allocation and scaling policies. Cloud workload prediction is a challenging task due to the high dimensionality, variance, and complexity of workload data. In addition, workload prediction models are expected to learn workload patterns correctly from a sufficient number of past observations while also handling longer forecast horizons accurately. To address these challenges, we investigated and compared five deep learning-based schemes for multivariate time series forecasting to predict the CPU utilization of virtual machines in cloud data centers. The performance of the deep learning schemes is analyzed and compared using two real-world data sets: the Alibaba cluster trace and the Bitbrains trace. Our study reveals the relative strengths and weaknesses of the compared schemes for cloud workload prediction. We also observed that, among the compared schemes, the Encoder-Decoder LSTM network with attention is the most effective solution for workload prediction in cloud computing.
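
To make the best-performing architecture concrete, the following is a minimal PyTorch sketch of an encoder-decoder LSTM with dot-product attention for multivariate workload forecasting. It is an illustration of the general technique named in the abstract, not the authors' implementation: the layer sizes, the 48-step lookback window, the 12-step forecast horizon, and the 8-feature input layout are all assumptions made for the example.

# A minimal sketch of an encoder-decoder LSTM with attention for
# multivariate workload forecasting (PyTorch). All names, dimensions,
# and hyperparameters are illustrative assumptions, not the
# configuration used in the study.
import torch
import torch.nn as nn

class EncoderDecoderAttention(nn.Module):
    def __init__(self, n_features=8, hidden_size=64, horizon=12):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.decoder = nn.LSTMCell(1, hidden_size)
        self.attn_out = nn.Linear(hidden_size * 2, hidden_size)
        self.proj = nn.Linear(hidden_size, 1)

    def forward(self, x):
        # x: (batch, lookback, n_features) -- past multivariate observations
        enc_out, (h, c) = self.encoder(x)      # enc_out: (B, T, H)
        h, c = h.squeeze(0), c.squeeze(0)      # decoder initial state
        y = x[:, -1, :1]                       # seed with the last CPU reading
        preds = []
        for _ in range(self.horizon):
            h, c = self.decoder(y, (h, c))
            # Dot-product (Luong-style) attention over encoder outputs.
            scores = torch.bmm(enc_out, h.unsqueeze(2))      # (B, T, 1)
            weights = torch.softmax(scores, dim=1)
            context = (weights * enc_out).sum(dim=1)         # (B, H)
            combined = torch.tanh(
                self.attn_out(torch.cat([h, context], dim=1)))
            y = self.proj(combined)                          # next-step CPU
            preds.append(y)
        return torch.cat(preds, dim=1)         # (B, horizon)

# Example: forecast 12 future CPU-utilization steps from a 48-step window
# of 8 hypothetical features (e.g., CPU, memory, network in/out, disk I/O).
model = EncoderDecoderAttention()
window = torch.randn(32, 48, 8)
forecast = model(window)   # shape: (32, 12)

At each decoding step, the attention weights let the decoder focus on the most relevant past time steps rather than relying only on the final encoder state, which is one plausible reason such models handle longer forecast horizons better than a plain LSTM.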