Deep and reinforcement learning for automated task scheduling in large-scale cloud computing systems - 2020

Research Area:  Cloud Computing

Abstract:

Cloud computing is undeniably becoming the main computing and storage platform for today's major workloads. From Internet of Things and Industry 4.0 workloads to big data analytics and decision-making jobs, cloud systems daily receive a massive number of tasks that need to be simultaneously and efficiently mapped onto the cloud resources. Therefore, deriving an appropriate task scheduling mechanism that can minimize both task execution delay and cloud resource utilization is of prime importance. Recently, the concept of cloud automation has emerged to reduce manual intervention and improve resource management in large-scale cloud computing workloads. In this article, we capitalize on this concept and propose four deep and reinforcement learning-based scheduling approaches to automate the process of scheduling large-scale workloads onto cloud computing resources, while reducing both the resource consumption and the task waiting time. These approaches are: reinforcement learning (RL), deep Q networks, recurrent neural network long short-term memory (RNN-LSTM), and deep reinforcement learning combined with LSTM (DRL-LSTM). Experiments conducted using real-world datasets from Google Cloud Platform revealed that DRL-LSTM outperforms the other three approaches. The experiments also showed that DRL-LSTM minimizes the CPU usage cost by up to 67% compared with shortest job first (SJF), and by up to 35% compared with both round robin (RR) and improved particle swarm optimization (PSO). Moreover, our DRL-LSTM solution decreases the RAM memory usage cost by up to 72% compared with SJF, up to 65% compared with RR, and up to 31.25% compared with the improved PSO.
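To make the RL-based scheduling idea concrete, below is a minimal toy sketch of tabular Q-learning for assigning incoming tasks to virtual machines. This is an illustration of the general technique only, not the paper's DRL-LSTM method: the state (index of the least-loaded VM), the reward (negative queue wait), and all task sizes and hyperparameters are assumptions chosen for readability.

```python
import random

# Toy tabular Q-learning scheduler: assign each incoming task to one
# of NUM_VMS virtual machines so as to minimize queueing delay.
# State  = index of the currently least-loaded VM (a crude summary).
# Action = VM chosen for the incoming task.
# Reward = negative wait time the task experiences on that VM.
NUM_VMS = 3
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # assumed hyperparameters

def schedule(tasks, episodes=200, seed=0):
    rng = random.Random(seed)
    q = [[0.0] * NUM_VMS for _ in range(NUM_VMS)]  # Q[state][action]
    for _ in range(episodes):
        loads = [0.0] * NUM_VMS  # outstanding work per VM
        for size in tasks:
            state = loads.index(min(loads))
            if rng.random() < EPSILON:               # epsilon-greedy
                action = rng.randrange(NUM_VMS)      # explore
            else:                                    # exploit
                action = max(range(NUM_VMS), key=lambda a: q[state][a])
            wait = loads[action]                     # time spent queued
            loads[action] += size                    # place the task
            reward = -wait
            next_state = loads.index(min(loads))
            # Standard Q-learning update rule.
            q[state][action] += ALPHA * (
                reward + GAMMA * max(q[next_state]) - q[state][action])
    return q

q_table = schedule([3.0, 1.0, 2.0, 4.0, 2.0])
```

The learned greedy policy spreads tasks across VMs instead of piling them onto one queue; the paper's deep variants replace this lookup table with neural networks (DQN, LSTM) so the policy scales to the large state spaces of real cloud workloads.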

Keywords:  

Author(s) Name:  Gaith Rjoub, Jamal Bentahar, Omar Abdel Wahab, Ahmed Saleh Bataineh

Journal name:  Concurrency and Computation: Practice and Experience

Conference name:  

Publisher name:  Wiley

DOI:  10.1002/cpe.5919

Volume Information:  Volume 33, Issue 23