Research Area:  Machine Learning
Due to the rapid growth of intelligent devices and Internet of Things (IoT) applications in recent years, the volume of data generated by these devices is growing continuously. Moving all of this data to cloud datacenters is therefore impractical, as it would increase bandwidth usage, latency, cost, and energy consumption. In such cases, the fog layer is the best place for data processing. In the fog layer, computing equipment dedicates part of its limited resources to processing IoT application tasks, so efficient utilization of computing resources is of great importance and requires an optimal and intelligent task-scheduling strategy. In this paper, we focus on the task scheduling of fog-based IoT applications with the aim of minimizing long-term service delay and computation cost under resource and deadline constraints. To address this problem, we adopt a reinforcement learning approach and propose a Double Deep Q-Learning (DDQL)-based scheduling algorithm that uses the target network and experience replay techniques. The evaluation results reveal that our proposed algorithm outperforms several baseline algorithms in terms of service delay, computation cost, energy consumption, and task accomplishment, and also handles the Single Point of Failure (SPoF) and load balancing challenges.
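For readers unfamiliar with the technique, the sketch below illustrates a minimal Double DQN update step with the two mechanisms the abstract names: a target network and experience replay. It is not the paper's implementation; the state/action dimensions, network architecture, discount factor, and learning rate are placeholder assumptions for illustration only.

```python
# Minimal Double DQN update sketch (assumed setup, not the paper's code).
# State encodes task/node features; actions pick among candidate fog nodes.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 8, 4   # assumed dimensions for the sketch
GAMMA = 0.99                  # assumed discount factor

def make_net():
    return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                         nn.Linear(64, N_ACTIONS))

online, target = make_net(), make_net()
target.load_state_dict(online.state_dict())   # target net starts as a copy
optimizer = torch.optim.Adam(online.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)                 # experience replay buffer

def train_step(batch_size=32):
    if len(replay) < batch_size:
        return
    # Each stored transition: (state, action, reward, next_state, done) tensors.
    s, a, r, s2, done = map(torch.stack, zip(*random.sample(replay, batch_size)))
    # Double Q-learning: the online net SELECTS the next action, the target
    # net EVALUATES it, which reduces the overestimation bias of plain DQN.
    with torch.no_grad():
        next_a = online(s2).argmax(dim=1, keepdim=True)
        y = r + GAMMA * (1 - done) * target(s2).gather(1, next_a).squeeze(1)
    q = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(q, y)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
```

In a full training loop, transitions are appended to `replay` as the scheduler interacts with the environment, and the target network's weights are periodically re-synchronized from the online network (e.g., every fixed number of steps).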
Keywords:  
Double Deep Q-Learning (DDQL)
Scheduling Algorithm
Fog
IoT Applications
Deep Reinforcement Learning
Author(s) Name:  Pegah Gazori, Dadmehr Rahbari, Mohsen Nickray
Journal name:  Future Generation Computer Systems
Conference name:  
Publisher name:  Elsevier
DOI:  https://doi.org/10.1016/j.future.2019.09.060
Volume Information:  Volume 110, September 2020, Pages 1098-1115
Paper Link:   https://www.sciencedirect.com/science/article/abs/pii/S0167739X19308702