Research Area:  Fog Computing
Heterogeneous computing powered by remote clouds and local fogs is a promising technology for improving the performance of user terminals in the Internet of Things. In this paper, two semi-Markov decision process (SMDP)-based coordinated virtual machine (VM) allocation methods are proposed to balance the tradeoff between the high cost of providing services via the remote cloud and the limited computing capacity of the local fog. We first present a model-based planning method in which the state transition probabilities and the expected time intervals between adjacent decision epochs must be trained. To facilitate this training, the SMDP is reduced to a continuous-time Markov decision process (CTMDP) in which service requests and ongoing service completions follow a continuous-time Markov chain. The relative value iteration algorithm for the CTMDP is used to find an asymptotically optimal VM allocation policy. In addition, we propose a model-free reinforcement learning (RL) method, in which an optimal coordinated VM allocation policy is approximated by learning from feedback in the form of states and rewards. The simulation results show that the performance of the model-free RL method converges to a level similar to that of the model-based planning method and outperforms the greedy VM allocation method.
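To illustrate the planning side of the abstract, the following is a minimal, self-contained sketch of relative value iteration for an average-reward MDP, the standard form obtained after uniformizing a CTMDP. It is not the paper's model: the toy state space, transition matrices, and rewards below are hypothetical stand-ins (e.g., an "action" loosely playing the role of allocating a fog VM versus offloading to the cloud), chosen only to show the algorithmic skeleton.

```python
import numpy as np

# Hypothetical toy problem: 3 system states, 2 actions
# (e.g., serve on a local fog VM vs. offload to the remote cloud).
# P[a] is the (already uniformized) transition matrix under action a,
# r[a] the per-state reward vector; both are randomly generated here.
rng = np.random.default_rng(0)
n_states, n_actions = 3, 2
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))  # shape (a, s, s')
r = rng.uniform(0.0, 1.0, size=(n_actions, n_states))             # shape (a, s)

def relative_value_iteration(P, r, tol=1e-8, max_iter=10_000):
    """Relative value iteration for an average-reward MDP.

    Subtracting the value at a fixed reference state each sweep keeps
    the iterates bounded; the subtracted scalar converges to the
    optimal average reward (gain).
    """
    h = np.zeros(P.shape[1])
    for _ in range(max_iter):
        # Bellman backup: Q[a, s] = r[a, s] + sum_{s'} P[a, s, s'] * h[s']
        Q = r + P @ h
        h_new = Q.max(axis=0)
        gain = h_new[0]          # value at reference state 0
        h_new = h_new - gain     # keep values relative to the reference state
        if np.max(np.abs(h_new - h)) < tol:
            break
        h = h_new
    policy = Q.argmax(axis=0)    # greedy action in each state
    return gain, h, policy

gain, h, policy = relative_value_iteration(P, r)
```

Since the rewards lie in [0, 1], the converged gain also lies in [0, 1], and the relative value of the reference state is pinned to zero by construction.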
Keywords:  
Author(s) Name:  Qizhen Li; Lianwen Zhao; Jie Gao; Hongbin Liang; Lian Zhao; Xiaohu Tang
Journal name:  IEEE Internet of Things Journal
Conference name:  
Publisher name:  IEEE
DOI:  10.1109/JIOT.2018.2818680
Volume Information:  Volume: 5, Issue: 3, June 2018, Page(s): 1977-1988
Paper Link:  https://ieeexplore.ieee.org/abstract/document/8331089