Research Area:  Machine Learning
The traditional cloud computing model can no longer satisfy current demand due to limited backhaul bandwidth and high latency. Fog computing addresses this by reducing the number of communications between the cloud computing center and the users, which relieves bandwidth load and energy pressure on the backhaul link; latency is drastically reduced by processing in proximity to the devices. However, the performance of fog computing depends heavily on its resource allocation strategies, so task offloading and resource allocation remain a major challenge. In this paper, we use the advantage actor-critic (A2C) algorithm from deep reinforcement learning (DRL) to jointly optimize the offloading strategy and the network resource allocation strategy, reducing latency for dependent computational tasks in fog computing. A major difficulty of such problems is that the action space has multiple dimensions, which makes the network hard to converge. This paper therefore uses a multi-agent method to simplify the problem by splitting the complete offloading decision into three sub-actions. Numerical simulations demonstrate that the algorithm effectively reduces cost; we also discuss how varying the numbers of devices and fog nodes affects the cost.
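The abstract's key idea, splitting one high-dimensional offloading decision into three sub-actions, can be illustrated with a minimal sketch. The sub-action names (`node`, `power`, `bandwidth`), the layer sizes, and the plain-NumPy policy below are all illustrative assumptions, not the paper's actual architecture; the paper trains these heads with A2C, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical sizes for the three sub-action spaces (assumptions,
# not taken from the paper).
STATE_DIM = 8
N_FOG_NODES = 4      # sub-action 1: which fog node to offload to
N_POWER_LEVELS = 3   # sub-action 2: transmit power level
N_BANDWIDTH = 3      # sub-action 3: bandwidth share

# Shared trunk plus one policy head per sub-action; randomly
# initialized here, where A2C would learn these weights.
W_trunk = rng.normal(scale=0.1, size=(STATE_DIM, 16))
heads = {
    "node":      rng.normal(scale=0.1, size=(16, N_FOG_NODES)),
    "power":     rng.normal(scale=0.1, size=(16, N_POWER_LEVELS)),
    "bandwidth": rng.normal(scale=0.1, size=(16, N_BANDWIDTH)),
}

def select_action(state):
    """Sample one sub-action per head; the joint offloading
    decision is the resulting triple."""
    h = np.tanh(state @ W_trunk)
    action = {}
    for name, W in heads.items():
        probs = softmax(h @ W)
        action[name] = int(rng.choice(len(probs), p=probs))
    return action

state = rng.normal(size=STATE_DIM)
print(select_action(state))
```

Factoring the policy this way keeps each head's output small (4, 3, and 3 choices) instead of one flat head over all 4 × 3 × 3 = 36 joint actions, which is the convergence benefit the abstract attributes to the multi-agent decomposition.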
Author(s) Name:  Wenle Bai; Cheng Qian
Conference name:  IEEE 12th International Conference on Software Engineering and Service Science
Publisher name:  IEEE
Paper Link:   https://ieeexplore.ieee.org/document/9522334