Research Area:  Machine Learning
This paper studies a federated edge learning system, in which an edge server coordinates a set of edge devices to train a shared machine learning (ML) model based on their locally distributed data samples. During the distributed training, we exploit a joint communication and computation design to improve the system energy efficiency, in which the communication resource allocation for global ML-parameter aggregation and the computation resource allocation for local ML-parameter updates are jointly optimized. In particular, we consider two transmission protocols for the edge devices to upload ML-parameters to the edge server, based on non-orthogonal multiple access (NOMA) and time division multiple access (TDMA), respectively. Under both protocols, we minimize the total energy consumption at all edge devices over a finite training duration subject to a given training accuracy, by jointly optimizing the transmission power and rates at the edge devices for uploading ML-parameters and their central processing unit (CPU) frequencies for local updates. We propose efficient algorithms to solve the formulated energy minimization problems using techniques from convex optimization. Numerical results show that, compared with benchmark schemes, the proposed joint communication and computation design can significantly improve the energy efficiency of the federated edge learning system by properly balancing the energy tradeoff between communication and computation.
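To make the problem structure concrete, below is a minimal sketch of how the TDMA-case energy minimization could be posed as a convex program, written with the CVXPY modeling library. It is an illustration under simplifying assumptions (one local update per communication round, sequential uploads within equal per-round slots, static channels), not the authors' exact formulation; all parameter values are hypothetical placeholders.

# Minimal sketch of the TDMA-case energy minimization (assumed model,
# hypothetical parameters); not the paper's exact formulation.
import numpy as np
import cvxpy as cp

np.random.seed(0)
K = 4                     # number of edge devices (assumed)
N = 10                    # number of communication rounds (assumed)
T = 5.0                   # total training duration budget, seconds (assumed)
B = 1e6                   # uplink bandwidth, Hz (assumed)
N0B = 1e-9                # noise power over the bandwidth, W (assumed)
S = 1e5                   # ML-parameter payload per round, bits (assumed)
C = 1e8 * np.ones(K)      # CPU cycles per local update (assumed)
kappa = 1e-28             # effective switched-capacitance constant (assumed)
g = 1e-7 + 1e-6 * np.random.rand(K)   # uplink channel gains (assumed)

tau = cp.Variable(K, pos=True)  # per-round TDMA upload time of each device
f = cp.Variable(K, pos=True)    # CPU frequency of each device
z = cp.Variable(K)              # epigraph variable for tau * 2^{S/(B tau)}

# Inverting the Shannon rate S/tau = B log2(1 + p g / N0B) gives per-round
# upload energy (N0B/g) * (tau * 2^{S/(B tau)} - tau), a convex perspective
# term; encode tau * exp(a/tau) <= z via the exponential cone.
a = np.log(2) * S / B
cone = [cp.ExpCone(a * np.ones(K), tau, z)]

E_com = N * cp.sum(cp.multiply(N0B / g, z - tau))          # upload energy
E_cmp = N * kappa * cp.sum(cp.multiply(C, cp.square(f)))   # CPU energy

constraints = cone + [
    # per round: local update time plus all sequential uploads fit the slot
    cp.multiply(C, cp.inv_pos(f)) + cp.sum(tau) <= T / N,
    f <= 2e9,  # maximum CPU frequency, Hz (assumed)
]

prob = cp.Problem(cp.Minimize(E_com + E_cmp), constraints)
prob.solve()
print("total energy [J]:", prob.value)

The sketch exhibits the tradeoff the abstract refers to: shrinking the upload time tau raises transmit energy exponentially, while lowering the CPU frequency f saves computation energy (which scales with f^2 per cycle) but lengthens local updates, so the two resource allocations must be balanced jointly within the time budget T.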
Keywords:  
Author(s) Name:  Xiaopeng Mo; Jie Xu
Journal name:  Journal of Communications and Information Networks
Conference name:  
Publisher name:  IEEE
DOI:  10.23919/JCIN.2021.9475121
Volume Information:  Volume 6, Issue 2, June 2021, Pages 110-124
Paper Link:  https://ieeexplore.ieee.org/document/9475121