Research Area:  Machine Learning
In federated learning, heterogeneity among the clients' local datasets results in large variations in the number of local updates performed by each client in a communication round. Simply aggregating such local models into a global model limits the capacity of the system; that is, a single global model is restricted from delivering good performance on each client's task. This paper provides a general framework for analyzing the convergence of personalized federated learning algorithms. It subsumes previously proposed methods and provides a principled understanding of their computational guarantees. Using insights from this analysis, we propose PFedAtt, a personalized federated learning method that incorporates attention-based grouping to facilitate collaboration among similar clients. Theoretically, we provide a convergence guarantee for the algorithm, and empirical experiments corroborate the competitive performance of PFedAtt on heterogeneous clients.
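The attention-based grouping described above can be illustrated with a minimal sketch: each client's personalized model is an attention-weighted average of all clients' models, where the weights come from a softmax over similarity scores. This is an illustrative toy (the function name `attention_aggregate`, dot-product similarity, and the `temperature` parameter are assumptions for exposition), not the paper's exact PFedAtt update.

```python
import numpy as np

def attention_aggregate(target, client_models, temperature=1.0):
    """Attention-weighted aggregation of client model vectors for one client.

    Illustrative sketch only: similarity is taken as the dot product with the
    target client's model, and weights are a softmax over those scores.
    """
    models = np.stack(client_models)           # shape: (n_clients, dim)
    scores = models @ target / temperature     # dot-product similarity scores
    weights = np.exp(scores - scores.max())    # numerically stable softmax
    weights /= weights.sum()
    return weights @ models                    # attention-weighted average

# Toy example: the first two clients are similar to the target, the third is not,
# so the aggregate is dominated by the similar clients.
target = np.array([1.0, 0.0])
clients = [np.array([0.9, 0.1]), np.array([1.1, -0.1]), np.array([-1.0, 0.0])]
personalized = attention_aggregate(target, clients)
```

With a lower `temperature`, the softmax sharpens and the personalized model leans even more heavily on the most similar clients.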
Author(s) Name:  Zichen Ma, Yu Lu, Wenye Li, Jinfeng Yi, Shuguang Cui
Conference name:  Proceedings of The 13th Asian Conference on Machine Learning
Publisher name:  PMLR (Proceedings of Machine Learning Research)
Volume Information:  Volume 157
Paper Link:   https://proceedings.mlr.press/v157/ma21a.html