Research Area:  Machine Learning
Fairness and robustness are two important concerns for federated learning systems. In this work, we identify that robustness to data and model poisoning attacks and fairness, measured as the uniformity of performance across devices, are competing constraints in statistically heterogeneous networks. To address these constraints, we propose employing a simple, general framework for personalized federated learning, Ditto, that can inherently provide fairness and robustness benefits, and develop a scalable solver for it. Theoretically, we analyze the ability of Ditto to achieve fairness and robustness simultaneously on a class of linear problems. Empirically, across a suite of federated datasets, we show that Ditto not only achieves competitive performance relative to recent personalization methods, but also enables more accurate, robust, and fair models relative to state-of-the-art fair or robust baselines.
Author(s) Name:  Tian Li, Shengyuan Hu, Ahmad Beirami, Virginia Smith
Conference name:  Proceedings of the 38th International Conference on Machine Learning
Publisher name:  PMLR (Proceedings of Machine Learning Research)
Volume Information:   Volume 139
Paper Link:   https://proceedings.mlr.press/v139/li21h.html