Research Area:  Machine Learning
Deep learning has been applied in many areas, such as computer vision, natural language processing, and emotion analysis. Unlike traditional deep learning, which collects user data centrally, federated deep learning requires participants to train networks on their private datasets and share only the training results, and hence offers better efficiency and stronger security. However, it still presents privacy issues, since adversaries can deduce users' private information from local outputs such as gradients. While private federated deep learning has been an active research topic, the latest findings remain inadequate in terms of security, accuracy, and efficiency. In this paper, we propose an efficient and privacy-preserving federated deep learning protocol based on stochastic gradient descent that integrates additively homomorphic encryption with differential privacy. Specifically, users add noise to each local gradient before encrypting it, to obtain optimal performance and security. Moreover, our scheme is secure in the honest-but-curious server setting even if the cloud server colludes with multiple users. Besides, our scheme supports federated learning in large-scale user scenarios, and extensive experiments demonstrate that it achieves high efficiency and accuracy compared with the non-private model.
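The pipeline the abstract describes (perturb each local gradient with differentially private noise, encrypt it under an additively homomorphic scheme, let the server aggregate ciphertexts) can be sketched as follows. This is a minimal illustration, not the paper's protocol: the toy Paillier keys, the Laplace noise scale, and the fixed-point encoding factor below are all illustrative assumptions, and real deployments would use a vetted cryptographic library with keys of 2048 bits or more.

```python
import math
import random

# Toy Paillier setup: demo primes only (far too small for real security).
P, Q = 10007, 10009
N = P * Q
N2 = N * N
LAM = (P - 1) * (Q - 1)      # phi(n), usable as the decryption exponent
MU = pow(LAM, -1, N)         # modular inverse, valid since g = n + 1
SCALE = 1000                 # fixed-point encoding factor for float gradients

def encrypt(m):
    """Paillier encryption of an integer plaintext (negatives encoded mod N)."""
    m = m % N
    while True:
        r = random.randrange(1, N)
        if math.gcd(r, N) == 1:
            break
    return (pow(N + 1, m, N2) * pow(r, N, N2)) % N2

def decrypt(c):
    """Paillier decryption, decoding values above N/2 as negative."""
    m = ((pow(c, LAM, N2) - 1) // N) * MU % N
    return m - N if m > N // 2 else m

def perturb(grad, noise_scale=0.01):
    """Laplace mechanism on one gradient coordinate, then fixed-point encode."""
    noise = -noise_scale * math.log(1.0 - random.random()) * random.choice([-1, 1])
    return round((grad + noise) * SCALE)

# Three users each hold one gradient coordinate for the same model parameter.
grads = [0.25, -0.10, 0.40]
cts = [encrypt(perturb(g)) for g in grads]

# The server multiplies ciphertexts, which adds the underlying plaintexts,
# so it never sees any individual (even noised) gradient in the clear.
agg = 1
for c in cts:
    agg = (agg * c) % N2
approx_sum = decrypt(agg) / SCALE
print(approx_sum)            # close to sum(grads) = 0.55, up to the DP noise
```

Because the noise is added *before* encryption, the decrypted aggregate already carries the differential-privacy perturbation; the homomorphic layer only hides individual contributions from the server during aggregation.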
Keywords:  
Author(s) Name:  Meng Hao; Hongwei Li; Guowen Xu; Sen Liu; Haomiao Yang
Journal name:  
Conference name:  ICC 2019 - 2019 IEEE International Conference on Communications (ICC)
Publisher name:  IEEE
DOI:  10.1109/ICC.2019.8761267
Volume Information:  
Paper Link:   https://ieeexplore.ieee.org/abstract/document/8761267