Research Area:  Machine Learning
Federated learning has emerged as a promising approach for collaborative and privacy-preserving learning. Participants in a federated learning process cooperatively train a model by exchanging model parameters instead of the actual training data, which they might want to keep private. However, the exchanged parameters and the resulting model may still disclose information about the training data used. To address these privacy concerns, several approaches have been proposed based on differential privacy and secure multiparty computation (SMC), among others. These approaches often incur large communication overhead and slow training times. In this paper, we propose HybridAlpha, an approach for privacy-preserving federated learning employing an SMC protocol based on functional encryption. This protocol is simple, efficient, and resilient to participants dropping out. We evaluate our approach regarding training time and data volume exchanged using a federated learning process to train a CNN on the MNIST data set. Evaluation against existing crypto-based SMC solutions shows that HybridAlpha can reduce the training time by 68% and the data transfer volume by 92% on average while providing the same model performance and privacy guarantees as the existing solutions.
Keywords:  
Author(s) Name:  Runhua Xu, Nathalie Baracaldo, Yi Zhou, Ali Anwar, Heiko Ludwig
Journal name:  
Conference name:  AISec'19: Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security
Publisher name:  ACM
DOI:  10.1145/3338501.3357371
Volume Information:  
Paper Link:  https://dl.acm.org/doi/abs/10.1145/3338501.3357371