Research Area:  Machine Learning
Federated learning enables many applications that benefit from the distributed and private datasets of a large number of potential data-holding clients. However, different clients usually have their own particular objectives in terms of the tasks to be learned from the data. Therefore, supporting federated learning with meta-learning tools such as multi-task learning and transfer learning enlarges the set of potential applications of federated learning by letting clients with different but related tasks share task-agnostic models that each individual client can then further update and tailor to its particular task. In a federated multi-task learning problem, the trained deep neural network model should be fine-tuned for the respective objective of each client while sharing some parameters for greater generalizability. We propose to train a deep neural network model in which layers closer to the input are more generalized and layers closer to the output are more personalized. We achieve this by introducing layer types such as pre-trained, common, task-specific, and personal layers. We provide simulation results that highlight particular scenarios in which meta-learning-based federated learning proves useful.
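To make the layer-type idea in the abstract concrete, the sketch below illustrates one plausible way to realize pre-trained, common, task-specific, and personal layers in PyTorch, with federated averaging applied only to the shared parameter groups. This is a minimal illustration under our own assumptions (layer names, model sizes, and the averaging scheme are hypothetical), not the authors' implementation from the paper.

```python
# Minimal sketch (assumed, not the authors' code) of generalized layers near the
# input and personalized layers near the output, with selective federated averaging.
import copy
import torch
import torch.nn as nn


class ClientModel(nn.Module):
    def __init__(self, in_dim=32, hidden=64, out_dim=10):
        super().__init__()
        # "pre-trained": initialized from a pre-trained model and kept frozen
        self.pretrained = nn.Linear(in_dim, hidden)
        for p in self.pretrained.parameters():
            p.requires_grad = False
        # "common": federated-averaged across all clients
        self.common = nn.Linear(hidden, hidden)
        # "task-specific": averaged only among clients sharing the same task
        self.task_specific = nn.Linear(hidden, hidden)
        # "personal": trained locally, never shared
        self.personal = nn.Linear(hidden, out_dim)

    def forward(self, x):
        x = torch.relu(self.pretrained(x))
        x = torch.relu(self.common(x))
        x = torch.relu(self.task_specific(x))
        return self.personal(x)


def average_layers(models, prefixes):
    """FedAvg restricted to parameters whose names start with one of `prefixes`;
    the averaged values are written back into every participating model."""
    avg = copy.deepcopy(models[0].state_dict())
    for name in avg:
        if any(name.startswith(p) for p in prefixes):
            avg[name] = torch.stack(
                [m.state_dict()[name] for m in models]).mean(dim=0)
    for m in models:
        merged = {**m.state_dict(),
                  **{k: v for k, v in avg.items()
                     if any(k.startswith(p) for p in prefixes)}}
        m.load_state_dict(merged)


if __name__ == "__main__":
    # Hypothetical setup: two tasks, two clients per task.
    clients = {"task_a": [ClientModel(), ClientModel()],
               "task_b": [ClientModel(), ClientModel()]}
    all_models = [m for ms in clients.values() for m in ms]

    # ... local training on each client's private data would happen here ...

    # Aggregation round: common layers averaged over all clients,
    # task-specific layers averaged only within each task group,
    # personal layers left untouched.
    average_layers(all_models, prefixes=("common",))
    for task_models in clients.values():
        average_layers(task_models, prefixes=("task_specific",))
```

In this sketch, the choice of which prefixes to average encodes the generalization/personalization trade-off: averaging over everyone yields task-agnostic layers, averaging within a task group yields task-specific layers, and skipping averaging keeps layers personal to each client.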
Keywords:  Multi-Task, Federated Learning Applications, Transfer Learning, Federated Learning, Meta-Learning Tools
Author(s) Name:  Cihat Keçeci, Mohammad Shaqfeh, Hayat Mbayed, Erchin Serpedin
Journal name:  Machine Learning
Conference name:  
Publisher name:  arXiv
DOI:  10.48550/arXiv.2207.08147
Volume Information:  
Paper Link:   https://arxiv.org/abs/2207.08147