Federated learning is a relatively new research area, closely related to transfer learning, in which model training is distributed across edge devices or servers. Traditional machine learning and deep learning pipelines face data privacy constraints when collecting and labeling data from highly protected industries such as finance and healthcare. Federated learning overcomes these constraints by allowing multiple organizations to jointly train a shared model while each participant's data remains stored locally. Its main significance is preventing the leakage of private information; it also addresses data security, data access rights, and access to heterogeneous data.
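The idea of training a shared model while data stays local can be made concrete with a minimal federated-averaging (FedAvg-style) sketch. The toy linear model, client datasets, and function names below are hypothetical illustrations, not a production protocol:

```python
def local_update(weights, data, lr=0.1):
    """One client's SGD pass over its private data; the data never leaves the client."""
    w = list(weights)
    for x, y in data:
        pred = sum(wi * xi for wi, xi in zip(w, x))
        err = pred - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

def fed_avg(client_updates, client_sizes):
    """Server aggregates only model weights, weighted by local dataset size."""
    total = sum(client_sizes)
    return [
        sum((n / total) * w[i] for w, n in zip(client_updates, client_sizes))
        for i in range(len(client_updates[0]))
    ]

# Two clients whose private data follow y = 2x + 1 (x = (feature, bias))
client_a = [((1.0, 1.0), 3.0), ((2.0, 1.0), 5.0)]
client_b = [((3.0, 1.0), 7.0)]

weights = [0.0, 0.0]
for _ in range(500):  # communication rounds: local training, then aggregation
    updates = [local_update(weights, d) for d in (client_a, client_b)]
    weights = fed_avg(updates, [len(client_a), len(client_b)])
```

Only model parameters cross the network in each round; the raw records stay on their respective clients, which is what distinguishes this setup from centralized training.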
Federated learning is commonly divided into horizontal federated learning, where parties share the same feature space but hold different samples, and vertical federated learning, where parties share the same samples but hold different features. Federated Transfer Learning (FTL) sits at the intersection of transfer learning and privacy-preserving federated learning: it targets the setting where the parties' datasets overlap little in both samples and features, and leverages knowledge learned in one domain to improve models in another. FTL provides strict data protection and can be implemented with deep learning architectures.
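The horizontal/vertical distinction is a statement about how data is partitioned across parties, which a toy illustration makes concrete. The records, party names, and field names below are invented for illustration only:

```python
# Horizontal FL: parties share the SAME feature schema but hold DIFFERENT samples
# (e.g. two regional hospitals with identical patient-record formats).
hospital_a = {"patient_1": {"age": 34, "bp": 120}}
hospital_b = {"patient_9": {"age": 51, "bp": 135}}  # same fields, new patients

# Vertical FL: parties share the SAME samples but hold DIFFERENT features
# (e.g. a bank and a retailer with overlapping customers).
bank     = {"user_7": {"credit_score": 700}}
retailer = {"user_7": {"monthly_spend": 320}}  # same user, different fields

# Horizontal setting corresponds to a union of rows over a common schema.
horizontal_rows = {**hospital_a, **hospital_b}

# Vertical setting corresponds to a join of feature columns over shared sample IDs.
shared_ids = bank.keys() & retailer.keys()
vertical_rows = {uid: {**bank[uid], **retailer[uid]} for uid in shared_ids}
```

In real deployments these unions and joins are never materialized in one place; they are computed implicitly through privacy-preserving protocols, but the partitioning geometry is the same.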
Real-world application areas of FTL include wearable healthcare, EEG signal classification, autonomous driving, and image steganalysis. Future directions for FTL include more advanced machine learning models and richer deep learning datasets to support practical applications.