Research Area:  Machine Learning
Machine learning (ML) tasks are becoming ubiquitous in today's network applications. Federated learning has emerged recently as a technique for training ML models at the network edge by leveraging processing capabilities across the nodes that collect the data. There are several challenges with employing conventional federated learning in contemporary networks, due to the significant heterogeneity in compute and communication capabilities that exists across devices. To address this, we advocate a new learning paradigm called fog learning, which intelligently distributes ML model training across the continuum of nodes from edge devices to cloud servers. Fog learning enhances federated learning along three major dimensions: network, heterogeneity, and proximity. It considers a multi-layer hybrid learning framework consisting of heterogeneous devices with various proximities. It accounts for the topology structures of the local networks among the heterogeneous nodes at each network layer, orchestrating them for collaborative/cooperative learning through device-to-device communications. This migrates from the star network topologies used for parameter transfers in federated learning to more distributed topologies at scale. We discuss several open research directions toward realizing fog learning.
Author(s) Name:  Seyyedali Hosseinalipour; Christopher G. Brinton; Vaneet Aggarwal; Huaiyu Dai; Mung Chiang
Journal name:  IEEE Communications Magazine
Publisher name:  IEEE
Volume Information:  Volume: 58, Issue: 12, Page(s): 41 - 47
Paper Link:  https://ieeexplore.ieee.org/document/9311906