Research Area:  Machine Learning
Industrial wireless networks are moving towards distributed architectures that go beyond traditional server-client transactions. Paired with this trend, new synergies are emerging from the co-design of sensing, communications, and Machine Learning (ML), where resources must be distributed across wireless field devices acting as both data producers and learners. In this landscape, Federated Learning (FL) solutions are well suited to training an ML model in distributed systems. In particular, decentralized FL policies target scenarios where learning operations must be carried out collaboratively, without relying on a server, by exchanging model parameter updates rather than training data over capacity-constrained radio links. This paper proposes a real-time framework for the analysis of decentralized FL systems running on top of industrial wireless networks rooted in the popular Time Slotted Channel Hopping (TSCH) radio interface of the IEEE 802.15.4e standard. The proposed framework targets neural networks trained via distributed Stochastic Gradient Descent (SGD), and it quantifies the effects of model pruning, sparsification, and quantization, as well as of physical- and link-layer constraints, on FL convergence time and learning loss. The goal is to lay the foundations for comprehensive methods and procedures supporting the pre-deployment design of decentralized FL. The proposed tool can thus be used to optimize the deployment of the wireless network and the ML model before their actual installation. It has been verified on real data targeting smart robotic-assisted manufacturing.
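The abstract describes decentralized FL in which devices exchange (possibly quantized) model parameter updates with neighbors instead of routing training data through a server. A minimal sketch of one such consensus-based distributed SGD round is shown below; the topology, quantizer, and function names are illustrative assumptions, not the paper's actual algorithm or compression scheme.

```python
import numpy as np

def quantize(w, bits=8):
    # Uniform quantization of model weights to 2^bits levels
    # (illustrative; the paper's exact compression scheme may differ).
    lo, hi = w.min(), w.max()
    if hi == lo:
        return w.copy()
    levels = 2 ** bits - 1
    q = np.round((w - lo) / (hi - lo) * levels)
    return lo + q * (hi - lo) / levels

def decentralized_fl_step(weights, grads, adjacency, lr=0.1, bits=8):
    # One consensus-based decentralized SGD round: each device mixes the
    # quantized parameters received from its neighbors (self-loop included),
    # then takes a local gradient step on its own data.
    n = len(weights)
    new_weights = []
    for i in range(n):
        neighbors = [j for j in range(n) if adjacency[i][j]]
        mixed = np.mean([quantize(weights[j], bits) for j in neighbors], axis=0)
        new_weights.append(mixed - lr * grads[i])
    return new_weights

# Toy example: 3 devices on a line topology, 4-parameter models.
rng = np.random.default_rng(0)
adjacency = [[1, 1, 0], [1, 1, 1], [0, 1, 1]]  # row i lists neighbors of device i
weights = [rng.standard_normal(4) for _ in range(3)]
grads = [np.zeros(4) for _ in range(3)]  # zero gradients: pure consensus mixing
for _ in range(50):
    weights = decentralized_fl_step(weights, grads, adjacency)
spread = max(np.abs(weights[i] - weights[j]).max()
             for i in range(3) for j in range(3))
print(spread)  # small: device models converge toward consensus
```

In a deployment study such as the one the paper proposes, each mixing round would be mapped onto TSCH time slots, so quantization bit-width directly trades per-round airtime against the consensus error floor visible in `spread`.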
Author(s) Name:  Stefano Savazzi; Sanaz Kianoush; Vittorio Rampa; Mehdi Bennis
Conference name:  IEEE 25th International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (CAMAD)
Publisher name:  IEEE
Paper Link:   https://ieeexplore.ieee.org/abstract/document/9209305