SPATL: Salient Parameter Aggregation and Transfer Learning for Heterogeneous Clients in Federated Learning

11/29/2021
by   Sixing Yu, et al.

Efficient federated learning is one of the key challenges for training and deploying AI models on edge devices. However, federated learning faces several challenges, including data heterogeneity, high communication cost, and limited resources on edge devices. In this paper, we address these issues by (a) introducing a salient parameter selection agent, based on deep reinforcement learning, on each local client and aggregating only the selected salient parameters on the central server, and (b) splitting a standard deep learning model (e.g., a CNN) into a shared encoder and a local predictor, training the shared encoder through federated learning while transferring its knowledge to non-IID clients through the locally customized predictor. Technique (a) significantly reduces the communication overhead of federated learning and accelerates model inference, while technique (b) addresses the data-heterogeneity issue. Additionally, we leverage a gradient control mechanism to correct gradient heterogeneity among clients, which makes training more stable and convergence faster. Our experiments show that the approach yields a stable training process and achieves notable results compared with state-of-the-art methods. It reduces communication cost by up to 108 GB when training VGG-11, requires 7.6× less communication overhead when training ResNet-20, and accelerates local inference by reducing FLOPs by up to 39.7% on VGG-11.
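As a rough illustration of idea (a), the sketch below shows salient-parameter upload and server-side aggregation in NumPy. All names are hypothetical, and the paper's selection agent is learned with deep reinforcement learning; here it is replaced by a simple magnitude-ranking stand-in, so this is a minimal sketch of the communication pattern, not the actual SPATL algorithm.

```python
import numpy as np

def select_salient(params, frac=0.25):
    # Stand-in for SPATL's RL-based selection agent: keep only the
    # largest-magnitude fraction of the (flattened) encoder parameters.
    k = max(1, int(len(params) * frac))
    idx = np.argsort(np.abs(params))[-k:]
    return idx, params[idx]          # client uploads indices + values only

def aggregate(uploads, size):
    # Server side: average each parameter over the clients that actually
    # uploaded it; positions no client selected stay at zero.
    total = np.zeros(size)
    count = np.zeros(size)
    for idx, vals in uploads:
        total[idx] += vals
        count[idx] += 1
    agg = np.zeros(size)
    mask = count > 0
    agg[mask] = total[mask] / count[mask]
    return agg

# Example: three clients each upload 25% of an 8-parameter "encoder",
# cutting per-round upload volume by 75%.
rng = np.random.default_rng(0)
clients = [rng.normal(size=8) for _ in range(3)]
uploads = [select_salient(p, frac=0.25) for p in clients]
global_update = aggregate(uploads, size=8)
```

With idea (b), only the shared encoder would participate in this exchange, while each client's predictor head stays local and is fine-tuned on its own non-IID data.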

Related research

- CosSGD: Nonlinear Quantization for Communication-efficient Federated Learning (12/15/2020)
- Prototype Helps Federated Learning: Towards Faster Convergence (03/22/2023)
- Fairness and Accuracy in Federated Learning (12/18/2020)
- Federated Learning Framework Coping with Hierarchical Heterogeneity in Cooperative ITS (04/01/2022)
- Federated Mutual Learning (06/27/2020)
- Auto-weighted Robust Federated Learning with Corrupted Data Sources (01/14/2021)
- OFedQIT: Communication-Efficient Online Federated Learning via Quantization and Intermittent Transmission (05/13/2022)
