Efficient Federated Learning for AIoT Applications Using Knowledge Distillation

11/29/2021
by Tian Liu, et al.

As a promising distributed machine learning paradigm, Federated Learning (FL) trains a central model on decentralized data without compromising user privacy, which has made it widely adopted in Artificial Intelligence Internet of Things (AIoT) applications. However, traditional FL suffers from model inaccuracy, since it trains local models using only the hard labels of data and ignores the useful information carried by incorrect predictions with small probabilities. Although various solutions attempt to tackle this bottleneck of traditional FL, most of them introduce significant communication and memory overhead, making deployment on large-scale AIoT devices a great challenge. To address this problem, this paper presents a novel Distillation-based Federated Learning (DFL) architecture that enables efficient and accurate FL for AIoT applications. Inspired by Knowledge Distillation (KD), which can increase model accuracy, our approach adds the soft targets used by KD to FL model training, which occupies negligible network resources. The soft targets are generated from the local sample predictions of each AIoT device after each round of local training and are used in the next round of model training. During the local training of DFL, both soft targets and hard labels serve as approximation objectives for model predictions, improving model accuracy by supplementing the knowledge contained in the soft targets. To further improve the performance of our DFL model, we design a dynamic adjustment strategy that tunes the ratio of the two loss functions used in KD, maximizing the use of both soft targets and hard labels. Comprehensive experimental results on well-known benchmarks show that our approach significantly improves the model accuracy of FL with both Independent and Identically Distributed (IID) and non-IID data.
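The abstract describes a local loss that weights hard-label cross-entropy against a soft-target distillation term. The paper's exact formulation and its dynamic ratio-adjustment strategy are not given here, so the following is only a minimal sketch of a standard KD-style combined loss, with a fixed weight `alpha` and temperature `T` standing in for the paper's tuned ratio (both names are illustrative assumptions):

```python
# Hypothetical sketch of a distillation-style local loss: hard-label
# cross-entropy plus a soft-target KL term, weighted by alpha. The paper's
# dynamic adjustment strategy would update alpha between rounds; here it
# is a fixed constant for illustration only.
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(logits, hard_label, soft_targets, alpha=0.5, temperature=2.0):
    """Weighted sum of hard-label cross-entropy and soft-target KL divergence.

    alpha=1 reduces to plain FL local training on hard labels;
    alpha=0 trains only against the soft targets from the previous round.
    """
    probs = softmax(logits)
    ce_hard = -np.log(probs[hard_label] + 1e-12)
    soft_probs = softmax(logits, temperature)
    kl_soft = np.sum(soft_targets * (np.log(soft_targets + 1e-12)
                                     - np.log(soft_probs + 1e-12)))
    # T^2 rescaling keeps the soft-target gradient magnitude comparable
    # to the hard-label term, as is conventional in KD.
    return alpha * ce_hard + (1.0 - alpha) * (temperature ** 2) * kl_soft
```

In this sketch, `soft_targets` would be the temperature-softened class probabilities each device recorded for its own samples in the previous round, so the extra knowledge travels as a small vector per sample rather than as model weights.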

Related research

- 05/24/2022 · FedEntropy: Efficient Device Grouping for Federated Learning Using Maximum Entropy Judgment. Along with the popularity of Artificial Intelligence (AI) and Internet-o...
- 12/05/2022 · HierarchyFL: Heterogeneous Federated Learning via Hierarchical Self-Distillation. Federated learning (FL) has been recognized as a privacy-preserving dist...
- 06/27/2021 · Reward-Based 1-bit Compressed Federated Distillation on Blockchain. The recent advent of various forms of Federated Knowledge Distillation (...
- 07/23/2023 · ProtoFL: Unsupervised Federated Learning via Prototypical Distillation. Federated learning (FL) is a promising approach for enhancing data priva...
- 03/05/2021 · Distributed Dynamic Map Fusion via Federated Learning for Intelligent Networked Vehicles. The technology of dynamic map fusion among networked vehicles has been d...
- 08/25/2022 · Towards Federated Learning against Noisy Labels via Local Self-Regularization. Federated learning (FL) aims to learn joint knowledge from a large scale...
- 11/14/2022 · FedCL: Federated Multi-Phase Curriculum Learning to Synchronously Correlate User Heterogeneity. Federated Learning (FL) is a new decentralized learning used for trainin...
