
Federated Learning with Sparsified Model Perturbation: Improving Accuracy under Client-Level Differential Privacy

02/15/2022
by Rui Hu, et al.

Federated learning (FL), which enables distributed clients to collaboratively learn a shared statistical model while keeping their training data local, has received great attention recently and can improve privacy and communication efficiency compared with the traditional centralized machine learning paradigm. However, sensitive information about the training data can still be inferred from the model updates shared in FL. Differential privacy (DP) is the state-of-the-art technique for defending against such attacks. The key challenge in achieving DP in FL lies in the adverse impact of DP noise on model accuracy, particularly for deep learning models with large numbers of parameters. This paper develops a novel differentially private FL scheme named Fed-SMP that provides a client-level DP guarantee while maintaining high model accuracy. To mitigate the impact of privacy protection on model accuracy, Fed-SMP leverages a new technique called Sparsified Model Perturbation (SMP), in which local models are first sparsified and then perturbed with additive Gaussian noise. Two sparsification strategies are considered in Fed-SMP: random sparsification and top-k sparsification. We also apply Rényi differential privacy to provide a tight analysis of the end-to-end DP guarantee of Fed-SMP, and we prove the convergence of Fed-SMP for general loss functions. Extensive experiments on real-world datasets demonstrate the effectiveness of Fed-SMP in substantially improving model accuracy under the same DP guarantee while simultaneously reducing communication cost.
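To make the SMP step concrete, the sketch below shows one plausible client-side routine: sparsify a model update (top-k or random), clip its L2 norm, and add Gaussian noise only to the retained coordinates. This is a minimal NumPy sketch under assumed parameter names (keep_ratio, clip_norm, noise_multiplier); it is not the paper's implementation, and details such as the exact noise calibration and how sparsified updates are aggregated across clients are simplified here.

```python
# Minimal sketch of client-side Sparsified Model Perturbation (SMP).
# Assumptions (not from the paper's code): the update is a flat NumPy vector,
# clip_norm bounds its L2 sensitivity, and noise_multiplier scales the Gaussian noise.
import numpy as np

def smp_perturb(update, keep_ratio=0.1, clip_norm=1.0, noise_multiplier=1.0,
                strategy="topk", rng=None):
    """Sparsify a local model update, clip it, and perturb it with Gaussian noise."""
    rng = np.random.default_rng() if rng is None else rng
    update = np.asarray(update, dtype=float)
    d = update.size
    k = max(1, int(keep_ratio * d))

    # Choose which coordinates to keep.
    if strategy == "topk":
        # Top-k sparsification: keep the k largest-magnitude coordinates.
        idx = np.argpartition(np.abs(update), -k)[-k:]
    else:
        # Random sparsification: keep k coordinates chosen uniformly at random.
        idx = rng.choice(d, size=k, replace=False)

    sparse = np.zeros_like(update)
    sparse[idx] = update[idx]

    # Clip the sparsified update so its L2 norm is at most clip_norm.
    norm = np.linalg.norm(sparse)
    sparse *= min(1.0, clip_norm / (norm + 1e-12))

    # Add Gaussian noise only on the retained coordinates, with scale
    # proportional to the clipping bound (Gaussian mechanism for client-level DP).
    sparse[idx] += rng.normal(0.0, noise_multiplier * clip_norm, size=k)
    return sparse
```

Because noise is injected only into the k retained coordinates rather than all d model parameters, the total noise added per round shrinks with the sparsification ratio, which is the intuition behind SMP's accuracy gains under the same DP budget.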

Related research

06/25/2021 - Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy
Providing privacy protection has been one of the primary motivations of ...

04/15/2023 - Communication and Energy Efficient Wireless Federated Learning with Intrinsic Privacy
Federated Learning (FL) is a collaborative learning framework that enabl...

05/02/2023 - Efficient Federated Learning with Enhanced Privacy via Lottery Ticket Pruning in Edge Computing
Federated learning (FL) is a collaborative learning paradigm for decentr...

03/07/2023 - Amplitude-Varying Perturbation for Balancing Privacy and Utility in Federated Learning
While preserving the privacy of federated learning (FL), differential pr...

02/02/2023 - Fed-GLOSS-DP: Federated, Global Learning using Synthetic Sets with Record Level Differential Privacy
This work proposes Fed-GLOSS-DP, a novel approach to privacy-preserving ...

05/01/2023 - Towards the Flatter Landscape and Better Generalization in Federated Learning under Client-level Differential Privacy
To defend the inference attacks and mitigate the sensitive information l...

03/20/2023 - Make Landscape Flatter in Differentially Private Federated Learning
To defend the inference attacks and mitigate the sensitive information l...