Privacy Amplification by Decentralization

12/09/2020
by Edwige Cyffers, et al.

Analyzing data owned by several parties while achieving a good trade-off between utility and privacy is a key challenge in federated learning and analytics. In this work, we introduce a novel relaxation of local differential privacy (LDP) that naturally arises in fully decentralized protocols, i.e., protocols in which participants exchange information by communicating along the edges of a network graph. This relaxation, which we call network DP, captures the fact that each user has only a local view of the decentralized system. To show the relevance of network DP, we study a decentralized model of computation in which a token performs a walk on the network graph and is updated sequentially by the party who receives it. For tasks such as real summation, histogram computation, and gradient descent, we propose simple algorithms and prove privacy amplification results on ring and complete topologies. The resulting privacy-utility trade-off significantly improves upon LDP, and in some cases even matches what can be achieved with approaches based on secure aggregation and secure shuffling. Our experiments confirm the practical significance of the gains compared to LDP.
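The token-walk model described in the abstract can be illustrated with a minimal sketch for the real-summation task: a token walks over a complete graph, and each party adds its locally noised value the first time the token visits it. This is an illustrative toy, not the authors' exact algorithm; the function name, parameters, and noise calibration are assumptions for demonstration only.

```python
import random

def token_walk_sum(values, sigma, steps, seed=0):
    """Toy sketch: estimate the sum of private values via a token
    walking uniformly at random over a complete graph.

    Each party adds its value plus Gaussian noise (scale `sigma`) to
    the token on its first visit, so no party ever sees another's raw
    value -- only the running noisy aggregate carried by the token.
    """
    rng = random.Random(seed)
    n = len(values)
    token = 0.0
    contributed = set()       # parties that have already added their value
    current = rng.randrange(n)
    for _ in range(steps):
        if current not in contributed:
            # Local Gaussian noise protects this party's contribution.
            token += values[current] + rng.gauss(0.0, sigma)
            contributed.add(current)
        # Uniform next hop corresponds to the complete-graph topology.
        current = rng.randrange(n)
    return token, len(contributed)
```

With `sigma = 0` and enough steps to visit every party, the token recovers the exact sum; with `sigma > 0`, each party's update is differentially private on its own, and the intuition behind network DP is that intermediate parties only observe the aggregated token, not individual contributions.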


