One-shot Empirical Privacy Estimation for Federated Learning

02/06/2023
by Galen Andrew, et al.

Privacy auditing techniques for differentially private (DP) algorithms are useful for estimating the privacy loss to compare against analytical bounds, or for empirically measuring privacy in settings where known analytical bounds on the DP loss are not tight. However, existing privacy auditing techniques usually make strong assumptions about the adversary (e.g., knowledge of intermediate model iterates or of the training data distribution), are tailored to specific tasks and model architectures, and require retraining the model many times (typically on the order of thousands of runs). These shortcomings make such techniques difficult to deploy at scale in practice, especially in federated settings where model training can take days or weeks. In this work, we present a novel "one-shot" approach that systematically addresses these challenges, allowing efficient auditing or estimation of a model's privacy loss during the same, single training run used to fit the model parameters. Our privacy auditing method for federated learning requires no a priori knowledge of the model architecture or task. We show that our method provides provably correct estimates of the privacy loss under the Gaussian mechanism, and we demonstrate its performance on a well-established FL benchmark dataset under several adversarial models.
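To make the one-shot idea concrete, here is a hedged sketch of estimating privacy loss from a single run of the Gaussian mechanism. It is a simplified stand-in, not the authors' exact procedure: random unit-norm "canary" updates are mixed into one noisy aggregate, the aggregate's dot products with inserted vs. fresh (never-inserted) canaries give a signal mean and a null noise scale, and those are plugged into the classical Gaussian-mechanism conversion sqrt(2 ln(1.25/δ))·Δ/σ. All dimensions, counts, and the estimator itself are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000      # model dimension (illustrative)
n_real = 100    # genuine clients (stand-ins for local training results)
k = 64          # number of canary clients (illustrative)
sigma = 1.0     # noise multiplier of the Gaussian mechanism
delta = 1e-5    # target delta for the epsilon estimate

def unit_rows(n):
    """Random unit vectors; nearly orthogonal to each other in high dimension."""
    v = rng.standard_normal((n, d))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

real_updates = unit_rows(n_real)  # genuine (clipped) client updates
canaries = unit_rows(k)           # canaries inserted into the aggregate
fresh = unit_rows(k)              # held-out canaries, used as the null sample

# One round of the Gaussian mechanism: sum of unit-norm updates plus noise.
aggregate = real_updates.sum(axis=0) + canaries.sum(axis=0) \
    + sigma * rng.standard_normal(d)

# Test statistic: dot product of the released aggregate with each canary.
t_in = canaries @ aggregate   # inserted canaries: concentrates near 1
t_out = fresh @ aggregate     # fresh canaries: concentrates near 0

mu_hat = t_in.mean()          # estimated per-canary sensitivity
sigma_hat = t_out.std(ddof=1) # estimated effective noise scale

# Classical Gaussian-mechanism bound, used here loosely as a point estimate.
eps_hat = np.sqrt(2 * np.log(1.25 / delta)) * mu_hat / sigma_hat
print(f"noise std ~ {sigma_hat:.2f}, eps estimate at delta={delta}: {eps_hat:.2f}")
```

Note that everything here comes from the single released aggregate: no retraining, no access to intermediate iterates, and no dependence on the model architecture, which is the practical appeal of the one-shot setting.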


