Tight Auditing of Differentially Private Machine Learning

02/15/2023
by Milad Nasr, et al.

Auditing mechanisms for differential privacy use probabilistic means to empirically estimate the privacy level of an algorithm. For private machine learning, existing auditing mechanisms are tight: the empirical privacy estimate (nearly) matches the algorithm's provable privacy guarantee. But these auditing techniques suffer from two limitations. First, they only give tight estimates under implausible worst-case assumptions (e.g., a fully adversarial dataset). Second, they require thousands or millions of training runs to produce non-trivial statistical estimates of the privacy leakage. This work addresses both issues. We design an improved auditing scheme that yields tight privacy estimates for natural (not adversarially crafted) datasets – if the adversary can see all model updates during training. Prior auditing works rely on the same assumption, which is permitted under the standard differential privacy threat model. This threat model is also applicable, e.g., in federated learning settings. Moreover, our auditing scheme requires only two training runs (instead of thousands) to produce tight privacy estimates, by adapting recent advances in tight composition theorems for differential privacy. We demonstrate the utility of our improved auditing schemes by surfacing implementation bugs in private machine learning code that eluded prior auditing techniques.
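As a concrete illustration of the statistical machinery behind such audits, the sketch below computes the standard hypothesis-testing-style lower bound on epsilon from a distinguishing attack's observed error rates, tightened with Clopper-Pearson confidence intervals. This is a minimal sketch of the generic estimator used in prior auditing work, not the paper's two-run scheme; the function names, the scipy dependency, and the delta/alpha defaults are illustrative assumptions.

```python
# Minimal sketch (generic auditing estimator, not this paper's method):
# an (eps, delta)-DP mechanism constrains any test that distinguishes two
# adjacent datasets, so its false positive rate (FPR) and false negative
# rate (FNR) must satisfy
#   FPR + e^eps * FNR >= 1 - delta   and   FNR + e^eps * FPR >= 1 - delta.
# Observed attack error rates therefore yield an empirical lower bound on eps.
import math

from scipy.stats import beta  # for Clopper-Pearson binomial bounds


def clopper_pearson_upper(successes: int, trials: int, alpha: float) -> float:
    """One-sided upper confidence bound on a binomial proportion."""
    if successes >= trials:
        return 1.0
    return float(beta.ppf(1 - alpha, successes + 1, trials - successes))


def empirical_epsilon(fp: int, fn: int, trials: int,
                      delta: float = 1e-5, alpha: float = 0.05) -> float:
    """High-confidence lower bound on epsilon from attack error counts.

    fp / fn: number of attack errors over `trials` runs in which the
    target example was absent / present, respectively.
    """
    # Upper-bound both error rates: each enters the bound monotonically,
    # so this keeps the epsilon estimate a valid high-confidence lower bound.
    fpr = clopper_pearson_upper(fp, trials, alpha / 2)
    fnr = clopper_pearson_upper(fn, trials, alpha / 2)
    candidates = []
    for a, b in [(fpr, fnr), (fnr, fpr)]:
        if b > 0 and 1 - delta - a > 0:
            candidates.append(math.log((1 - delta - a) / b))
    return max(candidates, default=0.0)


# Hypothetical example: an attack that errs 10 and 40 times over 1000
# trials of each kind yields an empirical lower bound of roughly eps ~ 4.
print(empirical_epsilon(fp=10, fn=40, trials=1000))
```

Note that this generic estimator is exactly what makes naive auditing expensive: driving the confidence intervals tight enough for a non-trivial bound requires thousands of training runs, which is the cost the paper's two-run scheme avoids.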

