Bayesian Estimation of Differential Privacy

Algorithms such as Differentially Private SGD enable training machine learning models with formal privacy guarantees. However, there is a discrepancy between the protection that such algorithms guarantee in theory and the protection they afford in practice. An emerging strand of work empirically estimates the protection afforded by differentially private training as a confidence interval for the privacy budget ε spent on training a model. Existing approaches derive confidence intervals for ε from confidence intervals for the false positive and false negative rates of membership inference attacks. Unfortunately, obtaining narrow high-confidence intervals for ε using this method requires an impractically large sample size and training as many models as samples. We propose a novel Bayesian method that greatly reduces sample size, and adapt and validate a heuristic to draw more than one sample per trained model. Our Bayesian method exploits the hypothesis testing interpretation of differential privacy to obtain a posterior for ε (not just a confidence interval) from the joint posterior of the false positive and false negative rates of membership inference attacks. For the same sample size and confidence, we derive confidence intervals for ε around 40% narrower than existing methods. The heuristic, which we adapt from label-only DP, can be used to further reduce the number of trained models needed to get enough samples by up to 2 orders of magnitude.
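The core idea above can be sketched in a few lines. The hypothesis-testing interpretation of (ε, 0)-differential privacy implies that any membership inference attack with false positive rate FPR and false negative rate FNR must satisfy FPR + e^ε·FNR ≥ 1 and e^ε·FPR + FNR ≥ 1, so each (FPR, FNR) pair yields a lower bound on ε. The sketch below is a simplified Monte Carlo illustration of this approach, not the paper's exact method: the attack counts, the choice of independent Jeffreys Beta posteriors, and the sample sizes are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical attack results: out of n member and n non-member trials,
# the membership inference attack made fp false positives and fn false
# negatives. These counts are made up for illustration.
n, fp, fn = 500, 60, 70

# Independent Jeffreys Beta(1/2, 1/2) posteriors for FPR and FNR; their
# product serves as a simple stand-in for the joint posterior.
samples = 100_000
fpr = rng.beta(0.5 + fp, 0.5 + n - fp, samples)
fnr = rng.beta(0.5 + fn, 0.5 + n - fn, samples)

# (eps, 0)-DP implies FPR + e^eps * FNR >= 1 and e^eps * FPR + FNR >= 1,
# so each posterior draw of (FPR, FNR) gives a lower bound on eps.
eps = np.maximum(np.log((1 - fpr) / fnr), np.log((1 - fnr) / fpr))
eps = np.maximum(eps, 0.0)  # eps is non-negative by definition

# Summarize the induced posterior over the eps lower bound.
lo, hi = np.quantile(eps, [0.025, 0.975])
print(f"95% credible interval for the eps lower bound: [{lo:.3f}, {hi:.3f}]")
```

Because the interval comes from a posterior over ε itself rather than from separate worst-case confidence intervals for FPR and FNR, it avoids the union-bound slack that widens frequentist intervals at the same sample size.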


