Three Variants of Differential Privacy: Lossless Conversion and Applications

08/14/2020
by Shahab Asoodeh, et al.

We consider three variants of differential privacy (DP): approximate DP, Rényi DP (RDP), and hypothesis test DP. In the first part, we develop machinery for optimally relating approximate DP to RDP based on the joint range of the two f-divergences that underlie approximate DP and RDP. In particular, this enables us to derive the optimal approximate DP parameters of a mechanism that satisfies a given level of RDP. As an application, we apply our result to the moments accountant framework for characterizing the privacy guarantees of noisy stochastic gradient descent (SGD). Compared to the state of the art, our bounds may permit about 100 additional SGD iterations when training deep learning models under the same privacy budget. In the second part, we establish a relationship between RDP and hypothesis test DP that allows us to translate an RDP constraint into a tradeoff between the type I and type II error probabilities of a certain binary hypothesis test. We then demonstrate that, for noisy SGD, our result yields tighter privacy guarantees than the recently proposed f-DP framework over some range of parameters.
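Since the full derivations appear only in the paper, a compact sketch may help fix ideas. The snippet below is illustrative rather than the paper's optimal conversion: it uses the standard RDP-to-approximate-DP bound of Mironov (2017), which the first part of the paper tightens; composes it over noisy-SGD iterations in the moments-accountant style (privacy amplification by subsampling is omitted for brevity); and evaluates the Gaussian tradeoff function from the f-DP framework that the second part is compared against. All function names and parameter values are assumptions made for illustration.

```python
# A minimal sketch, not the paper's optimal machinery. It shows
# (i) the standard RDP-to-(eps, delta)-DP conversion of Mironov (2017),
# composed over noisy-SGD iterations moments-accountant style
# (subsampling amplification omitted), and (ii) the Gaussian tradeoff
# function of the f-DP framework (Dong, Roth, Su). The noise multiplier
# sigma and all parameter values are illustrative assumptions.

import math
from statistics import NormalDist

STD_NORMAL = NormalDist()


def rdp_gaussian(alpha: float, sigma: float) -> float:
    """RDP curve of the Gaussian mechanism with sensitivity 1:
    rho(alpha) = alpha / (2 * sigma**2)."""
    return alpha / (2.0 * sigma ** 2)


def rdp_to_approx_dp(rho: float, alpha: float, delta: float) -> float:
    """Standard (non-optimal) conversion: (alpha, rho)-RDP implies
    (rho + log(1/delta) / (alpha - 1), delta)-DP."""
    return rho + math.log(1.0 / delta) / (alpha - 1.0)


def eps_after_sgd(sigma: float, steps: int, delta: float) -> float:
    """Moments-accountant-style bound for noisy SGD: RDP composes
    additively over iterations; optimize the conversion over alpha."""
    alphas = [1.0 + 0.1 * k for k in range(1, 500)]  # grid over alpha > 1
    return min(
        rdp_to_approx_dp(steps * rdp_gaussian(a, sigma), a, delta)
        for a in alphas
    )


def gdp_tradeoff(type_i: float, mu: float) -> float:
    """f-DP tradeoff for mu-GDP: the smallest achievable type II error
    at type I error `type_i` is Phi(Phi^{-1}(1 - type_i) - mu)."""
    return STD_NORMAL.cdf(STD_NORMAL.inv_cdf(1.0 - type_i) - mu)


if __name__ == "__main__":
    print("eps:", eps_after_sgd(sigma=8.0, steps=100, delta=1e-5))
    print("type II error at type I = 0.05:", gdp_tradeoff(0.05, mu=1.0))
```

The paper's contribution is precisely to replace the generic `rdp_to_approx_dp` step above with the optimal conversion obtained from the joint range of the two underlying f-divergences, which is what accounts for the extra SGD iterations at the same privacy budget.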

Related research

01/16/2020 · A Better Bound Gives a Hundred Rounds: Enhanced Privacy Guarantees via f-Divergences
We derive the optimal differential privacy (DP) parameters of a mechanis...

01/28/2022 · Differential Privacy Guarantees for Stochastic Gradient Langevin Dynamics
We analyse the privacy leakage of noisy stochastic gradient descent by m...

10/07/2021 · Complex-valued deep learning with differential privacy
We present ζ-DP, an extension of differential privacy (DP) to complex-va...

07/22/2020 · Improving Deep Learning with Differential Privacy using Gradient Encoding and Denoising
Deep learning models leak significant amounts of information about their...

04/07/2022 · What You See is What You Get: Distributional Generalization for Algorithm Design in Deep Learning
We investigate and leverage a connection between Differential Privacy (D...

08/09/2021 · Canonical Noise Distributions and Private Hypothesis Tests
f-DP has recently been proposed as a generalization of classical definit...

05/17/2023 · Privacy Loss of Noisy Stochastic Gradient Descent Might Converge Even for Non-Convex Losses
The Noisy-SGD algorithm is widely used for privately training machine le...
