Privacy Loss of Noisy Stochastic Gradient Descent Might Converge Even for Non-Convex Losses

05/17/2023
by Shahab Asoodeh, et al.

The Noisy-SGD algorithm is widely used for privately training machine learning models. Traditional privacy analyses of this algorithm assume that the internal state is publicly revealed, resulting in privacy loss bounds that increase indefinitely with the number of iterations. However, recent findings have shown that if the internal state remains hidden, then the privacy loss might remain bounded. Nevertheless, this remarkable result heavily relies on the assumption of (strong) convexity of the loss function. It remains an important open problem to further relax this condition while proving similar convergent upper bounds on the privacy loss. In this work, we address this problem for DP-SGD, a popular variant of Noisy-SGD that incorporates gradient clipping to limit the impact of individual samples on the training process. Our findings demonstrate that the privacy loss of projected DP-SGD converges exponentially fast, without requiring convexity or smoothness assumptions on the loss function. In addition, we analyze the privacy loss of regularized (unprojected) DP-SGD. To obtain these results, we directly analyze the hockey-stick divergence between coupled stochastic processes by relying on non-linear data processing inequalities.
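To make the object of study concrete, here is a minimal sketch of one projected DP-SGD step as described above: per-sample gradients are clipped in l2 norm, averaged, perturbed with Gaussian noise, and the iterate is projected back onto a bounded domain. All names, the learning rate, and the noise calibration below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def dp_sgd_step(w, per_sample_grads, lr=0.1, clip_norm=1.0,
                noise_std=1.0, radius=None, rng=None):
    """One projected DP-SGD step (illustrative sketch).

    Clips each per-sample gradient to l2 norm <= clip_norm, averages,
    adds Gaussian noise scaled to the clipping bound, takes a gradient
    step, and optionally projects onto the l2 ball of given radius.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # Clipping limits any single sample's influence on the update.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Noise standard deviation is proportional to the clipping bound.
    noise = rng.normal(scale=noise_std * clip_norm / len(per_sample_grads),
                       size=w.shape)
    w_new = w - lr * (avg + noise)
    if radius is not None:
        # Projection onto the l2 ball keeps iterates in a bounded domain,
        # the setting in which the convergent privacy bound is proved.
        nrm = np.linalg.norm(w_new)
        if nrm > radius:
            w_new = w_new * (radius / nrm)
    return w_new
```

The projection step is what distinguishes the projected variant analyzed in the paper from the regularized (unprojected) variant, where a regularizer plays a similar role in controlling the iterates.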

Related research

01/28/2022: Differential Privacy Guarantees for Stochastic Gradient Langevin Dynamics
We analyse the privacy leakage of noisy stochastic gradient descent by m...

01/22/2021: Differentially Private SGD with Non-Smooth Loss
In this paper, we are concerned with differentially private SGD algorith...

03/10/2022: Differentially Private Learning Needs Hidden State (Or Much Faster Convergence)
Differential privacy analysis of randomized learning algorithms typicall...

08/14/2020: Three Variants of Differential Privacy: Lossless Conversion and Applications
We consider three different variants of differential privacy (DP), namel...

05/27/2022: Privacy of Noisy Stochastic Gradient Descent: More Iterations without More Privacy Loss
A central issue in machine learning is how to train models on sensitive ...

12/05/2019: On the Intrinsic Privacy of Stochastic Gradient Descent
Private learning algorithms have been proposed that ensure strong differ...

06/20/2021: Privacy Amplification via Iteration for Shuffled and Online PNSGD
In this paper, we consider the framework of privacy amplification via it...
