Per-Instance Privacy Accounting for Differentially Private Stochastic Gradient Descent

06/06/2022
by Da Yu, et al.

Differentially private stochastic gradient descent (DP-SGD) is the workhorse algorithm for recent advances in private deep learning. It provides a single privacy guarantee to all datapoints in the dataset. We propose an efficient algorithm to compute per-instance privacy guarantees for individual examples when running DP-SGD. We use our algorithm to investigate per-instance privacy losses across a number of datasets. We find that most examples enjoy stronger privacy guarantees than the worst-case bounds. We further discover that the loss and the privacy loss on an example are well-correlated. This implies that groups that are underserved in terms of model utility are simultaneously underserved in terms of privacy loss. For example, on CIFAR-10, the average ϵ of the class with the highest loss (Cat) is 32% higher than that of the class with the lowest loss (Ship). We also run membership inference attacks to show that this reflects disparate empirical privacy risks.
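To make the idea concrete, here is a minimal sketch (not the authors' implementation) of how per-instance accounting can differ from worst-case accounting in DP-SGD. It assumes plain Gaussian-mechanism Rényi DP at a single order, with no privacy amplification by subsampling, and all function names (`per_instance_rdp_step`, `rdp_to_dp`) are illustrative. The point it demonstrates is that an example whose clipped gradient norm sits below the clipping threshold incurs proportionally less privacy loss at each step.

```python
import numpy as np

def per_instance_rdp_step(grad_norms, clip_norm, noise_multiplier, alpha):
    # Per-example Renyi DP cost of one DP-SGD step, ignoring subsampling
    # amplification. An example with clipped gradient norm c_i has
    # sensitivity c_i, so the Gaussian mechanism with noise std
    # sigma = noise_multiplier * clip_norm gives
    # RDP(alpha) = alpha * c_i^2 / (2 * sigma^2).
    clipped = np.minimum(grad_norms, clip_norm)
    sigma = noise_multiplier * clip_norm
    return alpha * clipped ** 2 / (2.0 * sigma ** 2)

def rdp_to_dp(rdp_total, alpha, delta):
    # Standard conversion from Renyi DP at order alpha to (epsilon, delta)-DP.
    return rdp_total + np.log(1.0 / delta) / (alpha - 1.0)

# Toy run: five examples tracked over many steps, per-example epsilon at the end.
rng = np.random.default_rng(0)
alpha, delta, steps = 8.0, 1e-5, 1000
rdp = np.zeros(5)
for _ in range(steps):
    grad_norms = rng.uniform(0.1, 2.0, size=5)  # stand-in for real per-example norms
    rdp += per_instance_rdp_step(grad_norms, clip_norm=1.0,
                                 noise_multiplier=1.1, alpha=alpha)
print("per-example epsilon:", np.round(rdp_to_dp(rdp, alpha, delta), 2))
```

A full accountant would track several Rényi orders and take the tightest conversion, and would incorporate amplification by subsampling; the sketch keeps only the per-example sensitivity idea that drives the paper's per-instance guarantees.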
