Privacy Analysis of Online Learning Algorithms via Contraction Coefficients

12/20/2020
by   Shahab Asoodeh, et al.

We propose an information-theoretic technique for analyzing the privacy guarantees of online algorithms. Specifically, we demonstrate that the differential privacy guarantees of iterative algorithms can be determined by a direct application of contraction coefficients derived from strong data processing inequalities for f-divergences. Our technique relies on generalizing Dobrushin's contraction coefficient for total variation distance to an f-divergence known as the E_γ-divergence, which, in turn, characterizes approximate differential privacy. As an example, we apply our technique to derive the differential privacy parameters of gradient descent. We also show that this framework can be tailored to batch learning algorithms that can be implemented with one pass over the training dataset.
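For readers unfamiliar with the objects named in the abstract, here is a brief sketch of the standard definitions (notation may differ slightly from the paper's). For γ ≥ 1, the E_γ-divergence (also known as the hockey-stick divergence) between distributions P and Q is

    E_\gamma(P \,\|\, Q) \;=\; \sup_{A} \big[ P(A) - \gamma\, Q(A) \big], \qquad \gamma \ge 1,

where the supremum is over measurable sets; at γ = 1 this recovers the total variation distance of Dobrushin's classical setting. Its connection to privacy is that a randomized mechanism M is (ε, δ)-differentially private exactly when E_{e^ε}(M(D) \| M(D')) ≤ δ for all neighboring datasets D, D'. The contraction coefficient of a Markov kernel K,

    \eta_\gamma(K) \;=\; \sup_{P \neq Q} \frac{E_\gamma(PK \,\|\, QK)}{E_\gamma(P \,\|\, Q)},

yields the strong data processing inequality E_\gamma(PK \| QK) ≤ \eta_\gamma(K)\, E_\gamma(P \| Q), which is the mechanism by which per-iteration privacy guarantees can be tracked across the steps of an iterative algorithm.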
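For concreteness, below is a minimal Python sketch (not the paper's code) of the kind of noisy gradient descent iteration such an analysis applies to: each step is a Markov kernel that combines a clipped gradient update with additive Gaussian noise. The clipping threshold, step size, and noise scale here are illustrative assumptions.

import numpy as np

def noisy_gradient_descent(grad_fn, x0, data, T=100, eta=0.1,
                           sigma=1.0, clip=1.0, rng=None):
    """Run T steps of noisy gradient descent.

    Each iteration clips the gradient to bound its sensitivity, then
    adds Gaussian noise, so every step is a randomized Markov kernel
    whose privacy can be analyzed via contraction coefficients.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(T):
        g = grad_fn(x, data)
        # Clip the gradient norm to `clip` (illustrative sensitivity bound).
        norm = np.linalg.norm(g)
        if norm > clip:
            g = g * (clip / norm)
        # Additive Gaussian noise supplies the randomness for privacy.
        x = x - eta * (g + sigma * rng.standard_normal(x.shape))
    return x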

