Private Adaptive Gradient Methods for Convex Optimization

06/25/2021
by Hilal Asi, et al.

We study adaptive methods for differentially private convex optimization, proposing and analyzing differentially private variants of a Stochastic Gradient Descent (SGD) algorithm with adaptive stepsizes, as well as the AdaGrad algorithm. We provide upper bounds on the regret of both algorithms and show that the bounds are (worst-case) optimal. As a consequence of our development, we show that our private versions of AdaGrad outperform adaptive SGD, which in turn outperforms traditional SGD in scenarios with non-isotropic gradients where (non-private) AdaGrad provably outperforms SGD. The major challenge is that the isotropic noise typically added for privacy dominates the signal in gradient geometry for high-dimensional problems; prior approaches that effectively optimize over lower-dimensional subspaces simply sidestep the real problems that varying gradient geometries introduce. In contrast, we study non-isotropic clipping and noise addition, developing a principled theoretical approach; the resulting procedures also enjoy significantly stronger empirical performance than prior approaches.
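To make the non-isotropic clipping and noise idea concrete, the following is a minimal NumPy sketch of one private AdaGrad-style update with per-coordinate clipping and per-coordinate Gaussian noise. It is an illustration of the general technique rather than the authors' exact algorithm: the clipping bounds b, noise multiplier sigma, and stepsize eta are hypothetical placeholders, and the privacy calibration shown is only schematic.

```python
# Sketch: one private AdaGrad-style step with non-isotropic (per-coordinate)
# clipping and noise. NOT the paper's exact algorithm; b, sigma, and eta are
# hypothetical placeholders and the noise calibration is only schematic.
import numpy as np

def private_adagrad_step(w, grads, H, b, sigma, eta, eps=1e-8, rng=None):
    """One update from a batch of per-example gradients `grads` (shape n x d).

    w     : current iterate, shape (d,)
    H     : running sum of squared privatized gradients, shape (d,)
    b     : per-coordinate clipping bounds, shape (d,) -- a non-isotropic clip region
    sigma : noise multiplier for the Gaussian mechanism
    eta   : base stepsize
    """
    rng = np.random.default_rng() if rng is None else rng
    n = grads.shape[0]

    # Clip each example's gradient coordinate-wise to the box [-b, b].
    clipped = np.clip(grads, -b, b)

    # Average, then add Gaussian noise whose scale varies per coordinate
    # (proportional to that coordinate's clipping bound), i.e. non-isotropic noise.
    avg = clipped.mean(axis=0)
    noisy = avg + rng.normal(scale=sigma * 2.0 * b / n, size=b.shape)

    # AdaGrad-style diagonal preconditioning using the privatized gradient.
    H = H + noisy ** 2
    w = w - eta * noisy / (np.sqrt(H) + eps)
    return w, H
```

The design point the sketch reflects: because both the clip region and the noise covariance are shaped per coordinate rather than spherically, coordinates with consistently small gradients receive proportionally less noise, which is what lets the adaptive (AdaGrad-style) preconditioner retain its advantage under privacy.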


Related research

08/20/2019 · AdaCliP: Adaptive Clipping for Private SGD
Privacy preserving machine learning algorithms are crucial for learning ...

06/12/2020 · Stability of Stochastic Gradient Descent on Nonsmooth Convex Losses
Uniform stability is a notion of algorithmic stability that bounds the w...

10/14/2021 · Adaptive Differentially Private Empirical Risk Minimization
We propose an adaptive (stochastic) gradient perturbation method for dif...

06/15/2016 · Bolt-on Differential Privacy for Scalable Stochastic Gradient Descent-based Analytics
While significant progress has been made separately on analytics systems...

10/12/2022 · Differentially Private Online-to-Batch for Smooth Losses
We develop a new reduction that converts any online convex optimization ...

03/14/2018 · Model-Agnostic Private Learning via Stability
We design differentially private learning algorithms that are agnostic t...

02/25/2021 · Machine Unlearning via Algorithmic Stability
We study the problem of machine unlearning and identify a notion of algo...
