Faster Rates of Convergence to Stationary Points in Differentially Private Optimization

06/02/2022
by Raman Arora, et al.

We study the problem of approximating stationary points of Lipschitz and smooth functions under (ε,δ)-differential privacy (DP) in both the finite-sum and stochastic settings. A point w is called an α-stationary point of a function F:ℝ^d→ℝ if ‖∇F(w)‖≤α. We provide a new efficient algorithm that finds an Õ([√(d)/nε]^2/3)-stationary point in the finite-sum setting, where n is the number of samples. This improves on the previous best rate of Õ([√(d)/nε]^1/2). We also give a new construction that improves over the existing rates in the stochastic optimization setting, where the goal is to find approximate stationary points of the population risk. Our construction finds an Õ(1/n^1/3 + [√(d)/nε]^1/2)-stationary point of the population risk in time linear in n. Furthermore, under the additional assumption of convexity, we completely characterize the sample complexity of finding stationary points of the population risk (up to polylog factors) and show that the optimal rate on population stationarity is Θ̃(1/√(n)+√(d)/nε). Finally, we show that our methods can be used to provide dimension-independent rates of O(1/√(n)+min([√(rank)/nε]^2/3, 1/(nε)^2/5)) on population stationarity for Generalized Linear Models (GLMs), where rank is the rank of the design matrix; this improves upon the previous best known rate.
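To make the two central notions concrete, the sketch below illustrates (i) the α-stationarity check ‖∇F(w)‖ ≤ α from the definition above, and (ii) a private gradient release via the standard Gaussian mechanism, which is the basic building block in this setting. This is a minimal illustration, not the paper's algorithm; the function names and the choice to use `clip_norm` as the L2-sensitivity bound are assumptions for the example.

```python
import numpy as np

def private_gradient(grad, clip_norm, eps, delta, rng):
    """Release a gradient under (eps, delta)-DP via the Gaussian mechanism.

    Illustrative sketch: clip the gradient to L2 norm `clip_norm` (so
    `clip_norm` bounds the L2 sensitivity of the release), then add
    Gaussian noise with the standard calibration
    sigma = clip_norm * sqrt(2 ln(1.25/delta)) / eps.
    """
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return clipped + rng.normal(0.0, sigma, size=grad.shape)

def is_alpha_stationary(grad, alpha):
    """w is an alpha-stationary point of F when ||grad F(w)|| <= alpha."""
    return np.linalg.norm(grad) <= alpha
```

Note that the stationarity check is applied to the true gradient ∇F(w); the noise added for privacy is what drives the α in the rates quoted above, since the released gradient only approximates the true one.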


