Improved Convergence Rate of Stochastic Gradient Langevin Dynamics with Variance Reduction and its Application to Optimization

03/30/2022
by   Yuri Kinoshita, et al.

Stochastic Gradient Langevin Dynamics (SGLD) is one of the most fundamental algorithms for solving sampling problems and the non-convex optimization problems that arise in many machine learning applications. In particular, its variance-reduced versions have recently gained considerable attention. In this paper, we study two variants of this kind, namely, Stochastic Variance Reduced Gradient Langevin Dynamics and Stochastic Recursive Gradient Langevin Dynamics. We prove their convergence to the target distribution in terms of KL divergence under the sole assumptions of smoothness and the log-Sobolev inequality, which are weaker conditions than those used in prior work on these algorithms. With the batch size and the inner loop length set to √(n), the gradient complexity to achieve an ϵ-precision is Õ((n + dn^{1/2}ϵ^{-1})γ²L²α^{-2}), which improves upon all previous analyses. We also present key applications of our result to non-convex optimization.
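To make the variance-reduced Langevin update concrete, here is a minimal sketch of an SVRG-style Langevin Dynamics loop, assuming the target distribution is π(x) ∝ exp(−γF(x)) with F = (1/n)Σ f_i, and with batch size and inner loop length both set to √(n) as in the abstract. The function name, step size η, and the per-example gradient interface `grad_i` are illustrative assumptions, not the paper's exact pseudocode.

```python
import numpy as np

def svrg_ld(grad_i, n, x0, eta, gamma, n_epochs, rng=None):
    """Illustrative SVRG Langevin Dynamics sketch.

    grad_i(x, idx): returns per-example gradients ∇f_i(x) for indices idx,
                    as an array of shape (len(idx), d).
    Samples approximately from pi(x) ∝ exp(-gamma * F(x)), F = (1/n) Σ f_i.
    """
    rng = np.random.default_rng() if rng is None else rng
    b = int(np.sqrt(n))   # batch size √(n), as in the abstract
    m = int(np.sqrt(n))   # inner loop length √(n), as in the abstract
    d = x0.size
    x = x0.copy()
    for _ in range(n_epochs):
        w = x.copy()                                         # snapshot point
        full_grad = grad_i(w, np.arange(n)).mean(axis=0)     # full gradient at snapshot
        for _ in range(m):
            idx = rng.choice(n, size=b, replace=False)
            # variance-reduced gradient estimate (SVRG correction)
            v = (grad_i(x, idx) - grad_i(w, idx)).mean(axis=0) + full_grad
            # Langevin step: gradient drift plus Gaussian noise scaled by 1/γ
            noise = rng.standard_normal(d)
            x = x - eta * v + np.sqrt(2.0 * eta / gamma) * noise
    return x
```

The recursive-gradient variant discussed in the paper differs mainly in how the gradient estimate v is updated inside the inner loop; the Langevin noise term and the √(n) batch/loop-length choice are shared across both variants.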
