(Non-) asymptotic properties of Stochastic Gradient Langevin Dynamics

01/02/2015
by   Sebastian J. Vollmer, et al.

Applying standard Markov chain Monte Carlo (MCMC) algorithms to large data sets is computationally infeasible. The recently proposed stochastic gradient Langevin dynamics (SGLD) method circumvents this problem in three ways: it generates proposed moves using only a subset of the data, it skips the Metropolis-Hastings accept-reject step, and it uses sequences of decreasing step sizes. In a previous article (Teh, Thierry and Vollmer, 2014), we provided the mathematical foundations for the decreasing-step-size SGLD, including consistency and a central limit theorem. However, in practice the SGLD is run for a relatively small number of iterations, and its step size is not decreased to zero. The present article investigates the behaviour of the SGLD with fixed step size. In particular, we characterise the asymptotic bias explicitly, along with its dependence on the step size and the variance of the stochastic gradient. On that basis we derive a modified SGLD that removes, up to first order in the step size, the asymptotic bias due to the variance of the stochastic gradients. Moreover, we obtain bounds on the finite-time bias, variance and mean squared error (MSE). The theory is illustrated with a Gaussian toy model for which the bias and the MSE for the estimation of moments can be obtained explicitly. For this toy model we study the gain of the SGLD over the standard Euler method in the limit of large data sets.
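The fixed-step-size SGLD update described above can be sketched on a Gaussian toy model like the one the abstract mentions. The sketch below is illustrative only (the model parameters, step size and batch size are assumptions, not the authors' setup): each iteration takes a half-step along a subsampled, reweighted gradient of the log-posterior, injects Gaussian noise scaled by the square root of the step size, and never applies a Metropolis-Hastings correction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative Gaussian toy model (assumed, not the paper's exact setup):
# data x_i ~ N(theta, sigma2) with conjugate prior theta ~ N(0, tau2).
N, sigma2, tau2 = 10_000, 1.0, 10.0
data = rng.normal(1.0, np.sqrt(sigma2), size=N)

# Exact conjugate posterior N(mu_post, var_post), for comparison.
var_post = 1.0 / (1.0 / tau2 + N / sigma2)
mu_post = var_post * data.sum() / sigma2

def sgld(n_iter=20_000, step=1e-4, batch=100):
    """Fixed-step-size SGLD: subsampled gradient, injected noise, no accept-reject."""
    theta = 0.0
    samples = np.empty(n_iter)
    for t in range(n_iter):
        idx = rng.integers(0, N, size=batch)            # random minibatch
        grad_prior = -theta / tau2
        # Reweight the minibatch gradient by N/batch to estimate the full-data gradient.
        grad_lik = (N / batch) * np.sum((data[idx] - theta) / sigma2)
        theta += 0.5 * step * (grad_prior + grad_lik) \
                 + np.sqrt(step) * rng.normal()         # Langevin noise
        samples[t] = theta
    return samples

samples = sgld()
# The long-run sample mean tracks the exact posterior mean, while the sample
# variance is inflated by the stochastic-gradient noise at fixed step size.
print(mu_post, samples[5_000:].mean(), var_post, samples[5_000:].var())
```

With these settings the variance contributed by the subsampled gradient dominates the injected Langevin noise, so the empirical variance of the chain overshoots the true posterior variance; this is exactly the fixed-step-size bias the article quantifies.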


Related research

- On Riemannian Stochastic Approximation Schemes with Fixed Step-Size (02/15/2021)
  This paper studies fixed step-size stochastic approximation (SA) schemes...

- On the Convergence of Stochastic Gradient MCMC Algorithms with High-Order Integrators (10/21/2016)
  Recent advances in Bayesian learning with large-scale data have witnesse...

- Functional Central Limit Theorem and Strong Law of Large Numbers for Stochastic Gradient Langevin Dynamics (10/05/2022)
  We study the mixing properties of an important optimization algorithm of...

- Statistical Inference with Stochastic Gradient Algorithms (07/25/2022)
  Tuning of stochastic gradient algorithms (SGAs) for optimization and sam...

- Constant Step Size Least-Mean-Square: Bias-Variance Trade-offs and Optimal Sampling Distributions (11/29/2014)
  We consider the least-squares regression problem and provide a detailed ...

- Asymptotic bias of inexact Markov Chain Monte Carlo methods in high dimension (08/02/2021)
  This paper establishes non-asymptotic bounds on Wasserstein distances be...

- The promises and pitfalls of Stochastic Gradient Langevin Dynamics (11/25/2018)
  Stochastic Gradient Langevin Dynamics (SGLD) has emerged as a key MCMC a...
