On stochastic gradient Langevin dynamics with dependent data streams in the logconcave case

12/06/2018
by M. Barkhagen et al.

Stochastic Gradient Langevin Dynamics (SGLD) combines a Robbins-Monro type algorithm with Langevin dynamics to perform data-driven stochastic optimization. In this paper, the SGLD method with fixed step size λ is considered in order to sample from a logconcave target distribution π, known up to a normalisation factor. We assume that unbiased estimates of the gradient from possibly dependent observations are available. It is shown that, for all ε>0, the Wasserstein-2 distance of the nth iterate of the SGLD algorithm from π is dominated by c_1(ε)[λ^{1/2−ε} + e^{−aλn}] with appropriate constants c_1(ε), a>0.
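The fixed-step SGLD recursion analysed in the abstract is θ_{n+1} = θ_n − λ H(θ_n, X_{n+1}) + √(2λ) ξ_{n+1}, where H is an unbiased estimate of the gradient of −log π built from the (possibly dependent) data stream and ξ is standard Gaussian noise. A minimal sketch of that recursion follows; the function names, the toy standard-Gaussian target, and the noisy gradient H(θ, x) = θ + x are illustrative assumptions, not the paper's setting.

```python
import numpy as np

def sgld(grad_estimate, theta0, lam, n_iter, data_stream, rng=None):
    """Fixed-step-size SGLD: theta <- theta - lam * H(theta, x) + sqrt(2*lam) * xi.

    grad_estimate(theta, x) is an unbiased estimate of grad(-log pi)(theta);
    data_stream yields the (possibly dependent) observations X_1, X_2, ...
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        x = next(data_stream)                      # next observation from the stream
        xi = rng.standard_normal(theta.shape)      # xi ~ N(0, I)
        theta = theta - lam * grad_estimate(theta, x) + np.sqrt(2.0 * lam) * xi
    return theta

# Toy example (hypothetical): sample from pi(theta) ∝ exp(-theta^2 / 2),
# so grad(-log pi)(theta) = theta, perturbed by mean-zero observation noise.
def make_stream(rng):
    while True:
        yield 0.1 * rng.standard_normal()

rng = np.random.default_rng(0)
samples = [sgld(lambda t, x: t + x, 0.0, lam=0.01, n_iter=500,
                data_stream=make_stream(rng), rng=rng)
           for _ in range(200)]
```

With a small fixed λ, the empirical law of the iterates approximates the standard Gaussian target up to the O(λ^{1/2−ε}) bias quantified in the abstract's Wasserstein-2 bound.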


Related research

05/30/2019 · On stochastic gradient Langevin dynamics with dependent data streams: the fully non-convex case
We consider the problem of sampling from a target distribution which is ...

05/04/2021 · On the stability of the stochastic gradient Langevin algorithm with dependent data stream
We prove, under mild conditions, that the stochastic gradient Langevin d...

09/17/2018 · Zeroth-order (Non)-Convex Stochastic Optimization via Conditional Gradient and Gradient Updates
In this paper, we propose and analyze zeroth-order stochastic approximat...

01/17/2023 · Geometric ergodicity of SGLD via reflection coupling
We consider the geometric ergodicity of the Stochastic Gradient Langevin...

07/19/2022 · A sharp uniform-in-time error estimate for Stochastic Gradient Langevin Dynamics
We establish a sharp uniform-in-time error estimate for the Stochastic G...

11/27/2018 · Reliable uncertainty estimate for antibiotic resistance classification with Stochastic Gradient Langevin Dynamics
Antibiotic resistance monitoring is of paramount importance in the face ...

07/02/2020 · A fully data-driven approach to minimizing CVaR for portfolio of assets via SGLD with discontinuous updating
A new approach in stochastic optimization via the use of stochastic grad...
