
Analysis of Langevin Monte Carlo via convex optimization

by Alain Durmus, et al.

In this paper, we provide new insights into the Unadjusted Langevin Algorithm (ULA). We show that this method can be formulated as a first-order optimization algorithm for an objective functional defined on the Wasserstein space of order 2. Using this interpretation and techniques borrowed from convex optimization, we give a non-asymptotic analysis of ULA for sampling from a log-concave, smooth target distribution on R^d. Our proofs extend readily to Stochastic Gradient Langevin Dynamics (SGLD), a popular extension of ULA. Finally, this interpretation leads to a new methodology for sampling from non-smooth target distributions, for which a similar analysis is carried out.
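To make the setting concrete, here is a minimal sketch of the Unadjusted Langevin Algorithm for a smooth, log-concave target π(x) ∝ exp(−U(x)): each step takes a gradient descent move on U plus Gaussian noise, x_{k+1} = x_k − γ∇U(x_k) + √(2γ) ξ_k. The function names, the choice of target, and all parameter values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ula(grad_U, x0, step, n_iters, rng):
    """Unadjusted Langevin Algorithm.

    Iterates x_{k+1} = x_k - step * grad_U(x_k) + sqrt(2 * step) * xi_k,
    where xi_k are i.i.d. standard Gaussian vectors.
    """
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_iters,) + x.shape)
    for k in range(n_iters):
        noise = rng.standard_normal(x.shape)
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * noise
        samples[k] = x
    return samples

# Illustrative target: standard Gaussian on R^2, U(x) = ||x||^2 / 2,
# so grad_U(x) = x. The chain's stationary law is close to N(0, I)
# up to a bias of order O(step).
rng = np.random.default_rng(0)
samples = ula(lambda x: x, x0=np.zeros(2), step=0.05, n_iters=20000, rng=rng)
tail = samples[5000:]  # discard burn-in
print(tail.mean(axis=0))  # close to [0, 0]
print(tail.var(axis=0))   # close to 1 in each coordinate
```

The fixed step size `step` is exactly the source of ULA's asymptotic bias analyzed in the paper: the chain targets a perturbation of π, and the non-asymptotic bounds quantify this in Wasserstein distance as a function of the step size.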



