
Analysis of Langevin Monte Carlo via convex optimization

02/26/2018, by Alain Durmus, et al.
Affiliations: ens-cachan.fr, University of Warsaw, Zimbra, Inc.

In this paper, we provide new insights on the Unadjusted Langevin Algorithm. We show that this method can be formulated as a first-order optimization algorithm of an objective functional defined on the Wasserstein space of order 2. Using this interpretation and techniques borrowed from convex optimization, we give a non-asymptotic analysis of this method for sampling from a log-concave, smooth target distribution on R^d. Our proofs then extend easily to the Stochastic Gradient Langevin Dynamics, a popular extension of the Unadjusted Langevin Algorithm. Finally, this interpretation leads to a new methodology for sampling from a non-smooth target distribution, for which a similar study is carried out.
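
For readers unfamiliar with the algorithm the abstract refers to, here is a minimal NumPy sketch (not the paper's code) of the Unadjusted Langevin Algorithm update x_{k+1} = x_k - step * grad U(x_k) + sqrt(2 * step) * xi_k, with xi_k standard Gaussian, which targets pi proportional to exp(-U); in this line of work the objective functional on Wasserstein space is the relative entropy with respect to that target. SGLD is obtained by replacing grad U with an unbiased stochastic estimate. The function names (ula_sample, grad_U), the step/n_iters parameters, and the Gaussian example below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ula_sample(grad_U, x0, step, n_iters, rng=None):
    """Sketch of the Unadjusted Langevin Algorithm (ULA).

    Iterates x_{k+1} = x_k - step * grad_U(x_k) + sqrt(2 * step) * xi_k,
    where xi_k is standard Gaussian noise, so that the iterates approximately
    target pi(x) proportional to exp(-U(x)).
    For SGLD, pass an unbiased stochastic estimate of grad_U instead.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_iters, x.size))
    for k in range(n_iters):
        noise = rng.standard_normal(x.size)
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * noise
        samples[k] = x
    return samples

# Illustrative usage: standard Gaussian target, U(x) = ||x||^2 / 2, so grad_U(x) = x.
draws = ula_sample(grad_U=lambda x: x, x0=np.zeros(2), step=1e-2, n_iters=5_000)
```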


Related research

03/25/2019 - Stochastic Gradient Hamiltonian Monte Carlo for Non-Convex Learning in the Big Data Regime
Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) is a momentum versio...

09/03/2015 - Train faster, generalize better: Stability of stochastic gradient descent
We show that parametric models trained by a stochastic gradient method (...

10/28/2021 - Stochastic Mirror Descent: Convergence Analysis and Adaptive Variants via the Mirror Stochastic Polyak Stepsize
We investigate the convergence of stochastic mirror descent (SMD) in rel...

12/06/2018 - On stochastic gradient Langevin dynamics with dependent data streams in the logconcave case
Stochastic Gradient Langevin Dynamics (SGLD) is a combination of a Robbi...

02/09/2022 - Reproducibility in Optimization: Theoretical Framework and Limits
We initiate a formal study of reproducibility in optimization. We define...

09/11/2018 - Smooth Structured Prediction Using Quantum and Classical Gibbs Samplers
We introduce a quantum algorithm for solving structured-prediction probl...