
Asymptotic bias of inexact Markov Chain Monte Carlo methods in high dimension

by Alain Durmus, et al.

This paper establishes non-asymptotic bounds in Wasserstein distance between the invariant probability measures of inexact MCMC methods and their target distribution. In particular, the results apply to the unadjusted Langevin algorithm and to unadjusted Hamiltonian Monte Carlo, but also to methods based on other discretization schemes. Our focus is on the precise dependence of the accuracy on both the dimension and the discretization step size. We show that the dimension dependence is governed by a few key quantities. As a consequence, for several important classes of models, the same dependence on the step size and the dimension as in the product case can be recovered. For more general models, however, the dimension dependence of the asymptotic bias may be worse than in the product case, even if the exact dynamics has dimension-free mixing properties.
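To make the notion of asymptotic bias concrete, here is a minimal sketch of the unadjusted Langevin algorithm discussed in the abstract, run on an illustrative one-dimensional standard Gaussian target (the target, step size, and chain length are choices for this example, not taken from the paper). Because the Euler discretization is never corrected by an accept/reject step, the chain's stationary variance differs from the target's by a term of order h in the step size: for this target the invariant variance works out to 1/(1 - h/2) rather than 1.

```python
import numpy as np

def ula(grad_log_pi, x0, step, n_steps, rng):
    """Unadjusted Langevin algorithm: Euler discretization of the Langevin diffusion.

    Update: x_{k+1} = x_k + step * grad_log_pi(x_k) + sqrt(2 * step) * N(0, I).
    """
    x = np.array(x0, dtype=float)
    samples = np.empty((n_steps,) + x.shape)
    for k in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + step * grad_log_pi(x) + np.sqrt(2.0 * step) * noise
        samples[k] = x
    return samples

# Standard Gaussian target: log pi(x) = -x^2/2, so grad log pi(x) = -x.
rng = np.random.default_rng(0)
h = 0.1
samples = ula(lambda x: -x, np.zeros(1), h, 100_000, rng)

emp_var = samples[1_000:].var()     # discard burn-in, estimate stationary variance
# For this target the ULA update is x_{k+1} = (1 - h) x_k + sqrt(2h) * xi,
# whose stationary variance solves v = (1 - h)^2 v + 2h, i.e. v = 1 / (1 - h/2).
exact_var = 1.0 / (1.0 - h / 2.0)
print(emp_var, exact_var)           # empirical variance exceeds the target's 1.0 by O(h)
```

Shrinking h drives the invariant measure of the chain toward the target, at the cost of slower exploration; how this bias scales jointly in h and the dimension is exactly what the paper quantifies.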



