Convergence of Langevin MCMC in KL-divergence

05/25/2017
by Xiang Cheng, et al.

Langevin diffusion is a commonly used tool for sampling from a given distribution. In this work, we establish that when the target density p^* is such that −log p^* is L-smooth and m-strongly convex, discrete Langevin diffusion produces a distribution p with KL(p||p^*) ≤ ϵ in Õ(d/ϵ) steps, where d is the dimension of the sample space. We also study the convergence rate when the strong-convexity assumption is absent. By considering the Langevin diffusion as a gradient flow in the space of probability distributions, we obtain an elegant analysis that applies to the stronger property of convergence in KL-divergence and gives a conceptually simpler proof of the best-known convergence results in weaker metrics.
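For concreteness, below is a minimal NumPy sketch of the discrete Langevin diffusion the abstract refers to, i.e., the Euler–Maruyama discretization of dX_t = −∇U(X_t) dt + √2 dB_t with potential U = −log p^*. This is an illustration, not code from the paper; the function name langevin_mcmc, its parameters, and the Gaussian example target are assumptions made for the sketch.

```python
import numpy as np

def langevin_mcmc(grad_U, x0, step_size, n_steps, rng=None):
    """Unadjusted Langevin algorithm (ULA): discretized Langevin diffusion.

    grad_U    : gradient of the potential U = -log p*, assumed L-smooth
                (and, for the O~(d/eps) rate, m-strongly convex).
    x0        : initial point in R^d.
    step_size : discretization step; smaller steps reduce discretization bias.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_steps, x.size))
    for k in range(n_steps):
        noise = rng.standard_normal(x.size)
        # One Euler-Maruyama step of dX_t = -grad U(X_t) dt + sqrt(2) dB_t
        x = x - step_size * grad_U(x) + np.sqrt(2.0 * step_size) * noise
        samples[k] = x
    return samples

# Illustrative target (an assumption, not from the paper): a standard
# Gaussian, U(x) = ||x||^2 / 2, so grad_U(x) = x and L = m = 1.
if __name__ == "__main__":
    d = 10
    samples = langevin_mcmc(grad_U=lambda x: x, x0=np.zeros(d),
                            step_size=0.05, n_steps=5000)
    print(samples[1000:].mean(axis=0))  # sample mean should be near 0
```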
