Langevin Monte Carlo: random coordinate descent and variance reduction

07/26/2020
by Zhiyan Ding, et al.

Sampling from a log-concave distribution on ℝ^d (with d ≫ 1) is a popular problem with wide applications. In this paper we study the application of the random coordinate descent (RCD) method to the Langevin Monte Carlo (LMC) sampling method, and we find the theory has two sides: 1. The direct application of RCD to LMC does reduce the number of finite-differencing approximations per iteration, but it induces a large variance error term; more iterations are then needed, so ultimately the method gains no computational advantage. 2. When variance reduction techniques (such as SAGA and SVRG) are incorporated into RCD-LMC, the variance error term is reduced. The new methods, compared to vanilla LMC, reduce the total computational cost by a factor of d and achieve the optimal cost rate. We perform our investigations in both the overdamped and underdamped settings.
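To make the two schemes concrete, here is a minimal sketch (not the paper's exact algorithm) of one iteration of vanilla overdamped LMC versus RCD-LMC. In the RCD variant, only one randomly chosen partial derivative is approximated by finite differencing per step, and d times that partial derivative serves as an unbiased surrogate for the full gradient; the step size h, tolerance eps, and the Gaussian target used in the usage example are illustrative choices.

```python
import numpy as np

def lmc_step(x, grad_f, h, rng):
    """One step of vanilla overdamped LMC:
    x' = x - h * grad f(x) + sqrt(2h) * xi, with xi ~ N(0, I)."""
    return x - h * grad_f(x) + np.sqrt(2.0 * h) * rng.standard_normal(x.size)

def rcd_lmc_step(x, f, h, rng, eps=1e-5):
    """One step of RCD-LMC: approximate a single randomly chosen partial
    derivative by central finite differencing, then use d * d_i f(x) * e_i
    as an unbiased estimator of the full gradient."""
    d = x.size
    i = rng.integers(d)
    e = np.zeros(d)
    e[i] = 1.0
    di_f = (f(x + eps * e) - f(x - eps * e)) / (2.0 * eps)
    return x - h * d * di_f * e + np.sqrt(2.0 * h) * rng.standard_normal(d)

# Illustrative usage: sample from N(0, I_8), i.e. f(x) = ||x||^2 / 2.
rng = np.random.default_rng(0)
f = lambda x: 0.5 * x @ x
x = np.zeros(8)
for _ in range(5000):
    x = rcd_lmc_step(x, f, 1e-2, rng)
```

Note that vanilla LMC needs d finite-difference evaluations per step to build the full gradient, while RCD-LMC needs only two function evaluations; the trade-off highlighted in the abstract is the extra variance introduced by the d-times-rescaled single-coordinate estimator.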


