
Monte Carlo Inference via Greedy Importance Sampling

by Dale Schuurmans, et al.

We present a new method for conducting Monte Carlo inference in graphical models that combines explicit search with generalized importance sampling. The idea is to reduce the variance of importance sampling by searching for significant points in the target distribution. We prove that it is possible to introduce search and still maintain unbiasedness. We then demonstrate our procedure on a few simple inference tasks and show that it can improve the inference quality of standard MCMC methods, including Gibbs sampling, Metropolis sampling, and Hybrid Monte Carlo. This paper extends previous work, which showed how greedy importance sampling could be correctly realized in the one-dimensional case.
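For context, the baseline the paper builds on is ordinary importance sampling: draw points from a tractable proposal q and reweight them by p/q so the weighted average estimates an expectation under the target p. Below is a minimal self-normalized sketch of that baseline on a toy problem; the function names and the toy target are illustrative choices, and the greedy search step that distinguishes the paper's method is not reproduced here.

```python
import math
import random

def importance_sampling(f, log_p, sample_q, log_q, n, rng):
    """Self-normalized importance sampling estimate of E_p[f].

    Points are drawn from the proposal q and reweighted by
    w = p(x)/q(x); self-normalization lets log_p be unnormalized.
    """
    num = 0.0
    den = 0.0
    for _ in range(n):
        x = sample_q(rng)
        w = math.exp(log_p(x) - log_q(x))  # importance weight p(x)/q(x)
        num += w * f(x)
        den += w
    return num / den

# Toy target: standard normal, unnormalized, so E_p[x^2] = 1.
log_p = lambda x: -0.5 * x * x
# Proposal: uniform on [-5, 5], density 1/10.
sample_q = lambda rng: rng.uniform(-5.0, 5.0)
log_q = lambda x: -math.log(10.0)

rng = random.Random(0)
est = importance_sampling(lambda x: x * x, log_p, sample_q, log_q, 20000, rng)
print(est)  # close to the true value 1.0
```

A poorly matched proposal makes the weights highly uneven and the estimate high-variance; the paper's contribution is to attack exactly this failure mode by greedily searching for high-probability points while preserving unbiasedness.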

