Tensor Monte Carlo: particle methods for the GPU era

06/22/2018
by Laurence Aitchison, et al.

Multi-sample objectives improve over single-sample estimates by giving tighter variational bounds and more accurate estimates of posterior uncertainty. However, these multi-sample techniques scale poorly, in the sense that the number of samples required to maintain the same quality of posterior approximation scales exponentially in the number of latent dimensions. One approach to addressing this issue is sequential Monte Carlo (SMC). However, for many problems SMC is prohibitively slow because the resampling step imposes an inherently sequential structure on the computation, which is difficult to parallelise effectively on GPU hardware. We developed tensor Monte Carlo (TMC) to address these issues. In particular, whereas the usual multi-sample objective draws K samples from a joint distribution over all latent variables, we draw K samples for each of the n individual latent variables, and form our bound by averaging over all K^n combinations of samples from each individual latent. While this sum over exponentially many terms might seem intractable, in many cases it can be computed efficiently by exploiting conditional independence structure. In particular, we generalise and simplify classical algorithms such as message passing by noting that these sums can be written in an extremely simple, general form: a series of tensor inner-products, which can be depicted graphically as reductions of a factor graph. As such, we can straightforwardly combine summation over discrete variables with importance sampling over continuous variables.
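The averaging scheme is easiest to see on a small example. Below is a minimal sketch (assuming a two-latent Gaussian chain z1 → z2 → x; the model, the Normal proposals, the parameter values, and K are our illustrative choices, not the paper's experiments) showing how the average over all K^2 sample combinations reduces to a chain of log-space tensor inner-products, so that for n chained latents the cost would grow as O(nK^2) rather than K^n.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)
K = 32                      # samples drawn per latent variable
x = 1.5                     # one observed datapoint (assumed)

def log_normal(v, mu, sigma):
    return -0.5 * ((v - mu) / sigma) ** 2 - np.log(sigma) - 0.5 * np.log(2 * np.pi)

# Assumed proposals: q(z1) = N(0, 1.5^2), q(z2) = N(0, 1.5^2).
z1 = 1.5 * rng.normal(size=K)
z2 = 1.5 * rng.normal(size=K)

# Factor tensors in log-space, one axis per latent's K samples:
lf1 = log_normal(z1, 0.0, 1.0) - log_normal(z1, 0.0, 1.5)    # log p(z1)/q(z1),   shape (K,)
lF  = log_normal(z2[None, :], z1[:, None], 1.0)              # log p(z2|z1),      shape (K, K)
lf2 = log_normal(x, z2, 1.0) - log_normal(z2, 0.0, 1.5)      # log p(x|z2)/q(z2), shape (K,)

# TMC-style bound: average the importance weight over all K^2 sample
# combinations, computed as a chain of tensor inner-products (a factor-graph
# reduction) rather than by enumerating the combinations explicitly.
msg = logsumexp(lf1[:, None] + lF, axis=0) - np.log(K)       # sum out z1's samples
tmc_bound = logsumexp(msg + lf2) - np.log(K)                 # sum out z2's samples
print(f"TMC bound on log p(x): {tmc_bound:.3f}")
```

Each reduction here is an ordinary log-space matrix-vector product, which is exactly the kind of dense tensor contraction that parallelises well on GPU hardware, in contrast to SMC's sequential resampling.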


