
Approximate sampling and estimation of partition functions using neural networks

by George T. Cantwell et al.
University of Michigan

We consider the closely related problems of sampling from a distribution known only up to a normalizing constant, and estimating that normalizing constant. We show how variational autoencoders (VAEs) can be applied to this task. In their standard applications, VAEs are trained to fit data drawn from an intractable distribution. We invert this logic and train the VAE to fit a simple, tractable distribution, assuming a complex and intractable latent distribution that is specified up to normalization. This procedure constructs approximations without the use of training data or Markov chain Monte Carlo sampling. We illustrate our method on three examples: the Ising model, graph clustering, and ranking.
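The paper's target quantity, a partition function, can be made concrete with a small example that is not from the paper itself: estimating log Z for a 1-D Ising chain by importance sampling from a simple tractable proposal (here, independent uniform spins, a crude stand-in for the learned variational distribution). The chain length, coupling, and function names are illustrative assumptions.

```python
import itertools
import math
import random

def energy(spins, J=1.0):
    # Nearest-neighbour Ising energy for an open 1-D chain of +/-1 spins.
    return -J * sum(spins[i] * spins[i + 1] for i in range(len(spins) - 1))

def exact_log_Z(n, J=1.0):
    # Brute-force enumeration of all 2**n configurations (feasible for small n).
    total = sum(math.exp(-energy(s, J))
                for s in itertools.product([-1, 1], repeat=n))
    return math.log(total)

def importance_log_Z(n, J=1.0, num_samples=20000, seed=0):
    # Proposal q(s) = 2**-n (independent uniform spins), so
    # Z = E_q[ exp(-E(s)) / q(s) ], estimated by a sample mean.
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(num_samples):
        s = [rng.choice([-1, 1]) for _ in range(n)]
        acc += math.exp(-energy(s, J)) * 2 ** n
    return math.log(acc / num_samples)

print(exact_log_Z(5), importance_log_Z(5))
```

A uniform proposal already gives a low-variance estimate for five spins; the method described above can be read as replacing this fixed proposal with one fitted by a VAE, so the same estimator scales to distributions where enumeration is impossible.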



