
Approximate sampling and estimation of partition functions using neural networks

09/21/2022
by George T. Cantwell, et al.
University of Michigan

We consider the closely related problems of sampling from a distribution known up to a normalizing constant, and estimating said normalizing constant. We show how variational autoencoders (VAEs) can be applied to this task. In their standard applications, VAEs are trained to fit data drawn from an intractable distribution. We invert the logic and train the VAE to fit a simple and tractable distribution, on the assumption of a complex and intractable latent distribution, specified up to normalization. This procedure constructs approximations without the use of training data or Markov chain Monte Carlo sampling. We illustrate our method on three examples: the Ising model, graph clustering, and ranking.
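The core quantity in the abstract is the normalizing constant (partition function) of a distribution specified only up to normalization. As a minimal illustration of that estimation problem, the toy sketch below computes the partition function of a tiny Ising model exactly by enumeration and then estimates it by importance sampling from a simple tractable proposal. The uniform proposal here is only a stand-in for the learned variational approximation the paper constructs; the lattice size, inverse temperature, and sample count are illustrative choices, not values from the paper.

```python
import itertools
import math
import random

# Toy sketch: estimate Z = sum_x exp(-beta * E(x)) for a 3x3 Ising model.
# A fixed uniform proposal q(x) stands in for the learned variational
# distribution described in the abstract; the paper trains a VAE instead.

random.seed(0)
N = 3        # lattice side length (illustrative)
BETA = 0.4   # inverse temperature (illustrative)

def energy(spins):
    """Nearest-neighbour ferromagnetic Ising energy with free boundaries."""
    e = 0.0
    for i in range(N):
        for j in range(N):
            s = spins[i * N + j]
            if i + 1 < N:
                e -= s * spins[(i + 1) * N + j]  # vertical bond
            if j + 1 < N:
                e -= s * spins[i * N + j + 1]    # horizontal bond
    return e

# Exact Z by brute force: only 2^9 = 512 configurations at this size.
Z_exact = sum(math.exp(-BETA * energy(s))
              for s in itertools.product((-1, 1), repeat=N * N))

# Importance sampling: Z = E_q[ exp(-beta * E(x)) / q(x) ],
# with q uniform, so q(x) = 2^(-9) for every configuration.
M = 200_000
q_prob = 2.0 ** (-(N * N))
total = 0.0
for _ in range(M):
    s = [random.choice((-1, 1)) for _ in range(N * N)]
    total += math.exp(-BETA * energy(s)) / q_prob
Z_est = total / M

print(Z_exact, Z_est)
```

A uniform proposal is only workable at this toy scale; for large systems its estimator variance explodes, which is exactly why one wants a learned proposal close to the target, as in the VAE construction the abstract describes.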

