Flexible Prior Distributions for Deep Generative Models

10/31/2017
by Yannic Kilcher et al.

We consider the problem of training generative models with deep neural networks as generators, i.e., networks that map latent codes to data points. Whereas the dominant paradigm combines simple priors over codes with complex deterministic models, we argue that it might be advantageous to use more flexible code distributions. We demonstrate how such distributions can be induced directly from the data. The benefits include more powerful generative models, better modeling of latent structure, and explicit control over the degree of generalization.
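The central idea, inducing the latent code distribution from the data itself rather than fixing a simple prior, can be illustrated with a short sketch. The snippet below is not the paper's exact procedure: it assumes a PyTorch generator, recovers codes by gradient-descent inversion of data points, and fits an illustrative Gaussian mixture to those codes so that new latents can be sampled from it instead of a fixed N(0, I) prior.

```python
# A minimal sketch (not the authors' exact method) of inducing a latent
# code distribution from data: invert a generator G by gradient descent
# on the codes, then fit a flexible density to the recovered codes and
# sample from it instead of a fixed standard-normal prior.
import torch
import torch.nn as nn
from sklearn.mixture import GaussianMixture

latent_dim, data_dim = 8, 32

# Stand-in generator; in practice this would be a trained network.
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))

def invert(G, x, steps=200, lr=0.05):
    """Recover latent codes z such that G(z) approximates the data batch x."""
    z = torch.zeros(x.shape[0], latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((G(z) - x) ** 2).mean()  # reconstruction error in data space
        loss.backward()
        opt.step()                       # only z is updated, G stays fixed
    return z.detach()

# Toy "data"; a real run would invert points from the training set.
x_batch = torch.randn(256, data_dim)
codes = invert(G, x_batch)

# Fit a flexible prior (here an illustrative 5-component Gaussian mixture)
# to the recovered codes, then sample new latents from it for generation.
prior = GaussianMixture(n_components=5).fit(codes.numpy())
z_new, _ = prior.sample(16)
samples = G(torch.as_tensor(z_new, dtype=torch.float32))
```

Here the generator would normally be trained beforehand, and the fitted density could be any flexible model; the Gaussian mixture is only a stand-in to show codes being recovered from data and then reused as a sampling prior.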


