AdaGAN: Boosting Generative Models

by Ilya Tolstikhin et al.

Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) are an effective method for training generative models of complex data such as natural images. However, they are notoriously hard to train and can suffer from the problem of missing modes, where the model is unable to produce examples in certain regions of the space. We propose an iterative procedure, called AdaGAN, in which at every step we add a new component to a mixture model by running a GAN algorithm on a reweighted sample. This is inspired by boosting algorithms, where many potentially weak individual predictors are greedily aggregated to form a strong composite predictor. We prove that such an incremental procedure converges to the true distribution in a finite number of steps if each step is optimal, and at an exponential rate otherwise. We also show experimentally that this procedure addresses the problem of missing modes.
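The outer loop described in the abstract can be sketched in a few lines. In this toy version the "weak generator" is a single Gaussian fitted to the weighted sample (a stand-in for a full GAN training run), the reweighting simply up-weights points the current mixture covers poorly, and the mixing rate `beta` is a fixed constant; these are illustrative simplifications, not the paper's exact reweighting scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_weak_generator(data, weights):
    """Stand-in for one GAN training run: fit a single Gaussian
    to the reweighted 1-D sample."""
    mu = np.average(data, weights=weights)
    var = np.average((data - mu) ** 2, weights=weights)
    return mu, np.sqrt(var) + 1e-6

def mixture_density(x, components, mix_weights):
    """Density of the current Gaussian mixture at points x."""
    dens = np.zeros_like(x, dtype=float)
    for (mu, sigma), w in zip(components, mix_weights):
        dens += w * np.exp(-0.5 * ((x - mu) / sigma) ** 2) \
                / (sigma * np.sqrt(2 * np.pi))
    return dens

def adagan(data, n_steps=5, beta=0.5):
    """Toy AdaGAN loop: each step trains a new component on a
    reweighted sample and blends it into the mixture with rate beta."""
    components, mix_weights = [], []
    sample_w = np.ones(len(data)) / len(data)  # start uniform
    for _ in range(n_steps):
        components.append(fit_weak_generator(data, sample_w))
        if not mix_weights:
            mix_weights = [1.0]
        else:
            # shrink old components, give the new one weight beta
            mix_weights = [w * (1 - beta) for w in mix_weights] + [beta]
        # up-weight examples the current mixture covers poorly
        # (illustrative rule; the paper derives its own reweighting)
        dens = mixture_density(data, components, mix_weights)
        sample_w = 1.0 / (dens + 1e-6)
        sample_w /= sample_w.sum()
    return components, mix_weights

if __name__ == "__main__":
    data = np.concatenate([rng.normal(-4, 0.5, 400),
                           rng.normal(4, 0.5, 400)])
    comps, weights = adagan(data, n_steps=3)
    print(comps, weights)
```

The key structural point the sketch preserves is that the composite model is a growing convex mixture: the mixing weights always sum to one, and each iteration sees a sample reweighted toward regions the mixture has not yet covered.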


