Statistical Guarantees of Generative Adversarial Networks for Distribution Estimation

02/10/2020
by Minshuo Chen, et al.

Generative Adversarial Networks (GANs) have achieved great success in unsupervised learning. Despite their remarkable empirical performance, theoretical understanding of the statistical properties of GANs remains limited. This paper provides statistical guarantees for GANs in the estimation of data distributions whose densities lie in a Hölder space. Our main result shows that, if the generator and discriminator network architectures are properly chosen (universally for all distributions with Hölder densities), GANs are consistent estimators of the data distribution under strong discrepancy metrics, such as the Wasserstein distance. To the best of our knowledge, this is the first statistical theory of GANs for Hölder densities. In comparison with existing works, our theory requires minimal assumptions on the data distribution. Our generator and discriminator networks utilize general weight matrices and the non-invertible ReLU activation function, while many existing works apply only to invertible weight matrices and invertible activation functions. In our analysis, we decompose the estimation error into a statistical error and an approximation error via a new oracle inequality, which may be of independent interest.
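As a concrete illustration of the Wasserstein distance used as a discrepancy metric in the abstract (this sketch is not from the paper itself), the 1-Wasserstein distance between two equal-size one-dimensional empirical distributions has a closed form: sort both samples and average the absolute differences.

```python
import numpy as np

def wasserstein_1d(x, y):
    """Exact 1-Wasserstein distance between two equal-size 1-D
    empirical distributions: in one dimension, the optimal coupling
    matches sorted samples, so W1 is the mean absolute difference
    of the order statistics."""
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    assert x.shape == y.shape, "samples must have equal size"
    return float(np.mean(np.abs(x - y)))

# Identical samples are at distance 0; shifting every point by a
# constant c moves the distribution by exactly |c| in W1.
a = np.array([0.0, 1.0, 2.0])
print(wasserstein_1d(a, a))        # 0.0
print(wasserstein_1d(a, a + 3.0))  # 3.0
```

In the GAN setting analyzed by the paper, this metric is applied to the generated and true data distributions; estimating it in high dimensions is what requires the discriminator network.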


