Approximation for Probability Distributions by Wasserstein GAN

03/18/2021
by Yihang Gao, et al.

In this paper, we show that the approximation of probability distributions by Wasserstein GAN depends on both the width/depth (capacity) of the generators and discriminators and on the number of training samples. We develop a quantified generalization bound for the Wasserstein distance between the generated distribution and the target distribution. It implies that with sufficiently many training samples, and with generators and discriminators of appropriate width and depth, the learned Wasserstein GAN approximates the target distribution well. We discover that discriminators suffer from the curse of dimensionality, meaning that GANs place a higher capacity requirement on discriminators than on generators, which is consistent with the theory in arXiv:1703.00573v5 [cs.LG]. More importantly, overly deep (high-capacity) generators may yield worse results after training than low-capacity generators if the discriminators are not strong enough. Unlike the original Wasserstein GAN of arXiv:1701.07875v3 [stat.ML], we adopt GroupSort neural networks (arXiv:1811.05381v2 [cs.LG]) in the model for their better approximation of 1-Lipschitz functions. Compared with some existing generalization (convergence) analyses of GANs, we expect our results to be more broadly applicable.
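For context, Wasserstein GAN trains the discriminator to realize the Kantorovich-Rubinstein dual form of the Wasserstein-1 distance between the target distribution \(\mu\) and the generated distribution \(\nu\), with the discriminator playing the role of the 1-Lipschitz test function:

\[
W_1(\mu, \nu) = \sup_{\|f\|_{\mathrm{Lip}} \le 1} \mathbb{E}_{x \sim \mu}[f(x)] - \mathbb{E}_{y \sim \nu}[f(y)].
\]

The GroupSort networks cited above keep the discriminator inside this 1-Lipschitz class by combining norm-bounded weight matrices with the GroupSort activation. Below is a minimal NumPy sketch of that activation, following the definition in arXiv:1811.05381v2; the function name, the batch-major layout, and the default group size of 2 are illustrative choices, not taken from the paper.

```python
import numpy as np

def groupsort(x: np.ndarray, group_size: int = 2) -> np.ndarray:
    """GroupSort activation: split each feature vector into contiguous
    groups of `group_size` units and sort within each group (ascending).
    With group_size == 2 this reduces to the MaxMin activation.
    The map only permutes its inputs, so it is 1-Lipschitz and
    norm-preserving, unlike ReLU, which discards information."""
    batch, features = x.shape
    assert features % group_size == 0, "feature dim must divide into groups"
    grouped = x.reshape(batch, features // group_size, group_size)
    return np.sort(grouped, axis=-1).reshape(batch, features)

# Hypothetical usage: pairs (3, 1) and (0, 2) are sorted within each group.
print(groupsort(np.array([[3.0, 1.0, 0.0, 2.0]])))  # [[1. 3. 0. 2.]]
```

Paired with weight matrices whose norms are constrained to at most 1, a GroupSort network is 1-Lipschitz by construction and, unlike ReLU networks under the same constraint, can approximate arbitrary 1-Lipschitz functions, which is the property the abstract appeals to.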


Related research:

05/25/2022  Learning Distributions by Generative Adversarial Networks: Approximation and Generalization
We study how well generative adversarial networks (GAN) learn probabilit...

01/29/2021  On the capacity of deep generative networks for approximating distributions
We study the efficacy and efficiency of deep generative networks for app...

07/13/2020  Lessons Learned from the Training of GANs on Artificial Datasets
Generative Adversarial Networks (GANs) have made great progress in synth...

06/27/2018  Approximability of Discriminators Implies Diversity in GANs
While Generative Adversarial Networks (GANs) have empirically produced i...

10/24/2021  Non-Asymptotic Error Bounds for Bidirectional GANs
We derive nearly sharp bounds for the bidirectional GAN (BiGAN) estimati...

01/18/2022  Minimax Optimality (Probably) Doesn't Imply Distribution Learning for GANs
Arguably the most fundamental question in the theory of generative adver...

03/05/2018  Memorization Precedes Generation: Learning Unsupervised GANs with Memory Networks
We propose an approach to address two issues that commonly occur during ...
