Inferential Wasserstein Generative Adversarial Networks

09/13/2021
by   Yao Chen, et al.

Generative Adversarial Networks (GANs) have been impactful on many problems and applications but suffer from unstable training. The Wasserstein GAN (WGAN) leverages the Wasserstein distance to avoid the pitfalls of the minimax two-player training of GANs, but it has other defects, such as mode collapse and a lack of metrics to detect convergence. We introduce a novel inferential Wasserstein GAN (iWGAN) model, a principled framework that fuses auto-encoders and WGANs. The iWGAN model jointly learns an encoder network and a generator network, motivated by the iterative primal-dual optimization process. The encoder network maps the observed samples to the latent space, and the generator network maps samples from the latent space to the data space. We establish the generalization error bound of the iWGAN to theoretically justify its performance. We further provide a rigorous probabilistic interpretation of our model under the framework of maximum likelihood estimation. The iWGAN, with a clear stopping criterion, has many advantages over other autoencoder GANs. The empirical experiments show that the iWGAN greatly mitigates the symptom of mode collapse, speeds up convergence, and is able to provide a measure of quality for each individual sample. We illustrate the ability of the iWGAN by obtaining competitive and stable performance on benchmark datasets.
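The abstract describes two coupled maps, an encoder from the data space to the latent space and a generator in the reverse direction, trained jointly with a Wasserstein-style objective. The toy sketch below is not the authors' implementation; it uses hypothetical linear maps (`encode`, `generate`) and a stand-in critic `f` purely to illustrate how a critic gap and an autoencoder reconstruction term could be combined into one objective, in the spirit of the primal-dual formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear encoder/generator weights, for illustration only.
d_x, d_z = 4, 2                        # data and latent dimensions
W_enc = rng.normal(size=(d_z, d_x)) * 0.1
W_gen = rng.normal(size=(d_x, d_z)) * 0.1

def encode(x):
    """Map observed samples from data space to latent space."""
    return x @ W_enc.T

def generate(z):
    """Map latent samples back to data space."""
    return z @ W_gen.T

def joint_objective(x, z, f):
    """Sketch of a combined objective: a Wasserstein critic gap
    between real and generated samples, plus an autoencoder
    reconstruction penalty (assumed form, not the paper's exact loss)."""
    critic_gap = f(x).mean() - f(generate(z)).mean()
    reconstruction = np.linalg.norm(x - generate(encode(x)), axis=1).mean()
    return critic_gap + reconstruction

x = rng.normal(size=(8, d_x))          # batch of "real" samples
z = rng.normal(size=(8, d_z))          # batch of latent samples
f = lambda v: v.sum(axis=1)            # stand-in critic function
value = joint_objective(x, z, f)
```

In the actual model, `encode`, `generate`, and the critic would be neural networks trained adversarially, and the reconstruction term is what ties the encoder and generator together; this sketch only shows the shape of the objective.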

Related research

- IVE-GAN: Invariant Encoding Generative Adversarial Networks (11/23/2017). "Generative adversarial networks (GANs) are a powerful framework for gene..."
- Adversarial network training using higher-order moments in a modified Wasserstein distance (10/07/2022). "Generative-adversarial networks (GANs) have been used to produce data cl..."
- Training Generative Adversarial Networks via Primal-Dual Subgradient Methods: A Lagrangian Perspective on GAN (02/06/2018). "We relate the minimax game of generative adversarial networks (GANs) to ..."
- Latent reweighting, an almost free improvement for GANs (10/19/2021). "Standard formulations of GANs, where a continuous function deforms a con..."
- Bayesian CycleGAN via Marginalizing Latent Sampling (11/19/2018). "Recent techniques built on Generative Adversarial Networks (GANs) like C..."
- Wasserstein GAN Can Perform PCA (02/25/2019). "Generative Adversarial Networks (GANs) have become a powerful framework ..."
- Witnessing Adversarial Training in Reproducing Kernel Hilbert Spaces (01/26/2019). "Modern implicit generative models such as generative adversarial network..."
