Training Wasserstein GANs without gradient penalties

10/27/2021
by Dohyun Kwon, et al.

We propose a stable method for training Wasserstein generative adversarial networks. To enhance stability, we consider two objective functions based on the c-transform from Kantorovich duality, which arises in the theory of optimal transport. We show experimentally that this algorithm effectively enforces the Lipschitz constraint on the discriminator where other standard methods fail to do so. As a consequence, our method yields accurate estimates of both the optimal discriminator and the Wasserstein distance between the true distribution and the generated one. It requires neither gradient penalties nor the corresponding hyperparameter tuning, and it is computationally more efficient than other methods. At the same time, it yields competitive generators of synthetic images on the MNIST, F-MNIST, and CIFAR-10 datasets.
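To give a concrete sense of the c-transform that underlies the dual objective: under Kantorovich duality, W(μ, ν) = sup_φ E_μ[φ] + E_ν[φ^c], where the c-transform is φ^c(y) = inf_x [c(x, y) − φ(x)]. On a minibatch, the infimum becomes a minimum over the sampled points. The following sketch (function names and the 1-D absolute-value cost are illustrative assumptions, not the paper's implementation) computes a batch estimate of this dual objective:

```python
import numpy as np

def c_transform(cost, phi):
    """Batch c-transform: phi_c[j] = min_i (cost[i, j] - phi[i])."""
    return np.min(cost - phi[:, None], axis=0)

def dual_objective(x, y, phi):
    """Kantorovich dual estimate E[phi(x)] + E[phi^c(y)] on samples.

    x : samples from the true distribution, shape (m,)
    y : samples from the generated distribution, shape (n,)
    phi : discriminator values at the x samples, shape (m,)
    Cost c(x_i, y_j) = |x_i - y_j| (illustrative 1-D W1 cost).
    """
    cost = np.abs(x[:, None] - y[None, :])
    phi_c = c_transform(cost, phi)
    return phi.mean() + phi_c.mean()

# Point masses at 0 and 1 with phi = 0: the c-transform gives
# phi^c(1) = |0 - 1| - 0 = 1, so the dual estimate is 1,
# matching the Wasserstein-1 distance between the two masses.
print(dual_objective(np.array([0.0]), np.array([1.0]), np.array([0.0])))
```

Because φ^c is computed directly from φ and the cost, the second potential never has to be parameterized or penalized into Lipschitz compliance, which is the mechanism that lets this family of methods avoid gradient penalties.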


