MonoFlow: Rethinking Divergence GANs via the Perspective of Differential Equations

02/02/2023
by   Mingxuan Yi, et al.

The conventional understanding of adversarial training in generative adversarial networks (GANs) is that the discriminator is trained to estimate a divergence and the generator learns to minimize it. We argue that, although many GAN variants were developed under this paradigm, the current theoretical understanding of GANs is inconsistent with their practical algorithms. In this paper, we leverage Wasserstein gradient flows, which characterize the evolution of particles in the sample space, to gain theoretical insight into GANs and inspiration for their algorithms. We introduce a unified generative modeling framework, MonoFlow, in which the particle evolution is rescaled by a monotonically increasing mapping of the log density ratio. Under our framework, adversarial training can be viewed as a procedure that first obtains MonoFlow's vector field by training the discriminator, after which the generator learns to draw the particle flow defined by that vector field. We also reveal a fundamental difference between variational divergence minimization and adversarial training. This analysis helps us identify which types of generator loss functions lead to successful GAN training, and it suggests that GANs admit more loss designs than those in the literature (e.g., the non-saturated loss), as long as they realize MonoFlow. Consistent empirical studies are included to validate the effectiveness of our framework.
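The particle-flow view in the abstract can be sketched in a toy setting. Assuming (per our reading) that MonoFlow's vector field takes the form v(x) = h(log r(x)) ∇x log r(x), where r = p_data/p_model is the density ratio and h is any monotonically increasing function, the sketch below evolves 1-D particles using an analytic log density ratio between Gaussians instead of a trained discriminator; refitting a Gaussian to the particles each round stands in for retraining the discriminator. The helper names and the choice h = sigmoid are illustrative, not from the paper.

```python
import numpy as np

def gaussian_log_ratio(x, mu_p, sig_p, mu_q, sig_q):
    # log p(x) - log q(x) for two 1-D Gaussians p = N(mu_p, sig_p^2),
    # q = N(mu_q, sig_q^2); a stand-in for the discriminator's output.
    return (np.log(sig_q / sig_p)
            + (x - mu_q) ** 2 / (2 * sig_q ** 2)
            - (x - mu_p) ** 2 / (2 * sig_p ** 2))

def gaussian_grad_log_ratio(x, mu_p, sig_p, mu_q, sig_q):
    # d/dx [log p(x) - log q(x)]
    return (x - mu_q) / sig_q ** 2 - (x - mu_p) / sig_p ** 2

def sigmoid(s):
    # One choice of monotonically increasing rescaling h.
    return 1.0 / (1.0 + np.exp(-s))

mu_p, sig_p = 2.0, 1.0                    # target distribution N(2, 1)
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 2000)            # particles start at the model N(0, 1)

for _ in range(2000):
    # Refit the model density to the current particles: a crude stand-in
    # for re-estimating the density ratio (training the discriminator).
    mu_q, sig_q = x.mean(), x.std()
    s = gaussian_log_ratio(x, mu_p, sig_p, mu_q, sig_q)
    g = gaussian_grad_log_ratio(x, mu_p, sig_p, mu_q, sig_q)
    # MonoFlow-style update: step along h(log r) * grad log r.
    x = x + 0.05 * sigmoid(s) * g

print(x.mean(), x.std())  # particles drift toward the target N(2, 1)
```

At equilibrium the density ratio is 1 everywhere, so the gradient of the log ratio vanishes and the particles stop moving; any monotonically increasing h yields a flow with the same fixed point, which is one way to read the paper's claim that many generator losses work as long as they realize MonoFlow.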

Related research

04/05/2020 · Discriminator Contrastive Divergence: Semi-Amortized Generative Modeling by Exploring Energy of the Discriminator
Generative Adversarial Networks (GANs) have shown great promise in model...

06/02/2023 · GANs Settle Scores!
Generative adversarial networks (GANs) comprise a generator, trained to ...

06/03/2020 · Approximation and convergence of GANs training: an SDE approach
Generative adversarial networks (GANs) have enjoyed tremendous empirical...

05/25/2023 · Unifying GANs and Score-Based Diffusion as Generative Particle Models
Particle-based deep generative models, such as gradient flows and score-...

11/07/2017 · On the Discrimination-Generalization Tradeoff in GANs
Generative adversarial training can be generally understood as minimizin...

09/06/2018 · GANs beyond divergence minimization
Generative adversarial networks (GANs) can be interpreted as an adversar...

02/02/2022 · Structure-preserving GANs
Generative adversarial networks (GANs), a class of distribution-learning...
