Latent reweighting, an almost free improvement for GANs

10/19/2021
by Thibaut Issenhuth, et al.

Standard formulations of GANs, where a continuous function deforms a connected latent space, have been shown to be misspecified when fitting different classes of images. In particular, the generator will necessarily sample some low-quality images in between the classes. Rather than modifying the architecture, a line of work aims at improving the sampling quality of pre-trained generators at the expense of increased computational cost. Building on this, we introduce an additional network to predict latent importance weights and two associated sampling methods to avoid the poorest samples. This idea has several advantages: 1) it provides a way to inject disconnectedness into any GAN architecture, 2) since the rejection happens in the latent space, it avoids going through both the generator and the discriminator, saving computation time, 3) the importance-weight formulation provides a principled way to reduce the Wasserstein distance to the target distribution. We demonstrate the effectiveness of our method on several datasets, both synthetic and high-dimensional.
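To make the idea concrete, here is a minimal sketch of latent-space rejection sampling driven by a learned importance-weight network, assuming a PyTorch setup. The class and function names (LatentWeightNet, rejection_sample_latents), the network architecture, and the bound w_max are illustrative assumptions, not the paper's exact implementation; training the weight network itself is omitted.

```python
# Sketch: a small network predicts a non-negative importance weight w(z) for
# each latent code z, and rejection sampling in the latent space keeps only
# high-weight codes before they are ever passed to the generator.
# (Hypothetical names and hyper-parameters; not the authors' code.)
import torch
import torch.nn as nn

class LatentWeightNet(nn.Module):
    """Predicts an importance weight w(z) >= 0 for a latent code z."""
    def __init__(self, latent_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),  # keep weights non-negative
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z).squeeze(-1)

@torch.no_grad()
def rejection_sample_latents(weight_net, latent_dim, n_samples, w_max, device="cpu"):
    """Accept a latent z ~ N(0, I) with probability w(z) / w_max.

    Only accepted codes need to be pushed through the (expensive) generator,
    so low-quality regions of the latent space are discarded cheaply.
    """
    accepted = []
    while sum(x.shape[0] for x in accepted) < n_samples:
        z = torch.randn(4 * n_samples, latent_dim, device=device)
        w = weight_net(z)
        keep = torch.rand_like(w) * w_max < w
        accepted.append(z[keep])
    return torch.cat(accepted)[:n_samples]

# Usage (assuming a pre-trained generator is available):
# weight_net = LatentWeightNet(latent_dim=128)
# z = rejection_sample_latents(weight_net, latent_dim=128, n_samples=64, w_max=5.0)
# images = pretrained_generator(z)
```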

research
03/02/2022

Discriminating Against Unrealistic Interpolations in Generative Adversarial Networks

Interpolations in the latent space of deep generative models is one of t...
research
03/12/2020

Your GAN is Secretly an Energy-based Model and You Should use Discriminator Driven Latent Sampling

We show that the sum of the implicit generator log-density log p_g of a ...
research
09/13/2021

Inferential Wasserstein Generative Adversarial Networks

Generative Adversarial Networks (GANs) have been impactful on many probl...
research
07/19/2023

Adversarial Likelihood Estimation with One-way Flows

Generative Adversarial Networks (GANs) can produce high-quality samples,...
research
04/12/2021

Diamond in the rough: Improving image realism by traversing the GAN latent space

In just a few years, the photo-realism of images synthesized by Generati...
research
12/23/2019

RPGAN: GANs Interpretability via Random Routing

In this paper, we introduce Random Path Generative Adversarial Network (...
research
06/08/2020

Learning disconnected manifolds: a no GANs land

Typical architectures of Generative Adversarial Networks make use of a un...
