On the Effectiveness of Least Squares Generative Adversarial Networks

12/18/2017
by Xudong Mao, et al.

Unsupervised learning with generative adversarial networks (GANs) has proven hugely successful. Regular GANs hypothesize the discriminator as a classifier with the sigmoid cross-entropy loss function. However, we find that this loss function may lead to the vanishing gradients problem during the learning process. To overcome this problem, we propose the Least Squares Generative Adversarial Networks (LSGANs), which adopt the least squares loss function for the discriminator. We show that minimizing the objective function of LSGAN is equivalent to minimizing the Pearson χ^2 divergence, and we present a theoretical analysis of the properties of LSGANs and the χ^2 divergence. LSGANs offer two benefits over regular GANs. First, LSGANs are able to generate higher-quality images than regular GANs. Second, LSGANs are more stable during the learning process. To evaluate image quality, we train LSGANs on several datasets, including LSUN and a cat dataset; the experimental results show that the images generated by LSGANs are of better quality than those generated by regular GANs. Furthermore, we evaluate the stability of LSGANs in two groups of experiments. The first compares LSGANs with regular GANs without gradient penalty: we conduct three experiments, covering a Gaussian mixture distribution, difficult architectures, and a newly proposed method, datasets with small variance, to illustrate the stability of LSGANs. The second compares LSGANs with gradient penalty against WGANs with gradient penalty (WGANs-GP). The experimental results show that LSGANs with gradient penalty succeed in training for all the difficult architectures used in WGANs-GP, including a 101-layer ResNet.
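The core change the abstract describes is replacing the sigmoid cross-entropy loss with a least squares loss on the discriminator's raw outputs. A minimal sketch of the resulting objectives, assuming the common 0/1 target coding (the paper also analyzes a general a-b-c coding whose optimum yields the Pearson χ^2 divergence):

```python
def lsgan_d_loss(d_real, d_fake):
    # Discriminator: push outputs on real samples toward the real label 1
    # and outputs on generated samples toward the fake label 0.
    return 0.5 * ((d_real - 1.0) ** 2 + d_fake ** 2)

def lsgan_g_loss(d_fake):
    # Generator: push discriminator outputs on generated samples toward
    # the real label 1. Unlike the sigmoid cross-entropy loss, this
    # penalty grows with distance from the decision boundary, so samples
    # that are "correctly" classified but far from the real data still
    # receive a gradient.
    return 0.5 * (d_fake - 1.0) ** 2
```

In practice `d_real` and `d_fake` would be batched discriminator outputs and the losses would be averaged over the batch; the scalar version above only illustrates the shape of the objectives.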

Related research:

11/13/2016 — Least Squares Generative Adversarial Networks
06/23/2023 — Penalty Gradient Normalization for Generative Adversarial Networks
04/08/2022 — Generative Adversarial Method Based on Neural Tangent Kernels
11/08/2019 — Quality Aware Generative Adversarial Networks
10/15/2019 — Connections between Support Vector Machines, Wasserstein distance and gradient-penalty GANs
09/05/2017 — Linking Generative Adversarial Learning and Binary Classification
