
The Cramer Distance as a Solution to Biased Wasserstein Gradients

05/30/2017
by Marc G. Bellemare, et al. (Google)

The Wasserstein probability metric has received much attention from the machine learning community. Unlike the Kullback-Leibler divergence, which strictly measures change in probability, the Wasserstein metric reflects the underlying geometry between outcomes. The value of being sensitive to this geometry has been demonstrated in, among other settings, ordinal regression and generative modelling. In this paper we describe three natural properties of probability divergences that reflect requirements from machine learning: sum invariance, scale sensitivity, and unbiased sample gradients. The Wasserstein metric possesses the first two properties but, unlike the Kullback-Leibler divergence, does not possess the third. We provide empirical evidence suggesting that this is a serious issue in practice. Leveraging insights from probabilistic forecasting we propose an alternative to the Wasserstein metric, the Cramér distance. We show that the Cramér distance possesses all three desired properties, combining the best of the Wasserstein and Kullback-Leibler divergences. To illustrate the relevance of the Cramér distance in practice we design a new algorithm, the Cramér Generative Adversarial Network (GAN), and show that it performs significantly better than the related Wasserstein GAN.
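To make the quantity concrete, below is a minimal NumPy sketch (ours, not code from the paper or the repository listed below) of a sample-based estimate of the one-dimensional Cramér distance. It uses the identity that in one dimension the Cramér distance l_2^2(P, Q) = ∫ (F_P(t) − F_Q(t))^2 dt equals half the energy distance, E|X − Y| − ½E|X − X'| − ½E|Y − Y'|; the function names are our own choices.

```python
import numpy as np

def _mean_offdiag(d):
    # The diagonal of a within-sample distance matrix is zero, so
    # averaging over the n*(n-1) off-diagonal pairs (rather than n^2)
    # removes the downward bias of the naive mean.
    n = d.shape[0]
    return d.sum() / (n * (n - 1))

def cramer_distance_1d(x, y):
    """Sample estimate of the 1-D Cramer distance
        l_2^2(P, Q) = integral of (F_P(t) - F_Q(t))^2 dt
    via the energy-distance identity (valid in one dimension):
        l_2^2(P, Q) = E|X - Y| - 0.5 E|X - X'| - 0.5 E|Y - Y'|,
    where X, X' ~ P and Y, Y' ~ Q are all independent."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # Cross term: mean absolute difference over all (x_i, y_j) pairs.
    xy = np.abs(x[:, None] - y[None, :]).mean()
    # Within terms: exclude the zero diagonal (U-statistic form).
    xx = _mean_offdiag(np.abs(x[:, None] - x[None, :]))
    yy = _mean_offdiag(np.abs(y[:, None] - y[None, :]))
    return xy - 0.5 * xx - 0.5 * yy

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    same = cramer_distance_1d(rng.normal(size=2000), rng.normal(size=2000))
    apart = cramer_distance_1d(rng.normal(size=2000),
                               rng.normal(loc=1.0, size=2000))
    print(f"same distribution: {same:.4f}, shifted by 1: {apart:.4f}")
```

Because the estimator's expectation equals the population distance, gradients of this sample loss with respect to model parameters are also correct in expectation, which is the "unbiased sample gradients" property the abstract contrasts with the Wasserstein metric.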



Related Research

04/18/2019 · From GAN to WGAN
This paper explains the math behind a generative adversarial network (GA...

10/12/2020 · Permutation invariant networks to learn Wasserstein metrics
Understanding the space of probability measures on a metric space equipp...

03/12/2020 · Statistical and Topological Properties of Sliced Probability Divergences
The idea of slicing divergences has been proven to be successful when co...

07/19/2023 · Revisiting invariances and introducing priors in Gromov-Wasserstein distances
Gromov-Wasserstein distance has found many applications in machine learn...

12/24/2019 · Barycenters of Natural Images – Constrained Wasserstein Barycenters for Image Morphing
Image interpolation, or image morphing, refers to a visual transition be...

10/07/2022 · Adversarial network training using higher-order moments in a modified Wasserstein distance
Generative-adversarial networks (GANs) have been used to produce data cl...

08/13/2018 · The Gromov-Wasserstein distance between networks and stable network invariants
We define a metric, the Network Gromov-Wasserstein distance, on weight...

Code Repositories

cramer-gan

TensorFlow implementation of "The Cramer Distance as a Solution to Biased Wasserstein Gradients" (https://arxiv.org/pdf/1705.10743.pdf)

