
The Cramer Distance as a Solution to Biased Wasserstein Gradients

by Marc G. Bellemare, et al.

The Wasserstein probability metric has received much attention from the machine learning community. Unlike the Kullback-Leibler divergence, which strictly measures change in probability, the Wasserstein metric reflects the underlying geometry between outcomes. The value of being sensitive to this geometry has been demonstrated, among others, in ordinal regression and generative modelling. In this paper we describe three natural properties of probability divergences that reflect requirements from machine learning: sum invariance, scale sensitivity, and unbiased sample gradients. The Wasserstein metric possesses the first two properties but, unlike the Kullback-Leibler divergence, does not possess the third. We provide empirical evidence suggesting that this is a serious issue in practice. Leveraging insights from probabilistic forecasting we propose an alternative to the Wasserstein metric, the Cramér distance. We show that the Cramér distance possesses all three desired properties, combining the best of the Wasserstein and Kullback-Leibler divergences. To illustrate the relevance of the Cramér distance in practice we design a new algorithm, the Cramér Generative Adversarial Network (GAN), and show that it performs significantly better than the related Wasserstein GAN.
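As a concrete illustration (a sketch of our own, not code from the paper), the Cramér distance between two one-dimensional distributions is the l_2 distance between their cumulative distribution functions. For discrete distributions on a common sorted support, the CDF difference is piecewise constant, so the integral reduces to a weighted sum over the gaps between consecutive support points:

```python
import numpy as np

def cramer_distance(xs, p, q):
    """Cramér (l_2) distance between two discrete distributions.

    xs : sorted 1-D support points shared by both distributions
    p, q : probability vectors over xs, each summing to 1
    """
    Fp = np.cumsum(p)          # CDF of P at each support point
    Fq = np.cumsum(q)          # CDF of Q at each support point
    gaps = np.diff(xs)         # widths of the intervals between atoms
    # Between consecutive atoms the CDF difference is constant, and beyond
    # the last atom both CDFs equal 1, so the integral is a finite sum.
    return np.sqrt(np.sum((Fp[:-1] - Fq[:-1]) ** 2 * gaps))

# Point masses at 0 and at 2: the squared CDF gap is 1 over [0, 2],
# so the Cramér distance is sqrt(2) (the 1-Wasserstein distance is 2).
d = cramer_distance(np.array([0.0, 1.0, 2.0]),
                    np.array([1.0, 0.0, 0.0]),
                    np.array([0.0, 0.0, 1.0]))
```

Unlike the Kullback-Leibler divergence, this quantity shrinks as the two point masses move closer together, which is the geometry-sensitivity the abstract refers to.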


From GAN to WGAN

This paper explains the math behind a generative adversarial network (GA...

Permutation invariant networks to learn Wasserstein metrics

Understanding the space of probability measures on a metric space equipp...

Statistical and Topological Properties of Sliced Probability Divergences

The idea of slicing divergences has been proven to be successful when co...

Revisiting invariances and introducing priors in Gromov-Wasserstein distances

Gromov-Wasserstein distance has found many applications in machine learn...

Barycenters of Natural Images – Constrained Wasserstein Barycenters for Image Morphing

Image interpolation, or image morphing, refers to a visual transition be...

Adversarial network training using higher-order moments in a modified Wasserstein distance

Generative-adversarial networks (GANs) have been used to produce data cl...

The Gromov-Wasserstein distance between networks and stable network invariants

We define a metric---the Network Gromov-Wasserstein distance---on weight...

Code Repositories


Tensorflow Implementation of "The Cramer Distance as a Solution to Biased Wasserstein Gradients"

view repo