
(f,Γ)-Divergences: Interpolating between f-Divergences and Integral Probability Metrics

by   Jeremiah Birrell, et al.

We develop a general framework for constructing new information-theoretic divergences that rigorously interpolate between f-divergences and integral probability metrics (IPMs), such as the Wasserstein distance. These new divergences inherit features from IPMs, such as the ability to compare distributions that are not absolutely continuous with respect to one another, as well as from f-divergences, for instance the strict concavity of their variational representations and the ability to compare heavy-tailed distributions. Combined, these features yield a divergence with improved convergence and estimation properties for statistical learning applications. We demonstrate their use in training generative adversarial networks (GANs) on heavy-tailed data and also show that they can outperform gradient-penalized Wasserstein GAN in image generation.
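To make the variational representation mentioned above concrete, here is a minimal sketch, assuming KL as the f-divergence. The Donsker–Varadhan formula KL(Q‖P) = sup_g { E_Q[g] − log E_P[e^g] } holds over all bounded measurable g; restricting g to a function class Γ (e.g., 1-Lipschitz functions) is the basic mechanism behind (f,Γ)-style objectives. The choice of distributions and the test function below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Samples from P = N(0, 1) and Q = N(1, 1).
x_p = rng.normal(0.0, 1.0, 100_000)
x_q = rng.normal(1.0, 1.0, 100_000)

def dv_objective(g, xp, xq):
    """Donsker-Varadhan objective E_Q[g] - log E_P[exp(g)].

    Any g yields a lower bound on KL(Q || P); the supremum over all
    g attains it, and restricting g to a class Gamma (e.g., Lipschitz
    functions) gives an interpolated, IPM-like objective.
    """
    return g(xq).mean() - np.log(np.mean(np.exp(g(xp))))

# For these Gaussians the optimal g is the log-likelihood ratio,
# g(x) = log dQ/dP(x) = x - 1/2, and KL(Q || P) = 1/2.
est = dv_objective(lambda x: x - 0.5, x_p, x_q)
```

With the optimal g, the Monte Carlo estimate `est` concentrates near the true KL value of 0.5; a suboptimal g would give a strictly smaller value, which is what makes this objective usable as a training signal for the discriminator in a GAN.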

