Geometrical Insights for Implicit Generative Modeling

12/21/2017
by Leon Bottou, et al.

Learning algorithms for implicit generative models can optimize a variety of criteria that measure how the data distribution differs from the implicit model distribution, including the Wasserstein distance, the Energy distance, and the Maximum Mean Discrepancy criterion. A careful look at the geometries induced by these distances on the space of probability measures reveals interesting differences. In particular, we can establish surprising approximate global convergence guarantees for the 1-Wasserstein distance, even when the parametric generator has a nonconvex parametrization.
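
The abstract mentions the Energy distance and the Maximum Mean Discrepancy (MMD) as criteria comparing the data and model distributions. As a minimal illustrative sketch (not code from the paper), the snippet below estimates both quantities from two finite sample sets with NumPy; the function names, the Gaussian kernel choice, and the bandwidth value are assumptions made for this example.

```python
# Illustrative sketch only: plain sample estimates of the Energy distance and the
# squared MMD (Gaussian kernel) between two sample sets. Bandwidth and names are
# assumptions, not taken from the paper.
import numpy as np

def pairwise_dists(x, y):
    # Euclidean distance matrix between rows of x (n, d) and y (m, d).
    return np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)

def energy_distance(x, y):
    # E(P, Q) = 2 E||X - Y|| - E||X - X'|| - E||Y - Y'||
    return (2 * pairwise_dists(x, y).mean()
            - pairwise_dists(x, x).mean()
            - pairwise_dists(y, y).mean())

def mmd2_gaussian(x, y, bandwidth=1.0):
    # MMD^2(P, Q) = E k(X, X') + E k(Y, Y') - 2 E k(X, Y), Gaussian kernel k.
    k = lambda a, b: np.exp(-pairwise_dists(a, b) ** 2 / (2 * bandwidth ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.normal(0.0, 1.0, size=(500, 2))    # stand-in for the data distribution
    model = rng.normal(0.5, 1.0, size=(500, 2))   # stand-in for the model distribution
    print("Energy distance:", energy_distance(data, model))
    print("MMD^2 (Gaussian):", mmd2_gaussian(data, model))
```

These are the plug-in (V-statistic) estimators; the paper's analysis concerns the geometry these criteria induce on the space of probability measures, not any particular estimator.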

Related research:

- Asymptotic Guarantees for Learning Generative Models with the Sliced-Wasserstein Distance (06/11/2019): Minimum expected distance estimation (MEDE) algorithms have been widely ...

- Generalized Sliced Wasserstein Distances (02/01/2019): The Wasserstein distance and its variations, e.g., the sliced-Wasserstei...

- A Characteristic Function Approach to Deep Implicit Generative Modeling (09/16/2019): In this paper, we formulate the problem of learning an Implicit Generati...

- On the Optimization Landscape of Maximum Mean Discrepancy (10/26/2021): Generative models have been successfully used for generating realistic s...

- Implicit Manifold Learning on Generative Adversarial Networks (10/30/2017): This paper raises an implicit manifold learning perspective in Generativ...

- Rate of convergence of the smoothed empirical Wasserstein distance (05/04/2022): Consider an empirical measure ℙ_n induced by n iid samples from a d-dime...

- Fast and Robust Comparison of Probability Measures in Heterogeneous Spaces (02/05/2020): The problem of comparing distributions endowed with their own geometry a...
