Sliced-Wasserstein Gradient Flows

10/21/2021
by Clément Bonet, et al.

Minimizing functionals in the space of probability distributions can be done with Wasserstein gradient flows. To solve them numerically, a possible approach is to rely on the Jordan-Kinderlehrer-Otto (JKO) scheme, which is analogous to the proximal scheme in Euclidean spaces. However, this bilevel optimization problem is known for its computational challenges, especially in high dimension. To alleviate this, very recent works propose to approximate the JKO scheme by leveraging Brenier's theorem and using gradients of Input Convex Neural Networks to parameterize the density (JKO-ICNN). However, this method comes with a high computational cost and stability issues. Instead, this work proposes to use gradient flows in the space of probability measures endowed with the sliced-Wasserstein (SW) distance. We argue that this method is more flexible than JKO-ICNN, since SW enjoys a closed-form differentiable approximation. Thus, the density at each step can be parameterized by any generative model, which alleviates the computational burden and makes the method tractable in higher dimensions. Interestingly, we also show empirically that these gradient flows are strongly related to the usual Wasserstein gradient flows, and that they can be used to efficiently minimize diverse machine learning functionals.
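As a concrete illustration (not the authors' code), the following is a minimal sketch, assuming a PyTorch setting, of the Monte-Carlo sliced-Wasserstein estimate that makes each step differentiable, together with a particle-based proximal (JKO-like) step; the function names, the potential-only functional, and all hyperparameters are illustrative assumptions.

```python
import torch

def sliced_wasserstein_sq(x, y, n_projections=100):
    """Monte-Carlo estimate of the squared sliced-Wasserstein distance between
    two empirical measures given as equal-size batches x, y of shape (n, d).
    The estimate is differentiable with respect to x and y."""
    d = x.shape[1]
    theta = torch.randn(n_projections, d, device=x.device, dtype=x.dtype)
    theta = theta / theta.norm(dim=1, keepdim=True)   # random directions on the unit sphere
    x_proj = x @ theta.T                              # project samples: shape (n, n_projections)
    y_proj = y @ theta.T
    x_sorted, _ = torch.sort(x_proj, dim=0)           # 1D optimal transport reduces to sorting
    y_sorted, _ = torch.sort(y_proj, dim=0)
    return ((x_sorted - y_sorted) ** 2).mean()

def sw_proximal_step(particles_prev, potential, tau=0.1, n_inner=200, lr=1e-2):
    """One JKO-like proximal step on a particle approximation of the measure,
    shown here for a simple potential-energy functional; hyperparameters are
    illustrative, not the paper's settings."""
    x = particles_prev.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(n_inner):
        optimizer.zero_grad()
        loss = potential(x).mean() + sliced_wasserstein_sq(x, particles_prev) / (2.0 * tau)
        loss.backward()
        optimizer.step()
    return x.detach()

# Example: flow a Gaussian point cloud toward the minimizer of V(x) = ||x||^2 / 2.
x0 = torch.randn(500, 2) + 3.0
x1 = sw_proximal_step(x0, lambda x: 0.5 * (x ** 2).sum(dim=1))
```

In this sketch the measure is represented by particles for simplicity; as the abstract notes, the density at each step could instead be parameterized by any generative model, with the same sliced-Wasserstein proximal term evaluated on its samples.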



Related research

06/01/2021 · Large-Scale Wasserstein Gradient Flows
Wasserstein gradient flows provide a powerful means of understanding and...

10/24/2020 · Gradient Flows in Dataset Space
The current practice in machine learning is traditionally model-centric,...

01/11/2023 · Wasserstein Gradient Flows of the Discrepancy with Distance Kernel on the Line
This paper provides results on Wasserstein gradient flows between measur...

02/06/2020 · Normalizing Flows on Tori and Spheres
Normalizing flows are a powerful tool for building expressive distributi...

10/21/2019 · Kernelized Wasserstein Natural Gradient
Many machine learning problems can be expressed as the optimization of s...

06/01/2021 · Optimizing Functionals on the Space of Probabilities with Input Convex Neural Networks
Gradient flows are a powerful tool for optimizing functionals in general...

08/31/2022 · A DeepParticle method for learning and generating aggregation patterns in multi-dimensional Keller-Segel chemotaxis systems
We study a regularized interacting particle method for computing aggrega...
