We study the computation of doubly regularized Wasserstein barycenters, ...
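As an illustrative guess at the objective (the exact form below is ours, not quoted from the abstract), "doubly regularized" plausibly means entropic smoothing applied both inside each transport term (strength \lambda) and to the barycenter itself (strength \tau):
\[
\min_{\mu \in \mathcal{P}(\mathcal{X})} \; \sum_{k=1}^{K} w_k \, \mathrm{OT}_{\lambda}(\mu_k, \mu) + \tau H(\mu),
\]
where \mathrm{OT}_{\lambda} denotes entropy-regularized optimal transport, H a (negative) entropy, and the weights w_k sum to one.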
We study the convergence to local Nash equilibria of gradient methods fo...
In supervised learning, the regularization path is sometimes used as a c...
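For concreteness, the standard notion (the squared-norm penalty below is only an example; the abstract does not specify one): the regularization path is the curve of solutions indexed by the penalty strength,
\[
\theta^{\star}(\lambda) \in \arg\min_{\theta} \; L(\theta) + \lambda \|\theta\|^2, \qquad \lambda > 0,
\]
traced as \lambda varies from large (heavy regularization) to small.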
We study a general formulation of regularized Wasserstein barycenters th...
This paper studies the infinite-width limit of deep linear neural networ...
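To fix notation (a standard parameterization, not quoted from the paper), a deep linear network of depth L computes
\[
f_{\theta}(x) = W_L W_{L-1} \cdots W_1 x, \qquad \theta = (W_1, \dots, W_L),
\]
so the end-to-end map is linear in x while the training dynamics in \theta are nonconvex, which is what makes the width limit nontrivial.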
We consider the idealized setting of gradient flow on the population ris...
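As a reminder of the idealization involved (standard definitions): gradient flow replaces discrete gradient steps with the continuous-time dynamics
\[
\frac{d\theta_t}{dt} = -\nabla R(\theta_t), \qquad R(\theta) = \mathbb{E}_{(x,y)\sim\rho}\big[\ell(f_{\theta}(x), y)\big],
\]
where the population risk R is the expected loss over the data distribution \rho rather than over a finite sample.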
We consider the problem of computing mixed Nash equilibria of two-player...
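For context, the standard formulation (assumed here, not quoted): a mixed Nash equilibrium of a two-player zero-sum game with payoff f is a pair of probability measures solving
\[
\min_{\mu \in \mathcal{P}(\mathcal{X})} \max_{\nu \in \mathcal{P}(\mathcal{Y})} \iint f(x, y) \, d\mu(x) \, d\nu(y);
\]
randomizing over pure strategies makes the problem convex-concave in (\mu, \nu) even when f itself is neither.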
Trajectory inference aims at recovering the dynamics of a population fro...
To theoretically understand the behavior of trained deep neural networks...
Many supervised machine learning methods are naturally cast as optimizat...
The squared Wasserstein distance is a natural quantity to compare probab...
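Recall the standard definition (not specific to this paper):
\[
W_2^2(\mu, \nu) = \min_{\gamma \in \Pi(\mu, \nu)} \int \|x - y\|^2 \, d\gamma(x, y),
\]
where \Pi(\mu, \nu) is the set of couplings of \mu and \nu, i.e. joint measures with these two marginals.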
The idea of slicing divergences has proven successful when co...
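The canonical instance (the abstract may cover a broader family) is the sliced Wasserstein distance, which averages one-dimensional transport costs over projection directions:
\[
\mathrm{SW}_p^p(\mu, \nu) = \int_{\mathbb{S}^{d-1}} W_p^p\big(\theta^{*}_{\#}\mu, \theta^{*}_{\#}\nu\big) \, d\sigma(\theta),
\]
where \theta^{*}(x) = \langle \theta, x \rangle, \# denotes the pushforward, and \sigma is the uniform measure on the sphere; each one-dimensional distance has a closed form via quantile functions.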
Neural networks trained to minimize the logistic (a.k.a. cross-entropy) ...
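For reference, the loss in question (standard definition) is
\[
\ell\big(y, f(x)\big) = \log\big(1 + e^{-y f(x)}\big), \qquad y \in \{-1, +1\};
\]
on separable data it has no finite minimizer, so the asymptotic direction of the iterates, i.e. the implicit bias of the training method, becomes the natural object of study.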
Minimizing a convex function of a measure with a sparsity-inducing penal...
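A generic template for such problems (our illustrative form; the paper may consider variants):
\[
\min_{\mu \in \mathcal{M}(\mathcal{X})} \; F(\mu) + \lambda \, |\mu|(\mathcal{X}),
\]
where F is convex on the space of signed measures \mathcal{M}(\mathcal{X}) and the total variation norm |\mu|(\mathcal{X}) plays the role of the \ell_1 penalty, favoring solutions supported on finitely many points.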
With the huge progress in data acquisition realized in the past decades and a...
In a series of recent theoretical works, it has been shown that strongly...
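One mechanism behind such results, in the "lazy training" reading (our gloss, stated as an assumption): when the parameters barely move from their initialization \theta_0, the model is well approximated by its linearization,
\[
f(\theta) \approx f(\theta_0) + Df(\theta_0)(\theta - \theta_0),
\]
so training effectively solves a linear (kernel) regression problem, for which global convergence guarantees are classical.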
Optimal transport (OT) and maximum mean discrepancies (MMD) are now rout...
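For reference, the second of these is defined in closed form from a positive-definite kernel k (a standard definition, not quoted from the paper):
\[
\mathrm{MMD}_k^2(\mu, \nu) = \mathbb{E}_{x,x'\sim\mu}[k(x,x')] - 2\,\mathbb{E}_{x\sim\mu,\, y\sim\nu}[k(x,y)] + \mathbb{E}_{y,y'\sim\nu}[k(y,y')],
\]
which can be estimated from samples with O(n^2) kernel evaluations, whereas OT requires solving a linear program or a regularized surrogate.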
Many tasks in machine learning and signal processing can be solved by mi...
This article introduces a new notion of optimal transport (OT) between t...