
Fast Approximation of the Sliced-Wasserstein Distance Using Concentration of Random Projections

by Kimia Nadjahi et al.

The Sliced-Wasserstein distance (SW) is increasingly used in machine learning applications as an alternative to the Wasserstein distance, as it offers significant computational and statistical benefits. Since it is defined as an expectation over random projections, SW is commonly approximated by Monte Carlo. We adopt a new perspective to approximate SW by making use of the concentration of measure phenomenon: under mild assumptions, one-dimensional projections of a high-dimensional random vector are approximately Gaussian. Based on this observation, we develop a simple deterministic approximation for SW. Our method does not require sampling random projections, and is therefore both accurate and easy to use compared to the usual Monte Carlo approximation. We derive non-asymptotic guarantees for our approach, and show that the approximation error goes to zero as the dimension increases, under a weak dependence condition on the data distribution. We validate our theoretical findings on synthetic datasets, and illustrate the proposed approximation on a generative modeling problem.
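For context, the usual Monte Carlo approximation mentioned in the abstract can be sketched as follows: sample directions uniformly on the unit sphere, project both samples onto each direction, and average the one-dimensional Wasserstein distances between the projections. This is a minimal NumPy sketch of that baseline (the function name and parameters are illustrative, not from the paper), assuming equal-size empirical samples so the 1-D distance reduces to comparing sorted projections.

```python
import numpy as np

def sliced_wasserstein_mc(X, Y, n_projections=100, p=2, rng=None):
    """Monte Carlo estimate of the Sliced-Wasserstein distance of order p
    between two empirical distributions (rows of X and Y, same sample size)."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    # Draw random directions uniformly on the unit sphere in R^d.
    theta = rng.normal(size=(n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both samples onto every direction: shape (n_samples, n_projections).
    X_proj = X @ theta.T
    Y_proj = Y @ theta.T
    # For equal-size 1-D empirical measures, W_p^p is the mean of
    # |sorted differences|^p along each projection.
    X_sorted = np.sort(X_proj, axis=0)
    Y_sorted = np.sort(Y_proj, axis=0)
    # Average W_p^p over projections, then take the p-th root.
    return np.mean(np.abs(X_sorted - Y_sorted) ** p) ** (1.0 / p)
```

The estimate's accuracy depends on `n_projections`; the paper's deterministic approximation avoids this sampling step altogether by exploiting the near-Gaussianity of one-dimensional projections in high dimension.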


Related research

Central limit theorem for the Sliced 1-Wasserstein distance and the max-Sliced 1-Wasserstein distance

The Wasserstein distance has been an attractive tool in many fields. But...

Augmented Sliced Wasserstein Distances

While theoretically appealing, the application of the Wasserstein distan...

Markovian Sliced Wasserstein Distances: Beyond Independent Projections

Sliced Wasserstein (SW) distance suffers from redundant projections due ...

Estimation of high dimensional Gamma convolutions through random projections

Multivariate generalized Gamma convolutions are distributions defined by...

Control Variate Sliced Wasserstein Estimators

The sliced Wasserstein (SW) distances between two probability measures a...

k-Sliced Mutual Information: A Quantitative Study of Scalability with Dimension

Sliced mutual information (SMI) is defined as an average of mutual infor...

Influence of sampling on the convergence rates of greedy algorithms for parameter-dependent random variables

The main focus of this article is to provide a mathematical study of the...