
Taming Hyperparameter Tuning in Continuous Normalizing Flows Using the JKO Scheme

by Alexander Vidal, et al.

A normalizing flow (NF) is a mapping that transforms a chosen probability distribution to a normal distribution. Such flows are a common technique used for data generation and density estimation in machine learning and data science. The density estimate obtained with an NF requires a change of variables formula that involves the computation of the Jacobian determinant of the NF transformation. In order to tractably compute this determinant, continuous normalizing flows (CNF) estimate the mapping and its Jacobian determinant using a neural ODE. Optimal transport (OT) theory has been successfully used to assist in finding CNFs by formulating them as OT problems with a soft penalty for enforcing the standard normal distribution as a target measure. A drawback of OT-based CNFs is the addition of a hyperparameter, α, that controls the strength of the soft penalty and requires significant tuning. We present JKO-Flow, an algorithm to solve OT-based CNFs without the need to tune α. This is achieved by integrating the OT CNF framework into a Wasserstein gradient flow framework, also known as the JKO scheme. Instead of tuning α, we repeatedly solve the optimization problem for a fixed α, effectively performing a JKO update with time step α. Hence, we obtain a "divide and conquer" algorithm by repeatedly solving simpler problems instead of solving a potentially harder problem with a large α.
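The JKO iteration described above can be illustrated on a toy problem. The following sketch is not the paper's algorithm (JKO-Flow solves an OT-regularized CNF with a neural ODE); it is a minimal 1D-Gaussian analogue, where each JKO step minimizes the KL divergence to N(0, 1) plus a Wasserstein-2 proximal term with a fixed step size α, using the closed-form expressions KL(N(m, s²) ‖ N(0, 1)) = (m² + s² − 1)/2 − ln s and W₂²(N(m′, s′²), N(m, s²)) = (m′ − m)² + (s′ − s)². The function names and step sizes are illustrative choices, not taken from the paper.

```python
import numpy as np

def jko_step(m, s, alpha, lr=0.05, iters=2000):
    """One JKO update for a 1D Gaussian N(m, s^2):
    minimize  KL(N(m', s'^2) || N(0, 1)) + W2^2 / (2 * alpha)
    over (m', s') by plain gradient descent."""
    mp, sp = m, s
    for _ in range(iters):
        # gradient of the KL term: d/dm' = m',  d/ds' = s' - 1/s'
        g_m = mp
        g_s = sp - 1.0 / sp
        # gradient of the proximal term W2^2 / (2 * alpha)
        g_m += (mp - m) / alpha
        g_s += (sp - s) / alpha
        mp -= lr * g_m
        sp -= lr * g_s
    return mp, sp

# Start far from the standard normal; use one fixed alpha throughout.
m, s = 3.0, 0.5
alpha = 0.5
for _ in range(20):           # "divide and conquer": repeat the same small problem
    m, s = jko_step(m, s, alpha)
print(m, s)                   # both parameters approach 0 and 1
```

Each outer iteration is a cheap proximal problem with the same fixed α, and the sequence of minimizers converges to the target N(0, 1); this mirrors how JKO-Flow replaces tuning a single large penalty weight with repeated small-step solves.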
