Neural Implicit Manifold Learning for Topology-Aware Generative Modelling

06/22/2022
by Brendan Leigh Ross, et al.

Natural data observed in ℝ^n is often constrained to an m-dimensional manifold ℳ, where m < n. Current generative models represent this manifold by mapping an m-dimensional latent variable through a neural network f_θ: ℝ^m → ℝ^n. Such procedures, which we call pushforward models, carry a straightforward limitation: manifolds cannot in general be represented with a single parameterization, so attempts to do so result in either computational instability or an inability to learn probability densities within the manifold. To remedy this problem, we propose to model ℳ as a neural implicit manifold: the set of zeros of a neural network. To learn the data distribution within ℳ, we introduce constrained energy-based models, which use a constrained variant of Langevin dynamics to train and sample within the learned manifold. The resulting model can be manipulated with an arithmetic of manifolds, which allows practitioners to take unions and intersections of model manifolds. In experiments on synthetic and natural data, we show that constrained EBMs can learn manifold-supported distributions with complex topologies more accurately than pushforward models.
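The following is a minimal, illustrative sketch of these ideas, not the paper's implementation. It assumes an implicit-manifold network g: ℝ^n → ℝ^(n−m) whose zero set defines the manifold, an energy network E on ℝ^n, and a simple projection-based approximation of constrained Langevin sampling (gradient descent on ‖g(x)‖² after each unconstrained Langevin step). The names g, E, and the projection routine are hypothetical.

```python
import torch

# Hypothetical networks: g maps R^n -> R^(n-m); the learned manifold is {x : g(x) = 0}.
# E maps R^n -> R and plays the role of the constrained EBM's energy on the manifold.
n, m = 3, 2
g = torch.nn.Sequential(torch.nn.Linear(n, 64), torch.nn.Tanh(), torch.nn.Linear(64, n - m))
E = torch.nn.Sequential(torch.nn.Linear(n, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))

def project_to_manifold(x, steps=10, lr=0.1):
    """Pull x back toward the zero set of g by gradient descent on ||g(x)||^2."""
    x = x.detach().requires_grad_(True)
    for _ in range(steps):
        loss = g(x).pow(2).sum()
        (grad,) = torch.autograd.grad(loss, x)
        x = (x - lr * grad).detach().requires_grad_(True)
    return x.detach()

def constrained_langevin_step(x, step_size=1e-2):
    """One Langevin update of the energy E, followed by re-projection onto {g = 0}."""
    x = x.detach().requires_grad_(True)
    energy = E(x).sum()
    (grad,) = torch.autograd.grad(energy, x)
    x = x - step_size * grad + (2 * step_size) ** 0.5 * torch.randn_like(x)
    return project_to_manifold(x)

# Usage: start from points pulled onto the manifold and run a short sampling chain.
x = project_to_manifold(torch.randn(8, n))
for _ in range(100):
    x = constrained_langevin_step(x)
```

In this sketch, the projection step stands in for the constrained dynamics described in the abstract; the authors' actual method keeps the chain on the learned manifold rather than correcting it after the fact.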
