Variational Inference with Gaussian Score Matching

by Chirag Modi, et al.

Variational inference (VI) is a method for approximating the computationally intractable posterior distributions that arise in Bayesian statistics. Typically, VI fits a simple parametric distribution to the target posterior by minimizing an appropriate objective, such as the evidence lower bound (ELBO). In this work, we present a new approach to VI based on the principle of score matching: if two distributions are equal, then their score functions (i.e., the gradients of their log densities) are equal at every point of their shared support. With this principle, we develop score-matching VI, an iterative algorithm that seeks to match the scores of the variational approximation and the exact posterior. At each iteration, score-matching VI solves an inner optimization that minimally adjusts the current variational estimate to match the scores at a newly sampled value of the latent variables.

We show that when the variational family is Gaussian, this inner optimization enjoys a closed-form solution; we call the resulting method Gaussian score matching VI (GSM-VI). GSM-VI is also a “black box” variational algorithm in that it requires only a differentiable joint distribution, so it applies to a wide class of models. We compare GSM-VI to black box variational inference (BBVI), which has similar requirements but instead optimizes the ELBO. We study how GSM-VI behaves as a function of the problem dimensionality, the condition number of the target covariance matrix (when the target is Gaussian), and the degree of mismatch between the approximating and exact posterior distributions. We also study GSM-VI on a collection of real-world Bayesian inference problems from the posteriorDB database of datasets and models. In all of our studies we find that GSM-VI is faster than BBVI without sacrificing accuracy: it requires 10-100x fewer gradient evaluations to obtain a comparable quality of approximation.
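The score-matching principle behind GSM-VI can be made concrete. For a Gaussian q = N(mu, Sigma), the score is ∇ log q(z) = -Sigma⁻¹(z - mu), which is linear in z, so matching the scores of q to a target's scores at a handful of sampled points reduces to a linear least-squares problem in the precision matrix and mean. The sketch below is only an illustration of this principle, not the paper's sequential closed-form update: it uses a Gaussian target with an analytic score, an arbitrary sample count, and a batch least-squares solve, all of which are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2

# Illustrative target: a Gaussian N(m, S), whose score is analytic:
#   grad log p(z) = -S^{-1} (z - m)
m = np.array([1.0, -2.0])
S = np.array([[2.0, 0.5], [0.5, 1.0]])
P_true = np.linalg.inv(S)
score_p = lambda z: -P_true @ (z - m)

# A Gaussian q = N(mu, Sigma) has score -Sigma^{-1}(z - mu) = -(P z - b),
# with precision P = Sigma^{-1} and b = P mu. Matching scores at sampled
# points z_i gives the linear system  P z_i - b = -score_p(z_i),
# which we solve for (P, b) by least squares.
zs = rng.normal(size=(10, d))
rows, rhs = [], []
for z in zs:
    s = score_p(z)
    for k in range(d):
        row = np.zeros(d * d + d)
        row[k * d:(k + 1) * d] = z   # coefficients of row P[k, :]
        row[d * d + k] = -1.0        # coefficient of b[k]
        rows.append(row)
        rhs.append(-s[k])
x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)

P_hat = x[:d * d].reshape(d, d)          # recovered precision
mu_hat = np.linalg.solve(P_hat, x[d * d:])  # recovered mean mu = P^{-1} b
```

Because the target here is itself Gaussian, the score equations are exactly consistent and the fit recovers the target's precision and mean exactly; for a non-Gaussian target, the same equations yield a Gaussian whose scores best match the target's at the sampled points.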


