Probability Functional Descent: A Unifying Perspective on GANs, Variational Inference, and Reinforcement Learning

01/30/2019
by Casey Chu, et al.

The goal of this paper is to provide a unifying view of a wide range of problems of interest in machine learning by framing them as the minimization of functionals defined on the space of probability measures. In particular, we show that generative adversarial networks, variational inference, and actor-critic methods in reinforcement learning can all be seen through the lens of our framework. We then discuss a generic optimization algorithm for our formulation, called probability functional descent (PFD), and show how this algorithm recovers existing methods developed independently in the settings mentioned earlier.
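
Concretely, the core loop behind PFD can be sketched as follows. The paper's idea is to linearize the objective J(μ) around the current measure μ via its influence function Ψ_μ, so that J(ν) ≈ J(μ) + ∫ Ψ_μ d(ν − μ); a "critic" estimates Ψ_μ, and an "actor" then takes a descent step on that linearization. The snippet below is a minimal toy sketch, not the authors' code: it assumes the functional being minimized is J(μ) = KL(μ ‖ ρ) for a fixed Gaussian target ρ, approximates the influence function with a logistic-regression critic via the standard density-ratio trick, and uses hypothetical PyTorch module names (generator, critic).

    # Minimal toy instance of probability functional descent (PFD); a sketch,
    # not the authors' reference implementation. Assumptions: J(mu) = KL(mu || rho)
    # with rho a standard normal; mu_theta is the pushforward of Gaussian noise
    # through `generator`; the influence function Psi_mu(x) = log(dmu/drho)(x)
    # (up to an additive constant) is approximated by the logit of a
    # logistic-regression `critic`. All module names are hypothetical.
    import torch

    dim = 2

    def sample_rho(n):
        # Target distribution rho: a standard normal in R^dim.
        return torch.randn(n, dim)

    generator = torch.nn.Sequential(   # defines mu_theta by pushforward of noise
        torch.nn.Linear(dim, 64), torch.nn.ReLU(), torch.nn.Linear(64, dim))
    critic = torch.nn.Sequential(      # approximates the influence function Psi_mu
        torch.nn.Linear(dim, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    c_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
    bce = torch.nn.BCEWithLogitsLoss()

    for step in range(5000):
        x_mu = generator(torch.randn(128, dim))
        x_rho = sample_rho(128)

        # Critic step: logistic regression between samples of mu_theta and rho;
        # the optimal logit is log(dmu/drho), i.e. Psi_mu up to a constant.
        c_loss = bce(critic(x_mu.detach()), torch.ones(128, 1)) \
               + bce(critic(x_rho), torch.zeros(128, 1))
        c_opt.zero_grad(); c_loss.backward(); c_opt.step()

        # Actor step: descend the linearization E_{x ~ mu_theta}[Psi_mu(x)]
        # with the critic held fixed, updating only the generator.
        g_loss = critic(generator(torch.randn(128, dim))).mean()
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()

Roughly speaking, swapping in a different functional J (a GAN divergence, a variational free energy, an expected cost) changes only how the critic estimates Ψ_μ, while the actor step is unchanged; this is the sense in which the paper recovers discriminator/generator, variational inference, and actor-critic updates as special cases.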

Related research

11/03/2018
VIREL: A Variational Inference Framework for Reinforcement Learning
Applying probabilistic models to reinforcement learning (RL) has become ...

04/04/2020
The equivalence between Stein variational gradient descent and black-box variational inference
We formalize an equivalence between two popular methods for Bayesian inf...

10/06/2016
Connecting Generative Adversarial Networks and Actor-Critic Methods
Both generative adversarial networks (GAN) in unsupervised learning and ...

08/16/2022
Langevin Diffusion Variational Inference
Many methods that build powerful variational distributions based on unad...

02/01/2022
Tutorial on amortized optimization for learning to optimize over continuous domains
Optimization is a ubiquitous modeling tool that is often deployed in set...

04/20/2021
Outcome-Driven Reinforcement Learning via Variational Inference
While reinforcement learning algorithms provide automated acquisition of...

06/06/2018
Spectral Inference Networks: Unifying Spectral Methods With Deep Learning
We present Spectral Inference Networks, a framework for learning eigenfu...
