Variational Inference for Policy Gradient

02/21/2018
by Tianbing Xu, et al.

Inspired by the seminal work on Stein Variational Inference and Stein Variational Policy Gradient, we derive a method to generate samples from the variational posterior distribution over policy parameters by explicitly minimizing the KL divergence to the target distribution, in an amortized fashion. We then apply this variational inference technique to vanilla policy gradient, TRPO, and PPO with Bayesian neural network parameterizations for reinforcement learning problems.
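
To make the setup concrete, below is a minimal, self-contained sketch of policy optimization viewed as posterior inference, on a toy one-step bandit. This is not the paper's implementation: the mean-field Gaussian q(theta) stands in for the amortized sampler and the Bayesian neural network policy, and the toy reward, temperature tau, and prior scale are illustrative assumptions. It minimizes a Monte Carlo estimate of KL(q || p), where the target posterior p(theta) is proportional to exp(J(theta)/tau) times a Gaussian prior.

# Minimal sketch (assumed setup, not the paper's implementation): variational
# inference over a policy parameter on a toy one-step continuous bandit.
import torch

torch.manual_seed(0)

def reward(a):
    # Toy differentiable reward with optimum at a = 3 (illustrative assumption).
    return -(a - 3.0) ** 2

# Variational posterior q(theta) = N(mu, sigma^2) over the policy parameter.
mu = torch.zeros(1, requires_grad=True)
log_sigma = torch.zeros(1, requires_grad=True)

# Target posterior: p(theta) proportional to exp(J(theta)/tau) * N(theta; 0, prior_std^2).
tau, prior_std = 0.1, 10.0
opt = torch.optim.Adam([mu, log_sigma], lr=0.05)

for step in range(2000):
    sigma = log_sigma.exp()
    eps = torch.randn(64, 1)                    # reparameterized draws theta ~ q
    theta = mu + sigma * eps
    actions = theta + torch.randn_like(theta)   # policy pi_theta(a) = N(theta, 1)
    J = reward(actions)                          # one-sample Monte Carlo estimate of J(theta) per draw

    log_q = torch.distributions.Normal(mu, sigma).log_prob(theta)
    log_prior = torch.distributions.Normal(0.0, prior_std).log_prob(theta)
    # KL(q || p) up to an additive constant (the log normalizer of p):
    # E_q[ log q(theta) - J(theta)/tau - log prior(theta) ]
    kl = (log_q - J / tau - log_prior).mean()

    opt.zero_grad()
    kl.backward()
    opt.step()

print(f"posterior over theta: mean={mu.item():.2f}, std={log_sigma.exp().item():.2f}")

Running the sketch drives the posterior mean toward the reward-maximizing parameter (theta near 3 here), with the temperature tau controlling how sharply the target posterior concentrates around the optimum.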


research  05/27/2019
Policy Search by Target Distribution Learning for Continuous Control
We observe that several existing policy gradient methods (such as vanill...

research  11/19/2021
Policy Gradient Approach to Compilation of Variational Quantum Circuits
We propose a method for finding approximate compilations of quantum circ...

research  07/03/2023
Monte Carlo Policy Gradient Method for Binary Optimization
Binary optimization has a wide range of applications in combinatorial op...

research  05/18/2023
Deep Metric Tensor Regularized Policy Gradient
Policy gradient algorithms are an important family of deep reinforcement...

research  07/20/2017
Learning to Draw Samples with Amortized Stein Variational Gradient Descent
We propose a simple algorithm to train stochastic neural networks to dra...

research  08/12/2021
A functional mirror ascent view of policy gradient methods with function approximation
We use functional mirror ascent to propose a general framework (referred...

research  03/28/2018
Stochastic Variational Inference with Gradient Linearization
Variational inference has experienced a recent surge in popularity owing...
