Alpha-Divergences in Variational Dropout

11/12/2017
by Bogdan Mazoure, et al.

We investigate the use of alternative divergences to the Kullback-Leibler (KL) divergence in variational inference (VI), building on Variational Dropout [kingma2015]. Stochastic gradient variational Bayes (SGVB) [aevb] is a general framework for estimating the evidence lower bound (ELBO) in variational Bayes. In this work, we extend the SGVB estimator with alpha-divergences, a family of alternatives to the KL objective of VI. Gaussian dropout can be seen as a local reparametrization trick applied to the SGVB objective, and we extend Variational Dropout to use alpha-divergences for variational inference. Our experiments compare α-divergence variational dropout against standard variational dropout with correlated and uncorrelated weight noise. We show that the α-divergence with α → 1 (i.e., the KL divergence) remains a good choice for variational inference, despite the effective use of alpha-divergences for dropout VI reported in [Li17]: α → 1 can yield the lowest training error and optimizes a good lower bound on the evidence among all values of the parameter α ∈ [0, ∞).
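The alpha-divergence objective described above can be estimated by Monte Carlo as a variational Rényi-style bound: average the importance weights p(x, θ)/q(θ) raised to the power 1−α over samples θ ~ q, then rescale by 1/(1−α). As α → 1 this reduces to the standard ELBO (the KL objective). The sketch below is illustrative, not the paper's implementation; `log_p` and `log_q` are assumed to be precomputed log joint and log variational densities at posterior samples.

```python
import numpy as np

def vr_bound(log_p, log_q, alpha):
    """Monte Carlo estimate of an alpha-divergence variational bound.

    log_p  : log p(x, theta_k) for K samples theta_k ~ q  (assumed given)
    log_q  : log q(theta_k) for the same samples
    alpha  : divergence parameter in [0, inf); alpha -> 1 recovers the ELBO.
    """
    log_w = np.asarray(log_p) - np.asarray(log_q)  # log importance weights
    if np.isclose(alpha, 1.0):
        # Limit alpha -> 1: the usual ELBO estimator E_q[log p - log q]
        return log_w.mean()
    s = (1.0 - alpha) * log_w
    # log-mean-exp for numerical stability before rescaling by 1/(1 - alpha)
    m = s.max()
    return (m + np.log(np.mean(np.exp(s - m)))) / (1.0 - alpha)
```

Note the design choice of branching at α = 1: the general formula has a removable singularity there, and taking the limit analytically avoids dividing by zero while matching the KL-based ELBO exactly.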


