Overpruning in Variational Bayesian Neural Networks

01/18/2018
by Brian Trippe, et al.

The motivations for using variational inference (VI) in neural networks differ significantly from those in latent variable models. This has a counter-intuitive consequence: more expressive variational approximations can yield significantly worse predictions than those from less expressive families. In this work we make two contributions. First, we identify a cause of this performance gap: variational over-pruning. Second, we introduce a theoretically grounded explanation for this phenomenon. Our perspective sheds light on several related published results and provides intuition into the design of effective variational approximations for neural networks.
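To make the phenomenon concrete, below is a minimal, hypothetical sketch of mean-field VI in a one-hidden-layer Bayesian neural network, written in PyTorch. It is not the authors' implementation; the toy data, architecture, hyperparameters, and the signal-to-noise-ratio diagnostic at the end are all assumptions for illustration. The sketch shows the setting in which over-pruning can arise: the KL term in the negative ELBO can drive the posteriors of some hidden units' weights back toward the prior, effectively switching those units off.

```python
# A minimal sketch of mean-field VI in a Bayesian neural network (illustrative
# only, not the authors' code). Assumes a factorised Gaussian posterior
# q(w) = N(mu, sigma^2), a standard-normal prior, and the reparameterisation
# trick for gradients.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy regression data: y = sin(x) + noise (an assumption for illustration).
x = torch.linspace(-3, 3, 128).unsqueeze(1)
y = torch.sin(x) + 0.1 * torch.randn_like(x)

H = 50  # hidden units
# Variational parameters: a mean and an unconstrained scale per weight.
params = {
    "w1_mu": torch.randn(1, H) * 0.1, "w1_rho": torch.full((1, H), -3.0),
    "b1_mu": torch.zeros(H),          "b1_rho": torch.full((H,), -3.0),
    "w2_mu": torch.randn(H, 1) * 0.1, "w2_rho": torch.full((H, 1), -3.0),
}
for p in params.values():
    p.requires_grad_(True)

def sample(mu, rho):
    sigma = F.softplus(rho)  # map unconstrained rho to a positive std-dev
    return mu + sigma * torch.randn_like(mu), sigma

def kl_gaussian(mu, sigma):
    # KL( N(mu, sigma^2) || N(0, 1) ), summed over all weights.
    return (0.5 * (mu ** 2 + sigma ** 2 - 1) - torch.log(sigma)).sum()

opt = torch.optim.Adam(params.values(), lr=1e-2)
for step in range(2000):
    w1, s1 = sample(params["w1_mu"], params["w1_rho"])
    b1, sb = sample(params["b1_mu"], params["b1_rho"])
    w2, s2 = sample(params["w2_mu"], params["w2_rho"])
    pred = torch.tanh(x @ w1 + b1) @ w2
    nll = 0.5 * ((pred - y) ** 2 / 0.1 ** 2).sum()  # Gaussian likelihood
    kl = (kl_gaussian(params["w1_mu"], s1)
          + kl_gaussian(params["b1_mu"], sb)
          + kl_gaussian(params["w2_mu"], s2))
    loss = nll + kl  # negative ELBO: fit term plus KL regulariser
    opt.zero_grad()
    loss.backward()
    opt.step()

# Over-pruning shows up as output weights whose posterior has reverted to the
# prior: |mu| near 0 with sigma near 1, i.e. a signal-to-noise ratio near 0.
snr = params["w2_mu"].abs().squeeze() / F.softplus(params["w2_rho"]).squeeze()
print("hidden units with SNR < 0.1:", int((snr < 0.1).sum()), "of", H)
```

In this sketch, units whose output-weight posterior collapses onto the prior contribute only noise to predictions, so the network behaves as if it had fewer effective units; a more expressive approximation that lets the KL term silence units more cheaply can therefore fit the ELBO well while predicting worse.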

