Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks

02/24/2020
by Agustinus Kristiadi et al.

The point estimates of ReLU classification networks, arguably the most widely used neural network architecture, have been shown to yield arbitrarily high confidence far away from the training data. This architecture, in conjunction with maximum a posteriori (MAP) estimation, is thus neither calibrated nor robust. Approximate Bayesian inference has been empirically demonstrated to improve predictive uncertainty in neural networks, but the theoretical analysis of such Bayesian approximations is limited. We theoretically analyze approximate Gaussian posterior distributions on the weights of ReLU networks and show that they fix the overconfidence problem. Furthermore, we show that even a simplistic, and hence cheap, Bayesian approximation also fixes these issues. This indicates that a sufficient condition for calibrated uncertainty on a ReLU network is "to be a bit Bayesian". These theoretical results validate the use of last-layer Bayesian approximations and motivate a range of fidelity-cost trade-offs. We further validate these findings empirically via various standard experiments using common deep ReLU networks and Laplace approximations.
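The "be a bit Bayesian" idea can be sketched on a toy problem: fit a MAP logistic "last layer", place a Gaussian (Laplace) approximation on its weights via the Hessian at the MAP, and compare predictive confidence far from the training data. The following is a minimal illustrative sketch only (toy 2D data, the MacKay probit approximation for the predictive integral; not the authors' exact setup or code):

```python
# Minimal sketch: MAP vs. Laplace-approximated predictive for a
# logistic "last layer" on toy 2D data. All names here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two Gaussian blobs near the origin as training data.
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

# MAP estimate of the weights by gradient descent with a Gaussian
# prior of precision tau (i.e., L2 regularization).
tau = 1.0
w = np.zeros(2)
for _ in range(500):
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) + tau * w
    w -= 0.1 * grad / len(y)

# Laplace approximation: posterior covariance = inverse Hessian at the MAP.
p = sigmoid(X @ w)
H = (X * (p * (1 - p))[:, None]).T @ X + tau * np.eye(2)
Sigma = np.linalg.inv(H)

def map_confidence(x):
    # Point-estimate predictive: plug the MAP weights into the sigmoid.
    return sigmoid(x @ w)

def laplace_confidence(x):
    # Probit (MacKay) approximation to the Gaussian predictive integral:
    # the weight variance inflates the denominator, damping the logit.
    mu, var = x @ w, x @ Sigma @ x
    return sigmoid(mu / np.sqrt(1.0 + np.pi * var / 8.0))

x_far = np.array([100.0, 100.0])  # far from the training data
print("MAP:", map_confidence(x_far), "Laplace:", laplace_confidence(x_far))
```

Far from the data, the MAP logit grows without bound and the sigmoid saturates at 1, while under the Laplace approximation the predictive variance grows quadratically in the distance, so the damped logit, and hence the confidence, stays bounded away from 1. This mirrors the paper's qualitative claim, though the actual results concern deep ReLU networks rather than this toy model.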


Related research

10/06/2020
Fixing Asymptotic Uncertainty of Bayesian Neural Networks with Infinite ReLU Features
Approximate Bayesian methods can mitigate overconfidence in ReLU network...

09/02/2019
Pathologies of Factorised Gaussian and MC Dropout Posteriors in Bayesian Neural Networks
Neural networks provide state-of-the-art performance on a variety of tas...

05/10/2021
Deep Neural Networks as Point Estimates for Deep Gaussian Processes
Deep Gaussian processes (DGPs) have struggled for relevance in applicati...

05/01/2019
LS-SVR as a Bayesian RBF network
We show the theoretical equivalence between the Least Squares Support Ve...

09/09/2019
Optimal Function Approximation with Relu Neural Networks
We consider in this paper the optimal approximations of convex univariat...

12/13/2018
Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem
Classifiers used in the wild, in particular for safety-critical systems,...

06/10/2018
Building Bayesian Neural Networks with Blocks: On Structure, Interpretability and Uncertainty
We provide simple schemes to build Bayesian Neural Networks (BNNs), bloc...
