
Deterministic Neural Networks with Appropriate Inductive Biases Capture Epistemic and Aleatoric Uncertainty

by   Jishnu Mukhoti, et al.

We show that a single softmax neural net with minimal changes can beat the uncertainty predictions of Deep Ensembles and other more complex single-forward-pass uncertainty approaches. Softmax neural nets cannot capture epistemic uncertainty reliably because they extrapolate arbitrarily for OoD points and suffer from feature collapse. This results in arbitrary softmax entropies for OoD points, which can be high, low, or anything in between. We study why, and show that with the right inductive biases, softmax neural nets trained with maximum likelihood reliably capture epistemic uncertainty through the feature-space density. This density is obtained using Gaussian Discriminant Analysis, but it cannot disentangle uncertainties. We show that it is necessary to combine this density with the softmax entropy to disentangle aleatoric and epistemic uncertainty — crucial, e.g., for active learning. We evaluate the quality of the epistemic uncertainty on active learning and OoD detection, where we obtain a state-of-the-art AUROC of 0.98 on CIFAR-10 vs. SVHN.
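The two-part recipe from the abstract — a feature-space density via Gaussian Discriminant Analysis for epistemic uncertainty, plus softmax entropy for aleatoric uncertainty — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`fit_gda`, `feature_log_density`, `softmax_entropy`) are hypothetical, and it assumes features are penultimate-layer activations already extracted from a trained network.

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.special import softmax, logsumexp

def fit_gda(features, labels, n_classes, jitter=1e-6):
    """Fit one Gaussian per class (GDA) on feature vectors.

    `jitter` regularizes the covariance so it stays positive definite.
    """
    components, priors = [], []
    for c in range(n_classes):
        fc = features[labels == c]
        mu = fc.mean(axis=0)
        cov = np.cov(fc, rowvar=False) + jitter * np.eye(fc.shape[1])
        components.append(multivariate_normal(mu, cov))
        priors.append(len(fc) / len(features))
    return components, np.array(priors)

def feature_log_density(components, priors, features):
    """Log marginal feature density log q(z) = log sum_c pi_c N(z; mu_c, Sigma_c).

    Low density signals epistemic uncertainty (likely OoD input).
    """
    logps = np.stack(
        [np.log(p) + comp.logpdf(features)
         for comp, p in zip(components, priors)],
        axis=-1,
    )
    return logsumexp(logps, axis=-1)

def softmax_entropy(logits):
    """Predictive entropy of the softmax output.

    High entropy on in-distribution inputs signals aleatoric
    uncertainty (inherent class ambiguity).
    """
    probs = softmax(logits, axis=-1)
    return -(probs * np.log(probs + 1e-12)).sum(axis=-1)
```

Used together, a test point with low feature-space density is flagged as OoD regardless of its softmax entropy, while a point with high density but high entropy is an ambiguous in-distribution sample — the disentanglement the abstract describes as crucial for active learning.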


Epistemic Uncertainty Sampling

Various strategies for active learning have been proposed in the machine...

Normalizing Flow Ensembles for Rich Aleatoric and Epistemic Uncertainty Modeling

In this work, we demonstrate how to reliably estimate epistemic uncertai...

Understanding Softmax Confidence and Uncertainty

It is often remarked that neural networks fail to increase their uncerta...

Density-Softmax: Scalable and Distance-Aware Uncertainty Estimation under Distribution Shifts

Prevalent deep learning models suffer from significant over-confidence u...

Simple and Scalable Epistemic Uncertainty Estimation Using a Single Deep Deterministic Neural Network

We propose a method for training a deterministic deep model that can fin...

Credal nets under epistemic irrelevance

We present a new approach to credal nets, which are graphical models tha...

Know Your Limits: Monotonicity & Softmax Make Neural Classifiers Overconfident on OOD Data

A crucial requirement for reliable deployment of deep learning models fo...

Code Repositories


Code for Deterministic Neural Networks with Appropriate Inductive Biases Capture Epistemic and Aleatoric Uncertainty



Dirty-MNIST dataset introduced in "Deterministic Neural Networks with Appropriate Inductive Biases Capture Epistemic and Aleatoric Uncertainty"
