Know Your Limits: Monotonicity and Softmax Make Neural Classifiers Overconfident on OOD Data

12/09/2020
by Dennis Ulmer, et al.

A crucial requirement for the reliable deployment of deep learning models in safety-critical applications is the ability to identify out-of-distribution (OOD) data points: samples that differ from the training data and on which a model might underperform. Previous work has attempted to tackle this problem using uncertainty estimation techniques. However, there is empirical evidence that a large family of these techniques does not detect OOD reliably in classification tasks. This paper puts forward a theoretical explanation for those experimental findings. We prove that such techniques cannot reliably identify OOD samples in a classification setting, provided the models satisfy weak assumptions about the monotonicity of feature values and the resulting class probabilities. The result stems from the interplay between the saturating nature of activation functions like sigmoid or softmax and the most widely used uncertainty metrics.
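The saturation effect the abstract describes can be illustrated with a toy example (a sketch, not the paper's construction): if an input moves away from the training data along a direction in which the logits grow monotonically, the softmax output saturates, so common uncertainty metrics such as the maximum class probability and the predictive entropy report *increasing* confidence on ever more extreme inputs. The specific logit values below are illustrative.

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability before exponentiating.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Toy logits of a 3-class classifier. Multiplying them by a growing
# scale mimics an input moving further out of distribution along a
# direction in which every logit changes monotonically.
logits = np.array([2.0, 1.0, 0.5])

for scale in [1, 5, 25]:
    p = softmax(scale * logits)
    max_prob = p.max()                # "confidence" metric
    entropy = -(p * np.log(p)).sum()  # predictive entropy metric
    print(f"scale={scale:3d}  max_prob={max_prob:.4f}  entropy={entropy:.4f}")
```

As the scale grows, the maximum probability approaches 1 and the entropy approaches 0, even though the scaled inputs are increasingly unlike anything seen in training — exactly the overconfidence-on-OOD failure mode the paper analyzes.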
