Single Layer Predictive Normalized Maximum Likelihood for Out-of-Distribution Detection

10/18/2021
by Koby Bibas, et al.

Detecting out-of-distribution (OOD) samples is vital for developing machine-learning-based models for safety-critical systems. Common approaches to OOD detection assume access to some OOD samples during training, which may not be available in real-life scenarios. Instead, we utilize the predictive normalized maximum likelihood (pNML) learner, in which no assumptions are made about the tested input. We derive an explicit expression for the pNML and its generalization error, denoted the regret, for a single-layer neural network (NN). We show that this learner generalizes well when (i) the test vector resides in a subspace spanned by the eigenvectors associated with the large eigenvalues of the empirical correlation matrix of the training data, or (ii) the test sample is far from the decision boundary. Furthermore, we describe how to efficiently apply the derived pNML regret to any pretrained deep NN by employing the explicit pNML for the last layer, followed by the softmax function. Applying the derived regret to a deep NN requires neither additional tunable parameters nor extra data. We extensively evaluate our approach on 74 OOD detection benchmarks using DenseNet-100, ResNet-34, and WideResNet-40 models trained with CIFAR-100, CIFAR-10, SVHN, and ImageNet-30, showing a significant improvement of up to 15.6% over recent leading methods.
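
To make the subspace intuition concrete, here is a minimal sketch of a pNML-style OOD score computed over last-layer features. It uses the linear-regression pNML regret, log(1 + x^T (X^T X)^+ x), as an illustrative stand-in for the paper's exact single-layer expression (which additionally involves the softmax probabilities); the function names and the eps parameter below are hypothetical, not from the paper.

```python
import numpy as np

def fit_eigenbasis(train_feats: np.ndarray):
    """Eigendecomposition of the empirical correlation matrix X^T X
    of the training-set last-layer features (rows of train_feats)."""
    corr = train_feats.T @ train_feats
    eigvals, eigvecs = np.linalg.eigh(corr)  # eigenvalues in ascending order
    return eigvals, eigvecs

def regret_score(x: np.ndarray, eigvals: np.ndarray, eigvecs: np.ndarray,
                 eps: float = 1e-12) -> float:
    """Illustrative pNML-style OOD score: log(1 + x^T (X^T X)^+ x),
    with a clamped (regularized) inverse of the eigenvalues.
    Small when x lies in the span of large-eigenvalue eigenvectors;
    large when x leans on small-eigenvalue directions (likely OOD)."""
    coords = eigvecs.T @ x                              # x in the eigenbasis
    quad = float(np.sum(coords**2 / np.maximum(eigvals, eps)))
    return float(np.log1p(quad))

# Hypothetical usage, with embed() denoting a pretrained network's
# last-layer feature extractor:
#   eigvals, eigvecs = fit_eigenbasis(embed(train_images))
#   score = regret_score(embed(test_image), eigvals, eigvecs)
# Flag the sample as OOD when score exceeds a validation-chosen threshold.
```

Note that this sketch only captures the correlation-subspace part of the computation; the paper's actual procedure applies the explicit pNML to the last layer followed by the softmax function, as described in the abstract.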

