An Effective Baseline for Robustness to Distributional Shift

05/15/2021
by Sunil Thulasidasan, et al.

Refraining from confident prediction when faced with categories of inputs different from those seen during training is an important requirement for the safe deployment of deep learning systems. While simple to state, this has been a particularly challenging problem in deep learning, where models often end up making overconfident predictions in such situations. In this work we present a simple but highly effective approach to out-of-distribution detection that uses the principle of abstention: when encountering a sample from an unseen class, the desired behavior is to abstain from predicting. Our approach uses a network with an extra abstention class, trained on a dataset augmented with an uncurated set containing a large number of out-of-distribution (OoD) samples that are assigned the label of the abstention class; the model is then trained to learn an effective discriminator between in-distribution and out-of-distribution samples. We compare this relatively simple approach against a wide variety of more complex methods proposed both for out-of-distribution detection and for uncertainty modeling in deep learning, and empirically demonstrate its effectiveness on a wide variety of benchmarks and deep architectures for image recognition and text classification, often outperforming existing approaches by significant margins. Given its simplicity and effectiveness, we propose that this approach be used as an additional baseline for future work in this domain.
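The recipe in the abstract can be sketched in miniature: turn a K-class problem into K + 1 classes, assign every sample from an uncurated OoD set the extra "abstain" label, and train with ordinary cross-entropy. The sketch below uses a linear softmax classifier on synthetic 2-D data purely for illustration; the data, model, and hyperparameters are assumptions for this toy setup, not the paper's (which uses deep networks on image and text benchmarks).

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_CLASSES = 2
ABSTAIN = NUM_CLASSES          # index of the extra abstention class

# Toy 2-D data: two in-distribution clusters, plus OoD blobs that stand in
# for the uncurated out-of-distribution set and are labeled ABSTAIN.
x_in0 = np.array([4.0, 0.0]) + rng.normal(scale=0.5, size=(60, 2))
x_in1 = np.array([-4.0, 0.0]) + rng.normal(scale=0.5, size=(60, 2))
x_ood = np.vstack([
    np.array([0.0, 8.0]) + rng.normal(scale=0.8, size=(60, 2)),
    np.array([0.0, -8.0]) + rng.normal(scale=0.8, size=(60, 2)),
])
X = np.vstack([x_in0, x_in1, x_ood])
y = np.concatenate([np.zeros(60), np.ones(60), np.full(120, ABSTAIN)]).astype(int)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# A linear classifier with NUM_CLASSES + 1 outputs, trained by full-batch
# gradient descent on the standard cross-entropy loss.
W = np.zeros((2, NUM_CLASSES + 1))
b = np.zeros(NUM_CLASSES + 1)
for _ in range(2000):
    p = softmax(X @ W + b)
    p[np.arange(len(y)), y] -= 1.0         # dL/dlogits for softmax cross-entropy
    g = p / len(y)
    W -= 0.05 * (X.T @ g)
    b -= 0.05 * g.sum(axis=0)

def predict(x):
    """Return a class index; ABSTAIN means the model refuses to predict."""
    return int(np.argmax(x @ W + b))

train_acc = np.mean([predict(xi) == yi for xi, yi in zip(X, y)])
```

At test time, OoD detection requires no extra machinery: an input is flagged as out-of-distribution exactly when the argmax lands on the abstention class, while in-distribution inputs receive one of the ordinary class labels.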


