Can You Trust Your Model's Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift

06/06/2019
by Yaniv Ovadia, et al.

Modern machine learning methods, including deep learning, have achieved great success in predictive accuracy for supervised learning tasks, but may still fall short in giving useful estimates of their predictive uncertainty. Quantifying uncertainty is especially critical in real-world settings, which often involve input distributions that are shifted from the training distribution due to a variety of factors including sample bias and non-stationarity. In such settings, well-calibrated uncertainty estimates convey information about when a model's output should (or should not) be trusted. Many probabilistic deep learning methods, including Bayesian and non-Bayesian methods, have been proposed in the literature for quantifying predictive uncertainty, but to our knowledge there has not previously been a rigorous large-scale empirical comparison of these methods under dataset shift. We present a large-scale benchmark of existing state-of-the-art methods on classification problems and investigate the effect of dataset shift on accuracy and calibration. We find that traditional post-hoc calibration does indeed fall short, as do several other previous methods. However, some methods that marginalize over models give surprisingly strong results across a broad spectrum of tasks.
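As a concrete illustration of the two quantities the abstract refers to, calibration and marginalization over models, the short Python sketch below computes the expected calibration error (ECE) of a classifier's predicted probabilities and averages the predictive distributions of an ensemble. This is an illustrative sketch, not the paper's released code; the array shapes, function names, and the choice of 10 bins are assumptions made here for clarity.

import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    # ECE: weighted average of |accuracy - confidence| over confidence bins.
    # probs: (N, K) predicted class probabilities; labels: (N,) integer labels.
    confidences = probs.max(axis=1)
    predictions = probs.argmax(axis=1)
    accuracies = (predictions == labels).astype(float)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(accuracies[in_bin].mean()
                                       - confidences[in_bin].mean())
    return ece

def ensemble_predict(member_probs):
    # "Marginalize over models": average per-member predictive distributions.
    # member_probs: (M, N, K) class probabilities from M ensemble members.
    return member_probs.mean(axis=0)

Tracking accuracy and ECE of the averaged ensemble prediction against a single model as shift intensity increases is one way to check the abstract's observation that marginalizing over models tends to remain better calibrated.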


research
07/23/2021

Estimating Predictive Uncertainty Under Program Data Distribution Shift

Deep learning (DL) techniques have achieved great success in predictive ...
research
11/07/2021

Uncertainty Calibration for Ensemble-Based Debiasing Methods

Ensemble-based debiasing methods have been shown effective in mitigating...
research
05/23/2019

Leveraging Uncertainty in Deep Learning for Selective Classification

The wide and rapid adoption of deep learning by practitioners brought un...
research
10/14/2020

Exploring the Uncertainty Properties of Neural Networks' Implicit Priors in the Infinite-Width Limit

Modern deep learning models have achieved great success in predictive ac...
research
04/17/2023

On Uncertainty Calibration and Selective Generation in Probabilistic Neural Summarization: A Benchmark Study

Modern deep models for summarization attain impressive benchmark perfor...
research
06/19/2020

Evaluating Prediction-Time Batch Normalization for Robustness under Covariate Shift

Covariate shift has been shown to sharply degrade both predictive accura...
research
12/22/2019

A Systematic Comparison of Bayesian Deep Learning Robustness in Diabetic Retinopathy Tasks

Evaluation of Bayesian deep learning (BDL) methods is challenging. We of...
