On Calibration of Ensemble-Based Credal Predictors

by Thomas Mortier, et al.

In recent years, several classification methods that aim to quantify epistemic uncertainty have been proposed, producing predictions either in the form of second-order distributions or in the form of sets of probability distributions. In this work, we focus on the latter, also called credal predictors, and address the question of how to evaluate them: what does it mean for a credal predictor to represent epistemic uncertainty in a faithful manner? To answer this question, we refer to the notion of calibration of probabilistic predictors and extend it to credal predictors. Broadly speaking, we call a credal predictor calibrated if it returns sets that cover the true conditional probability distribution. To verify this property for the important case of ensemble-based credal predictors, we propose a novel nonparametric calibration test that generalizes an existing test for probabilistic predictors to the case of credal predictors. Using this test, we empirically show that credal predictors based on deep neural networks are often not well calibrated.
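The coverage notion in the abstract can be illustrated with a minimal sketch. In the binary case, an ensemble's credal set reduces to the interval spanned by the members' predicted probabilities, and calibration asks whether this interval covers the true conditional probability. The simulation below is purely illustrative and is not the paper's test: the noise model, ensemble size, and helper names are assumptions made for the example.

```python
import random

def credal_interval(estimates):
    # For binary classification, the credal set induced by an ensemble
    # can be summarized by the interval spanned by the members'
    # predicted probabilities for the positive class.
    return min(estimates), max(estimates)

def covers(interval, p_true):
    # Coverage: does the credal set contain the true conditional probability?
    lo, hi = interval
    return lo <= p_true <= hi

random.seed(0)
n_instances, n_members = 1000, 10
covered = 0
for _ in range(n_instances):
    p_true = random.random()  # hypothetical true conditional P(y=1 | x)
    # Each ensemble member returns a noisy estimate of p_true
    # (Gaussian perturbation, clipped to [0, 1] -- an assumption).
    ests = [min(1.0, max(0.0, p_true + random.gauss(0, 0.1)))
            for _ in range(n_members)]
    if covers(credal_interval(ests), p_true):
        covered += 1

coverage = covered / n_instances
print(f"empirical coverage: {coverage:.3f}")
```

Under this toy noise model the interval almost always covers the true probability; a miscalibrated ensemble (e.g., one whose members share a systematic bias) would show coverage well below the nominal level, which is the kind of failure the paper's nonparametric test is designed to detect.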

