Ensemble-based Uncertainty Quantification: Bayesian versus Credal Inference

07/21/2021
by Mohammad Hossein Shaker, et al.

The idea of distinguishing and quantifying two important types of uncertainty, often referred to as aleatoric and epistemic, has received increasing attention in machine learning research in recent years. In this paper, we consider ensemble-based approaches to uncertainty quantification. Distinguishing between different types of uncertainty-aware learning algorithms, we specifically focus on Bayesian methods and on approaches based on so-called credal sets, which naturally suggest themselves from an ensemble learning point of view. For both approaches, we address the question of how to quantify aleatoric and epistemic uncertainty. The effectiveness of the corresponding measures is evaluated and compared in an empirical study on classification with a reject option.
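To make the two families of measures concrete, here is a minimal sketch in NumPy, under the assumption that each ensemble member outputs a class-probability vector per instance; the function names are illustrative and do not come from the paper. Part (i) is the common entropy-based Bayesian decomposition, in which epistemic uncertainty is the mutual information (ensemble disagreement); part (ii) treats the ensemble predictions as an approximation of a credal set and reports the largest gap between upper and lower class probability as a rough epistemic indicator. The paper's exact credal measures may differ.

```python
import numpy as np

def bayesian_decomposition(probs):
    """Entropy-based decomposition for an ensemble/Bayesian predictor.

    probs: array of shape (n_members, n_classes), each row one member's
           predicted class distribution for a single instance.
    Returns (total, aleatoric, epistemic) uncertainty in nats.
    """
    eps = 1e-12
    mean_p = probs.mean(axis=0)
    total = -np.sum(mean_p * np.log(mean_p + eps))                     # entropy of the averaged prediction
    aleatoric = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))  # expected entropy of the members
    epistemic = total - aleatoric                                      # mutual information / disagreement
    return total, aleatoric, epistemic

def credal_width(probs):
    """Simple credal-set indicator: maximum per-class gap between the upper
    and lower probability over the set spanned by the ensemble members."""
    lower, upper = probs.min(axis=0), probs.max(axis=0)
    return float((upper - lower).max())

# Usage: three hypothetical ensemble members predicting over two classes.
probs = np.array([[0.9, 0.1], [0.6, 0.4], [0.8, 0.2]])
print(bayesian_decomposition(probs))   # total, aleatoric, epistemic
print(credal_width(probs))             # ~0.3
```

In a classification-with-reject-option evaluation, one would sort instances by such a score, reject the most uncertain fraction, and track accuracy on the retained instances; a more informative uncertainty measure yields a steeper accuracy-rejection curve.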

Related research

- Quantifying Epistemic Uncertainty in Deep Learning (10/23/2021): Uncertainty quantification is at the core of the reliability and robustn...
- Imprecise Bayesian Neural Networks (02/19/2023): Uncertainty quantification and robustness to distribution shifts are imp...
- Is the Volume of a Credal Set a Good Measure for Epistemic Uncertainty? (06/16/2023): Adequate uncertainty representation and quantification have become imper...
- Bayesian Deep Learning Hyperparameter Search for Robust Function Mapping to Polynomials with Noise (06/23/2021): Advances in neural architecture search, as well as explainability and in...
- Density-aware NeRF Ensembles: Quantifying Predictive Uncertainty in Neural Radiance Fields (09/19/2022): We show that ensembling effectively quantifies model uncertainty in Neur...
- Fully Bayesian VIB-DeepSSM (05/09/2023): Statistical shape modeling (SSM) enables population-based quantitative a...
