Calibrated Selective Classification

08/25/2022
by Adam Fisch, et al.

Selective classification allows models to abstain from making predictions (e.g., say "I don't know") when in doubt in order to obtain better effective accuracy. While typical selective models can be effective at producing more accurate predictions on average, they may still allow for wrong predictions that have high confidence, or skip correct predictions that have low confidence. Providing calibrated uncertainty estimates alongside predictions – probabilities that correspond to true frequencies – can be as important as having predictions that are simply accurate on average. However, uncertainty estimates can be unreliable for certain inputs. In this paper, we develop a new approach to selective classification in which we reject examples with "uncertain" uncertainties. By doing so, we aim to make predictions with well-calibrated uncertainty estimates over the distribution of accepted examples, a property we call selective calibration. We present a framework for learning selectively calibrated models, where a separate selector network is trained to improve the selective calibration error of a given base model. In particular, our work focuses on achieving robust calibration, where the model is intended to be tested on out-of-domain data. We achieve this through a training strategy inspired by distributionally robust optimization, in which we apply simulated input perturbations to the known, in-domain training data. We demonstrate the empirical effectiveness of our approach on multiple image classification and lung cancer risk assessment tasks.

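To make the notion of selective calibration concrete, the sketch below computes a binned expected calibration error restricted to the examples a selector accepts. This is a minimal illustration, not the paper's exact estimator: the function name `selective_ece`, the equal-width binning scheme, and the hard 0/1 `accept` mask are assumptions made for the example.

```python
import numpy as np

def selective_ece(confidences, correct, accept, n_bins=15):
    """Binned expected calibration error over accepted examples only.

    confidences: top-class probabilities from the base model, shape (N,).
    correct:     1 if the base model's prediction was right, else 0, shape (N,).
    accept:      hard 0/1 decisions from the selector, shape (N,).
    """
    keep = np.asarray(accept, dtype=bool)
    conf = np.asarray(confidences, dtype=float)[keep]
    corr = np.asarray(correct, dtype=float)[keep]
    if conf.size == 0:
        return 0.0  # nothing accepted: no accepted-set miscalibration to measure

    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            # |accuracy - mean confidence| in this bin, weighted by the bin's share
            # of the accepted examples
            ece += in_bin.mean() * abs(corr[in_bin].mean() - conf[in_bin].mean())
    return ece
```

In the framework described above, `accept` would come from the trained selector network rather than a fixed rule, chosen to reduce this kind of calibration error over the accepted examples while maintaining a target level of coverage (the fraction of inputs that are kept).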