Improving Classifier Confidence using Lossy Label-Invariant Transformations

11/09/2020
by Sooyong Jang, et al.

Providing reliable model uncertainty estimates is imperative to enabling robust decision making by autonomous agents and humans alike. While there have recently been significant advances in confidence calibration for trained models, examples with poor calibration persist in most calibrated models. Consequently, multiple techniques have been proposed that leverage label-invariant transformations of the input (i.e., an input manifold) to improve worst-case confidence calibration. However, manifold-based confidence calibration techniques generally do not scale and/or require expensive retraining when applied to models with large input spaces (e.g., ImageNet). In this paper, we present the recursive lossy label-invariant calibration (ReCal) technique, which leverages label-invariant transformations of the input that induce a loss of discriminatory information to recursively group (and calibrate) inputs, without requiring model retraining. We show that ReCal outperforms other calibration methods on multiple datasets, especially on large-scale datasets such as ImageNet.
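The core idea described above can be illustrated with a small sketch: apply a lossy, label-invariant transformation (e.g., zooming out), group inputs by whether the model's top-1 prediction survives the transformation, and calibrate each group separately. This is only a single, non-recursive grouping step with per-group temperature scaling; the function name, two-group setup, and fixed temperatures are illustrative assumptions, not the paper's actual recursive algorithm.

```python
import numpy as np

def softmax(z):
    """Numerically stable row-wise softmax."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def recal_step(logits_orig, logits_transformed, temps=(1.0, 2.0)):
    """One grouping-and-calibration step in the spirit of ReCal.

    Inputs are split into two groups by whether the top-1 prediction
    is unchanged under a lossy label-invariant transformation; each
    group then gets its own temperature before the softmax.
    (Temperatures here are fixed for illustration; in practice they
    would be fit on a validation set.)
    """
    pred_o = logits_orig.argmax(axis=1)
    pred_t = logits_transformed.argmax(axis=1)
    agree = pred_o == pred_t  # prediction survived the transformation

    t_agree, t_disagree = temps
    probs = np.empty_like(logits_orig, dtype=float)
    # Agreeing inputs keep sharper (lower-temperature) confidences.
    probs[agree] = softmax(logits_orig[agree] / t_agree)
    # Disagreeing inputs are softened (higher temperature).
    probs[~agree] = softmax(logits_orig[~agree] / t_disagree)
    return probs, agree

# Toy example: two inputs; the second one flips its prediction
# when the (hypothetical) zoom-out transformation is applied.
logits_orig = np.array([[2.0, 0.0], [2.0, 0.0]])
logits_zoomed = np.array([[3.0, 0.0], [0.0, 3.0]])
probs, agree = recal_step(logits_orig, logits_zoomed)
```

Here the disagreeing input ends up with a lower top-1 confidence than the agreeing one, which is the intended effect: instability under a lossy label-invariant transformation is treated as evidence of overconfidence.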

Related research

- Confidence Calibration with Bounded Error Using Transformations (02/25/2021)
- Confidence Calibration for Intent Detection via Hyperspherical Space and Rebalanced Accuracy-Uncertainty Loss (03/17/2022)
- Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs (06/22/2023)
- On the Role of Dataset Quality and Heterogeneity in Model Confidence (02/23/2020)
- Confidence from Invariance to Image Transformations (04/02/2018)
- Top-label calibration (07/18/2021)
- On-manifold Adversarial Data Augmentation Improves Uncertainty Calibration (12/16/2019)
