RankMixup: Ranking-Based Mixup Training for Network Calibration

by Jongyoun Noh et al.

Network calibration aims to accurately estimate the confidence levels of a network's predictions, which is particularly important for deploying deep neural networks in real-world systems. Recent approaches leverage mixup to calibrate predictions during training. However, they overlook the problem that the label mixtures produced by mixup may not accurately represent the actual distribution of the augmented samples. In this paper, we present RankMixup, a novel mixup-based framework that alleviates this label-mixture problem for network calibration. To this end, we propose to use the ordinal ranking relationship between raw and mixup-augmented samples as an alternative supervisory signal to the label mixtures. We hypothesize that the network should assign higher confidence to raw samples than to augmented ones (Fig. 1). To implement this idea, we introduce a mixup-based ranking loss (MRL) that encourages lower confidences for augmented samples than for raw ones, preserving the ranking relationship. We further propose to leverage the ranking relationship among multiple mixup-augmented samples to improve the calibration capability: augmented samples with larger mixing coefficients are expected to have higher confidences, and vice versa (Fig. 1). That is, the order of confidences should be aligned with that of the mixing coefficients. To this end, we introduce a novel loss, M-NDCG, that reduces the number of misaligned pairs of coefficients and confidences. Extensive experiments on standard benchmarks for network calibration demonstrate the effectiveness of RankMixup.
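The two ranking ideas above can be illustrated with a minimal sketch. Note this is an illustrative simplification, not the paper's exact formulation: the margin value, function names, and the simplified NDCG computation are assumptions for clarity.

```python
import math

def mixup_ranking_loss(conf_raw, conf_aug, margin=0.1):
    """Hinge-style penalty that is zero when the raw sample's
    confidence exceeds the augmented sample's by at least `margin`,
    and grows as the ranking is violated (illustrative MRL sketch)."""
    return max(0.0, conf_aug - conf_raw + margin)

def m_ndcg(mix_coeffs, confidences):
    """NDCG-style alignment score over multiple augmented samples:
    equals 1.0 when confidences are ranked in the same order as the
    mixing coefficients, and drops below 1.0 for misaligned pairs
    (illustrative, simplified)."""
    # DCG with confidences visited in order of descending mixing coefficient
    order = sorted(range(len(mix_coeffs)), key=lambda i: -mix_coeffs[i])
    dcg = sum(confidences[i] / math.log2(rank + 2)
              for rank, i in enumerate(order))
    # Ideal DCG: confidences themselves sorted in descending order
    idcg = sum(c / math.log2(rank + 2)
               for rank, c in enumerate(sorted(confidences, reverse=True)))
    return dcg / idcg if idcg > 0 else 0.0
```

For example, `mixup_ranking_loss(0.9, 0.5)` returns 0.0 (the desired ordering holds), while `mixup_ranking_loss(0.5, 0.9)` returns a positive penalty; `m_ndcg` returns 1.0 when larger mixing coefficients coincide with larger confidences and less than 1.0 otherwise, so maximizing it pushes the two orderings into agreement.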


