Label Smoothing is Robustification against Model Misspecification

05/15/2023
by   Ryoya Yamasaki, et al.

Label smoothing (LS) adopts smoothed targets in classification tasks. For example, in binary classification, instead of the one-hot target (1,0)^⊤ used in conventional logistic regression (LR), LR with LS (LSLR) uses the smoothed target (1-α/2,α/2)^⊤ with a smoothing level α∈(0,1), which squeezes the values of the logit. Departing from the common regularization-based interpretation of LS, which leads to an inconsistent probability estimator, we regard LSLR as modifying both the loss function and the consistent estimator used for probability estimation. To study the significance of each of these two modifications in LSLR, we introduce a modified LSLR (MLSLR) that uses the same loss function as LSLR and the same consistent estimator as LR, while not squeezing the logits. For the loss function modification, we theoretically show that MLSLR with a larger smoothing level has lower efficiency under correctly specified models, while it exhibits higher robustness against model misspecification than LR. For the modification of the probability estimator, an experimental comparison between LSLR and MLSLR showed that this modification, together with the squeezing of the logits in LSLR, has negative effects on probability estimation and classification performance. The understanding of the properties of LS provided by these comparisons allows us to propose MLSLR as an improvement over LSLR.
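The logit squeezing mentioned above can be illustrated with a small numerical sketch (not code from the paper; variable names and the grid search are illustrative only). With a one-hot target (1,0)^⊤, the cross-entropy loss keeps decreasing as the logit grows, so the optimal logit is unbounded; with the smoothed target (1-α/2, α/2)^⊤, the loss is minimized at a finite logit z satisfying sigmoid(z) = 1-α/2:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bce(p, t):
    # Cross-entropy between predicted probability p and target probability t.
    return -(t * math.log(p) + (1.0 - t) * math.log(1.0 - p))

alpha = 0.2  # smoothing level in (0, 1); 0.2 is an arbitrary example value

# Closed-form minimizer for the smoothed target: sigmoid(z*) = 1 - alpha/2,
# i.e. z* = log((1 - alpha/2) / (alpha/2)) -- a finite, "squeezed" logit.
z_star = math.log((1.0 - alpha / 2.0) / (alpha / 2.0))

# Confirm by a coarse grid search over logits.
grid = [i / 100.0 for i in range(-800, 801)]
z_min = min(grid, key=lambda z: bce(sigmoid(z), 1.0 - alpha / 2.0))

print(z_star)  # finite optimal logit under the smoothed target
print(z_min)   # grid minimizer, close to z_star
```

By contrast, evaluating `bce(sigmoid(z), 1.0)` on the same grid shows the loss still decreasing at the grid boundary, which is the unbounded-logit behavior of plain LR.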
