Regularizing Neural Networks by Penalizing Confident Output Distributions

01/23/2017
by Gabriel Pereyra, et al.

We systematically explore regularizing neural networks by penalizing low-entropy output distributions. We show that penalizing low-entropy output distributions, which has been shown to improve exploration in reinforcement learning, acts as a strong regularizer in supervised learning. Furthermore, we connect a maximum-entropy-based confidence penalty to label smoothing through the direction of the KL divergence. We exhaustively evaluate the proposed confidence penalty and label smoothing on six common benchmarks: image classification (MNIST and CIFAR-10), language modeling (Penn Treebank), machine translation (WMT'14 English-to-German), and speech recognition (TIMIT and WSJ). We find that both label smoothing and the confidence penalty improve state-of-the-art models across benchmarks without modifying existing hyperparameters, suggesting the wide applicability of these regularizers.
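The KL-divergence connection mentioned in the abstract can be made concrete. Writing u for the uniform distribution over the K classes, label smoothing adds a term proportional to D_KL(u || p_θ) to the training loss, while the confidence penalty corresponds to the reverse direction, D_KL(p_θ || u), since

D_KL(p_θ || u) = Σ_y p_θ(y|x) log(K · p_θ(y|x)) = log K − H(p_θ(y|x)),

so penalizing low entropy is, up to the constant log K, the reverse-KL counterpart of label smoothing.

Below is a minimal sketch of the confidence penalty as a training loss, written here in PyTorch; the framework choice, the function name, and the weight beta are illustrative assumptions, not details fixed by the abstract.

```python
import torch
import torch.nn.functional as F

def confidence_penalty_loss(logits, targets, beta=0.1):
    """Cross-entropy minus beta times the entropy of the model's output
    distribution, so low-entropy (overconfident) predictions are penalized.
    `beta` is a hypothetical name for the penalty weight."""
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    # H(p_theta(y|x)) per example
    entropy = -(probs * log_probs).sum(dim=-1)
    # standard negative log-likelihood per example
    nll = F.nll_loss(log_probs, targets, reduction="none")
    return (nll - beta * entropy).mean()
```

For example, confidence_penalty_loss(torch.randn(8, 10), torch.randint(0, 10, (8,))) returns a scalar loss; larger beta trades fit to the labels for a larger entropy bonus.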


Related research

Rethinking Label Smoothing on Multi-hop Question Answering (12/19/2022)
Label smoothing is a regularization technique widely used in supervised ...

The Implicit Length Bias of Label Smoothing on Beam Search Decoding (05/02/2022)
Label smoothing is ubiquitously applied in Neural Machine Translation (N...

Generalized Entropy Regularization or: There's Nothing Special about Label Smoothing (05/02/2020)
Prior work has explored directly regularizing the output distributions o...

ProSelfLC: Progressive Self Label Correction for Target Revising in Label Noise (05/07/2020)
In this work, we address robust deep learning under label noise (semi-su...

Calibrating Histopathology Image Classifiers using Label Smoothing (01/28/2022)
The classification of histopathology images fundamentally differs from t...

Stabilizing Transformer Training by Preventing Attention Entropy Collapse (03/11/2023)
Training stability is of great importance to Transformers. In this work,...

Capturing Label Distribution: A Case Study in NLI (02/13/2021)
We study estimating inherent human disagreement (annotation label distri...
