Mixing between the Cross Entropy and the Expectation Loss Terms

09/12/2021
by Barak Battash, et al.

The cross entropy loss is widely used due to its effectiveness and solid theoretical grounding. However, as training progresses, the loss tends to focus on hard-to-classify samples, which may prevent the network from obtaining further gains in performance. While most work in the field suggests ways to classify hard negatives, we propose strategically leaving hard negatives behind in order to focus on misclassified samples with higher predicted probabilities. We show that adding the expectation loss, which is a better approximation of the zero-one loss, to the optimization objective helps the network achieve better accuracy. We therefore propose shifting between the two losses during training, gradually placing more weight on the expectation loss in the later stages. Our experiments show that the new training protocol improves performance across a diverse set of classification domains, including computer vision, natural language processing, tabular data, and sequences. Our code and scripts are available in the supplementary material.
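The mixing idea described above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (not the authors' released code): for a correct-class probability p, it combines cross entropy, -log p, with the expectation of the zero-one loss under the model, 1 - p, using a mixing weight alpha that a simple (assumed) linear schedule shifts toward the expectation loss late in training.

```python
import math

def softmax(logits):
    # numerically stable softmax over a list of logits
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def mixed_loss(logits, target, alpha):
    # alpha in [0, 1]: 0 -> pure cross entropy, 1 -> pure expectation loss
    p = softmax(logits)[target]
    ce = -math.log(p)      # cross entropy term
    exp01 = 1.0 - p        # expectation of the zero-one loss under the model
    return (1.0 - alpha) * ce + alpha * exp01

def alpha_schedule(epoch, total_epochs):
    # hypothetical linear ramp from 0 to 1 across training
    return epoch / max(1, total_epochs - 1)
```

Note how the two terms weight samples differently: the gradient of -log p grows without bound as p -> 0, so cross entropy is dominated by the hardest samples, while the gradient of 1 - p stays bounded, which is what lets the mixed objective de-emphasize hard negatives as alpha grows.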


