Using Focal Loss to Fight Shallow Heuristics: An Empirical Analysis of Modulated Cross-Entropy in Natural Language Inference

11/23/2022
by Frano Rajic, et al.

There is no such thing as a perfect dataset. In some datasets, deep neural networks discover underlying heuristics that let them take shortcuts in the learning process, resulting in poor generalization. Instead of standard cross-entropy, we explore whether a modulated version of cross-entropy called focal loss can keep the model from relying on such heuristics and improve generalization. Our experiments in natural language inference show that focal loss has a regularizing effect on the learning process, increasing accuracy on out-of-distribution data while slightly decreasing performance on in-distribution data. Despite this improvement out of distribution, we demonstrate the shortcomings of focal loss and show that it still underperforms methods such as unbiased focal loss and self-debiasing ensembles.
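For reference, focal loss modulates the per-example cross-entropy term -log(p_t) by a factor (1 - p_t)^gamma, so confidently classified ("easy") examples contribute less to the gradient. The following is a minimal PyTorch sketch of the multi-class variant, not the paper's exact implementation; the function name and the gamma = 2.0 default are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """Multi-class focal loss: cross-entropy modulated by (1 - p_t)^gamma.

    logits:  (batch, num_classes) raw model outputs
    targets: (batch,) integer gold labels
    gamma:   focusing parameter; gamma = 0 recovers plain cross-entropy
    """
    # Per-example cross-entropy, i.e. -log p_t for the gold class.
    ce = F.cross_entropy(logits, targets, reduction="none")
    # Probability the model assigns to the gold class.
    p_t = torch.exp(-ce)
    # Down-weight easy (high-confidence) examples, then average.
    return ((1.0 - p_t) ** gamma * ce).mean()
```

With gamma = 0 the modulating factor is 1 and the loss reduces to standard cross-entropy; larger gamma shifts optimization toward hard examples, which is the mechanism the paper examines as a brake on shortcut learning.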

