An Empirical Study on Robustness to Spurious Correlations using Pre-trained Language Models

07/14/2020
by   Lifu Tu, et al.

Recent work has shown that pre-trained language models such as BERT improve robustness to spurious correlations in the dataset. Intrigued by these results, we find that the key to their success is generalization from a small amount of counterexamples where the spurious correlations do not hold. When such minority examples are scarce, pre-trained models perform as poorly as models trained from scratch. In the case of extreme minority, we propose to use multi-task learning (MTL) to improve generalization. Our experiments on natural language inference and paraphrase identification show that MTL with the right auxiliary tasks significantly improves performance on challenging examples without hurting the in-distribution performance. Further, we show that the gain from MTL mainly comes from improved generalization from the minority examples. Our results highlight the importance of data diversity for overcoming spurious correlations.
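The abstract does not spell out how the multi-task setup is implemented, so the following is a minimal sketch of what multi-task fine-tuning with a shared pre-trained encoder could look like. It assumes a BERT-base encoder, one linear classification head per task, and a simple summed cross-entropy loss over a main task and an auxiliary task; the task names ("nli", "paraphrase"), label counts, and the training-step helper are illustrative assumptions, not the authors' code.

# Minimal sketch of multi-task fine-tuning with a shared pre-trained encoder.
# Assumptions (not specified by the abstract): BERT-base as the encoder, one
# linear head per task, and a training step that sums the per-task losses.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MultiTaskModel(nn.Module):
    def __init__(self, encoder_name="bert-base-uncased", task_num_labels=None):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # One classification head per task (e.g., main NLI task + auxiliary task).
        self.heads = nn.ModuleDict({
            task: nn.Linear(hidden, n) for task, n in task_num_labels.items()
        })

    def forward(self, task, **inputs):
        # Use the [CLS] token representation as the sentence-pair embedding.
        cls = self.encoder(**inputs).last_hidden_state[:, 0]
        return self.heads[task](cls)

# Hypothetical tasks: NLI as the main task (3 labels) and paraphrase
# identification as the auxiliary task (2 labels).
model = MultiTaskModel(task_num_labels={"nli": 3, "paraphrase": 2})
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss_fn = nn.CrossEntropyLoss()

def training_step(batches):
    # `batches` maps task name -> (list of (sentence_a, sentence_b) pairs, label tensor).
    model.train()
    optimizer.zero_grad()
    total_loss = 0.0
    for task, (pairs, labels) in batches.items():
        enc = tokenizer([a for a, _ in pairs], [b for _, b in pairs],
                        padding=True, truncation=True, return_tensors="pt")
        logits = model(task, **enc)
        total_loss = total_loss + loss_fn(logits, labels)
    total_loss.backward()
    optimizer.step()
    return float(total_loss)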


