Adversarially Constructed Evaluation Sets Are More Challenging, but May Not Be Fair

11/16/2021
by Jason Phang, et al.

More capable language models increasingly saturate existing task benchmarks, in some cases outperforming humans. This has left little headroom with which to measure further progress. Adversarial dataset creation has been proposed as a strategy to construct more challenging datasets, and two common approaches are: (1) filtering out easy examples and (2) model-in-the-loop data collection. In this work, we study the impact of applying each approach to create more challenging evaluation datasets. We adapt the AFLite algorithm to filter evaluation data and run experiments against 18 different adversary models. We find that AFLite indeed selects more challenging examples, lowering the performance of evaluated models further as stronger adversary models are used. However, the resulting ranking of models can also be unstable and highly sensitive to the choice of adversary model. Moreover, AFLite oversamples examples with low annotator agreement, meaning that model comparisons hinge on the most contentiously labeled examples. Smaller-scale experiments on the adversarially collected datasets ANLI and AdversarialQA show similar results: stronger adversaries broadly lower the performance of evaluated models, and the adversary model itself is affected disproportionately.
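For readers unfamiliar with AFLite, the sketch below illustrates the general shape of adversarial filtering: weak adversaries (here, logistic regression probes on frozen embeddings) are trained on random splits of the data, each example is scored by how often held-out adversaries classify it correctly, and the most predictable examples are discarded. This is a minimal sketch under assumed inputs (precomputed embeddings and labels), not the paper's exact adaptation; the function name aflite_filter and all hyperparameters (n_partitions, cut_per_round, score_threshold, and so on) are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def aflite_filter(embeddings, labels, target_size=1000, n_partitions=8,
                  train_frac=0.8, cut_per_round=50, score_threshold=0.75):
    """Simplified AFLite-style adversarial filtering (illustrative sketch).

    embeddings: (n_examples, dim) array of frozen representations
    labels:     (n_examples,) array of gold labels
    Returns indices of the retained (harder) examples.
    """
    keep = np.arange(len(labels))
    rng = np.random.default_rng(0)

    while len(keep) > target_size:
        correct = np.zeros(len(keep))
        counts = np.zeros(len(keep))

        # Train several weak adversaries on random train/held-out splits.
        for _ in range(n_partitions):
            perm = rng.permutation(len(keep))
            n_train = int(train_frac * len(keep))
            train_idx, test_idx = perm[:n_train], perm[n_train:]

            clf = LogisticRegression(max_iter=1000)
            clf.fit(embeddings[keep[train_idx]], labels[keep[train_idx]])
            preds = clf.predict(embeddings[keep[test_idx]])

            correct[test_idx] += (preds == labels[keep[test_idx]])
            counts[test_idx] += 1

        # Predictability score: fraction of held-out rounds answered correctly.
        scores = np.where(counts > 0, correct / np.maximum(counts, 1), 0.0)

        # Drop the most predictable ("easy") examples above the threshold.
        easy = np.argsort(-scores)[:cut_per_round]
        easy = easy[scores[easy] >= score_threshold]
        if len(easy) == 0:
            break
        keep = np.delete(keep, easy)

    return keep
```

In this framing, the strength of the adversary corresponds to the quality of the embeddings (or classifier) used for filtering, which is why the paper's finding that rankings shift with the choice of adversary model is a direct concern for evaluation-set construction.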


