Natural Language Inference with Mixed Effects

10/20/2020
by William Gantt et al.

There is growing evidence that disagreement is prevalent in the raw annotations used to construct natural language inference datasets, making the common practice of aggregating those annotations into a single label problematic. We propose a generic method for skipping the aggregation step and training on the raw annotations directly, without exposing the model to unwanted noise arising from annotator response biases. We demonstrate that this method, which generalizes the notion of a mixed effects model by incorporating annotator random effects into any existing neural model, improves performance over models that do not incorporate such effects.
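
To make the approach concrete, the following is a minimal PyTorch sketch of one way to realize annotator random effects on top of an arbitrary neural NLI classifier. It is an illustration under simplifying assumptions, not the paper's implementation: the names (MixedEffectsNLI, annotator_offsets, loss_fn, prior_weight) are hypothetical, the random effects are modeled as additive per-annotator logit offsets, and the zero-mean Gaussian prior on those offsets is approximated with an L2 penalty.

```python
# A minimal sketch, assuming PyTorch; not the paper's released implementation.
# Hypothetical names throughout: MixedEffectsNLI, annotator_offsets, loss_fn.
import torch
import torch.nn as nn


class MixedEffectsNLI(nn.Module):
    """Wrap any base NLI model (the "fixed effects") with per-annotator
    random effects, here modeled as additive offsets on the label logits."""

    def __init__(self, base_model: nn.Module, num_annotators: int, num_labels: int = 3):
        super().__init__()
        self.base_model = base_model  # any module mapping inputs -> (batch, num_labels) logits
        self.annotator_offsets = nn.Embedding(num_annotators, num_labels)
        nn.init.zeros_(self.annotator_offsets.weight)  # start at the shared model

    def forward(self, inputs, annotator_ids):
        fixed = self.base_model(inputs)                 # shared (fixed-effect) logits
        random = self.annotator_offsets(annotator_ids)  # per-annotator (random-effect) offsets
        return fixed + random


def loss_fn(model, logits, labels, prior_weight=1e-3):
    """Cross-entropy on the raw, unaggregated annotations, plus an L2
    penalty on the offsets that stands in for a zero-mean Gaussian prior."""
    ce = nn.functional.cross_entropy(logits, labels)
    prior = model.annotator_offsets.weight.pow(2).sum()
    return ce + prior_weight * prior


# Usage: each training example is (inputs, annotator_id, raw label);
# at test time, calling model.base_model(inputs) alone yields
# annotator-independent predictions (i.e., a zero random effect).
```

The L2 penalty is one simple stand-in for the Gaussian prior on random effects: offsets shrink toward zero unless the raw annotations give consistent evidence of an annotator-specific response bias.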

Related research:

07/09/2019 · Don't Take the Premise for Granted: Mitigating Artifacts in Natural Language Inference
Natural Language Inference (NLI) datasets often contain hypothesis-only ...

10/24/2020 · ANLIzing the Adversarial Natural Language Inference Dataset
We perform an in-depth error analysis of Adversarial NLI (ANLI), a recen...

01/08/2019 · Multi-turn Inference Matching Network for Natural Language Inference
Natural Language Inference (NLI) is a fundamental and challenging task i...

01/17/2022 · Towards a Cleaner Document-Oriented Multilingual Crawled Corpus
The need for large raw corpora has dramatically increased in recent ...

09/06/2019 · Uncertain Natural Language Inference
We propose a refinement of Natural Language Inference (NLI), called Unce...

10/03/2020 · Mining Knowledge for Natural Language Inference from Wikipedia Categories
Accurate lexical entailment (LE) and natural language inference (NLI) of...
