Do Neural Models Learn Systematicity of Monotonicity Inference in Natural Language?

04/30/2020
by Hitomi Yanaka, et al.

Despite the success of language models using neural networks, it remains unclear to what extent neural models have the generalization ability to perform inferences. In this paper, we introduce a method for evaluating whether neural models can learn the systematicity of monotonicity inference in natural language, namely, the ability to perform arbitrary inferences by generalizing over compositions of known constituents. We consider four aspects of monotonicity inference and test whether the models can systematically interpret lexical and logical phenomena on different training/test splits. A series of experiments shows that three neural models systematically draw inferences on unseen combinations of lexical and logical phenomena when the syntactic structures of the sentences are similar between the training and test sets. However, the performance of the models decreases significantly when the structures are slightly changed in the test set, even though all words and constituents in the test set already appear in the training set. This indicates that the generalization ability of neural models is limited to cases where the syntactic structures are nearly the same as those in the training set.
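
Monotonicity inference licenses entailments by substituting more general or more specific phrases, depending on the polarity of the surrounding context: for example, "every" is downward monotone in its first argument ("every animal ran" entails "every dog ran"), while "some" is upward monotone ("some dog ran" entails "some animal ran"). Below is a minimal, hypothetical sketch of this idea; it is not the paper's data-generation code, and the inventories (`quantifiers`, `noun_pairs`) and the held-out combination are purely illustrative of how inference pairs and a compositional train/test split could be constructed.

```python
from itertools import product

# Toy inventories of logical and lexical phenomena (illustrative only).
quantifiers = {"every": "downward", "some": "upward"}
noun_pairs = [("animal", "dog"), ("dog", "small dog")]  # (general, specific)

def make_example(q, general, specific):
    """Build a premise/hypothesis pair whose label follows from monotonicity."""
    if quantifiers[q] == "downward":
        # Downward monotone: restricting the noun preserves truth.
        premise, hypothesis = f"{q} {general} ran", f"{q} {specific} ran"
    else:
        # Upward monotone: generalizing the noun preserves truth.
        premise, hypothesis = f"{q} {specific} ran", f"{q} {general} ran"
    return premise, hypothesis, "entailment"

examples = [make_example(q, g, s) for q, (g, s) in product(quantifiers, noun_pairs)]

# Compositional split: hold out one unseen combination of a quantifier and a
# noun pair, so the test set probes generalization to a new composition even
# though every word already appears in the training set.
held_out = ("some", ("animal", "dog"))
test = [make_example(held_out[0], *held_out[1])]
train = [ex for ex in examples if ex not in test]

for split, data in (("train", train), ("test", test)):
    for premise, hypothesis, label in data:
        print(f"{split}: {premise} => {hypothesis} ({label})")
```

In this sketch a model trained on `train` has seen "some" and the pair ("animal", "dog") separately, but never together; succeeding on `test` would require composing the two, which is the kind of systematic generalization the paper's splits are designed to probe.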
