A Systematic Assessment of Syntactic Generalization in Neural Language Models

05/07/2020
by Jennifer Hu, et al.

State-of-the-art neural network models have achieved dizzyingly low perplexity scores on major language modeling benchmarks, but it remains unknown whether optimizing for broad-coverage predictive performance leads to human-like syntactic knowledge. Furthermore, existing work has not provided a clear picture of the model properties required to produce proper syntactic generalizations. We present a systematic evaluation of the syntactic knowledge of neural language models, testing 20 combinations of model types and data sizes on a set of 34 syntactic test suites. We find that model architecture clearly influences syntactic generalization performance: Transformer models and models with explicit hierarchical structure reliably outperform pure sequence models in their predictions. In contrast, we find no clear influence of the scale of training data on these syntactic generalization tests. We also find no clear relation between a model's perplexity and its syntactic generalization performance.
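To make the evaluation setup concrete, here is a minimal sketch of how a targeted syntactic test suite is typically scored: a model passes an item when it assigns lower surprisal (i.e., higher probability) to the grammatical continuation than to the ungrammatical one. The function names and surprisal values below are illustrative assumptions, not the authors' actual code or data.

```python
# Hedged sketch: scoring a syntactic test suite from model surprisals.
# `passes` and `suite_accuracy` are hypothetical helper names; the surprisal
# values are toy numbers for illustration only.

def passes(surprisal_gram: float, surprisal_ungram: float) -> bool:
    """An item is correct if the grammatical variant is less surprising."""
    return surprisal_gram < surprisal_ungram

def suite_accuracy(items):
    """Fraction of (grammatical, ungrammatical) surprisal pairs scored correctly."""
    return sum(passes(g, u) for g, u in items) / len(items)

# Toy surprisal pairs (in bits): (grammatical, ungrammatical)
items = [(3.1, 5.4), (2.8, 2.5), (4.0, 6.2)]
print(suite_accuracy(items))  # → 0.6666666666666666
```

Aggregating such per-item accuracies across the 34 test suites yields a single syntactic generalization score per model, which can then be compared against that model's perplexity.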
