Just Add Functions: A Neural-Symbolic Language Model

12/11/2019
by David Demeter, et al.

Neural network language models (NNLMs) have achieved ever-improving accuracy due to more sophisticated architectures and increasing amounts of training data. However, the inductive bias of these models (formed by the distributional hypothesis of language), while ideally suited to modeling most running text, results in key limitations for today's models. In particular, the models often struggle to learn certain spatial, temporal, or quantitative relationships, which are commonplace in text and are second nature for human readers. Yet, in many cases, these relationships can be encoded with simple mathematical or logical expressions. How can we augment today's neural models with such encodings? In this paper, we propose a general methodology to enhance the inductive bias of NNLMs by incorporating simple functions into a neural architecture to form a hierarchical neural-symbolic language model (NSLM). These functions explicitly encode symbolic deterministic relationships to form probability distributions over words. We explore the effectiveness of this approach on numbers and geographic locations, and show that NSLMs significantly reduce perplexity in small-corpus language modeling, and that the performance improvement persists for rare tokens even on much larger corpora. The approach is simple and general, and we discuss how it can be applied to other word classes beyond numbers and geography.
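As a rough illustration of the hierarchical idea (a sketch only, not the paper's implementation), the snippet below routes the probability mass of a hypothetical `<NUM>` class token from a stand-in neural distribution through a simple symbolic successor function over numbers. All names, probabilities, and the toy vocabulary are assumptions made for illustration.

```python
def neural_next_word_probs(context):
    # Stand-in for an NNLM's output distribution over a toy vocabulary.
    # "<NUM>" is a class token whose mass gets re-routed through the
    # symbolic function below (hypothetical names, not the paper's code).
    return {"the": 0.25, "page": 0.25, "<NUM>": 0.5}

def number_successor_probs(context):
    # Symbolic function: if the context ends in a number, put most mass
    # on its successor (e.g. "page 3" -> "4"), a simple deterministic
    # relationship that a purely distributional model struggles to learn.
    last = context.split()[-1] if context else ""
    if last.isdigit():
        succ = str(int(last) + 1)
        return {succ: 0.9, last: 0.1}
    return {"1": 1.0}  # arbitrary fallback when no number precedes

def nslm_probs(context):
    # Hierarchical mixture: the neural model predicts word classes and
    # ordinary words; the class token's mass is distributed over number
    # tokens by the symbolic function.
    base = neural_next_word_probs(context)
    p_num = base.pop("<NUM>")
    probs = dict(base)
    for tok, p in number_successor_probs(context).items():
        probs[tok] = probs.get(tok, 0.0) + p_num * p
    return probs

dist = nslm_probs("page 3")
print(max(dist, key=dist.get))  # → "4"
```

The mixture stays a proper distribution (the class token's mass is exactly redistributed), which is what lets the symbolic component lower perplexity on number tokens without disturbing the rest of the vocabulary.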

Related research

05/15/2021 · A Cognitive Regularizer for Language Modeling
The uniform information density (UID) hypothesis, which posits that spea...

12/19/2022 · APOLLO: A Simple Approach for Adaptive Pretraining of Language Models for Logical Reasoning
Logical reasoning of text is an important ability that requires understa...

03/07/2019 · Neural Language Modeling with Visual Features
Multimodal language models attempt to incorporate non-linguistic feature...

08/01/2016 · A Neural Knowledge Language Model
Current language models have a significant limitation in the ability to ...

05/05/2020 · Stolen Probability: A Structural Weakness of Neural Language Models
Neural Network Language Models (NNLMs) generate probability distribution...

06/22/2020 · Clinical Predictive Keyboard using Statistical and Neural Language Modeling
A language model can be used to predict the next word during authoring, ...

10/28/2020 · A Visuospatial Dataset for Naturalistic Verb Learning
We introduce a new dataset for training and evaluating grounded language...
