Neural Random Projections for Language Modelling

by Davide Nunes, et al.

Neural network-based language models address data sparsity by mapping the large discrete space of words into a smaller continuous space of real-valued vectors. Because the model learns distributed vector representations for words, each training sample informs it about a combinatorial number of other patterns. We exploit the sparsity of natural language further by encoding each unique input word with a reduced sparse random representation. In this paper, we propose an encoder for discrete inputs that uses random projections, allowing language models to be learned with significantly smaller parameter spaces than comparable neural network architectures. Random projections also remove the dependency between a neural network architecture and the size of a pre-established dictionary. We investigate the properties of our encoding mechanism empirically by evaluating its performance on the widely used Penn Treebank corpus, using several configurations of baseline feedforward neural network models. We show that guaranteeing approximately equidistant inner products between the representations of unique discrete inputs provides the model with enough information to learn useful distributed representations for those inputs. Because they do not require prior enumeration of the lexicon, random projections let us accommodate the dynamic, open-ended character of natural languages.
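To make the idea concrete, here is a minimal sketch of the kind of encoder the abstract describes: each word is mapped to a fixed-dimensional sparse ternary random vector whose index set is derived from a hash of the word itself, so no pre-established dictionary is needed and distinct words get approximately orthogonal (equidistant inner-product) codes. The function name, dimensions, and sparsity level below are illustrative assumptions, not the paper's exact configuration.

```python
import hashlib
import numpy as np

def sparse_random_vector(word: str, dim: int = 1000, k: int = 10) -> np.ndarray:
    """Deterministic sparse ternary random code for a word (illustrative sketch).

    k nonzero entries (half +1, half -1) are placed at positions drawn from a
    per-word RNG seeded by a hash of the word, so the encoder never needs to
    enumerate a vocabulary in advance.
    """
    # Stable seed from the word itself (hashlib avoids Python's salted hash()).
    seed = int(hashlib.sha256(word.encode("utf-8")).hexdigest(), 16) % (2**32)
    rng = np.random.default_rng(seed)
    # Choose k distinct positions for the nonzero entries.
    idx = rng.choice(dim, size=k, replace=False)
    v = np.zeros(dim)
    v[idx[: k // 2]] = 1.0   # first half of the positions get +1
    v[idx[k // 2:]] = -1.0   # second half get -1
    return v

a = sparse_random_vector("cat")
b = sparse_random_vector("dog")
# Self inner product equals k; cross inner products stay near zero,
# giving the approximately equidistant geometry the abstract relies on.
```

With high dimensionality and few nonzeros, two independently drawn codes almost never collide on many positions, which is why their inner products concentrate near zero while each code's self inner product is exactly `k`.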




