Diverse Embedding Neural Network Language Models

12/22/2014
by Kartik Audhkhasi, et al.

We propose the Diverse Embedding Neural Network (DENN), a novel architecture for language models (LMs). A DENN LM projects the input word-history vector onto multiple diverse low-dimensional sub-spaces instead of the single higher-dimensional sub-space used in conventional feed-forward neural network LMs. We encourage these sub-spaces to be diverse during network training through an augmented loss function. Our language modeling experiments on the Penn Treebank data set show the performance benefit of using a DENN LM.
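The abstract's core idea, several diverse low-dimensional projections of the word-history vector combined with a diversity term added to the training loss, can be sketched in code. The PyTorch sketch below is an illustration under assumptions, not the authors' implementation: the layer shapes, the tanh non-linearity, the pairwise cosine-similarity penalty, and the 0.1 weight are all hypothetical choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiverseEmbeddingLM(nn.Module):
    """Sketch of a DENN-style LM: K diverse low-dimensional projections
    of the word-history vector instead of one wide projection.
    Hyper-parameters and layer shapes are illustrative assumptions."""

    def __init__(self, hist_dim, sub_dim, num_subspaces, vocab_size):
        super().__init__()
        # One linear projection per low-dimensional sub-space.
        self.projections = nn.ModuleList(
            nn.Linear(hist_dim, sub_dim) for _ in range(num_subspaces)
        )
        # Output layer maps the concatenated sub-space activations to the vocabulary.
        self.output = nn.Linear(sub_dim * num_subspaces, vocab_size)

    def forward(self, history):
        # history: (batch, hist_dim) encoding of the word history.
        subs = [torch.tanh(p(history)) for p in self.projections]
        logits = self.output(torch.cat(subs, dim=-1))
        return logits, subs

def diversity_penalty(subs):
    """One possible diversity term (an assumption, not the paper's exact
    augmented loss): penalize cosine similarity between sub-space outputs."""
    penalty = 0.0
    for i in range(len(subs)):
        for j in range(i + 1, len(subs)):
            penalty = penalty + F.cosine_similarity(
                subs[i], subs[j], dim=-1
            ).pow(2).mean()
    return penalty

# Training step: next-word cross-entropy plus the weighted diversity term.
model = DiverseEmbeddingLM(hist_dim=600, sub_dim=50, num_subspaces=4,
                           vocab_size=10000)
history = torch.randn(32, 600)            # dummy batch of history vectors
targets = torch.randint(0, 10000, (32,))  # dummy next-word targets
logits, subs = model(history)
loss = F.cross_entropy(logits, targets) + 0.1 * diversity_penalty(subs)
loss.backward()
```

Concatenating the sub-space activations before the output layer is one plausible way to combine them; the penalty term then pushes the sub-spaces to carry non-redundant information about the history.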

Related research

08/20/2016 - Using the Output Embedding to Improve Language Models
We study the topmost weight matrix of neural network language models. We...

08/24/2017 - A Study on Neural Network Language Modeling
An exhaustive study on neural network language modeling (NNLM) is perfor...

09/05/2016 - PMI Matrix Approximations with Applications to Neural Language Modeling
The negative sampling (NEG) objective function, used in word2vec, is a s...

05/05/2020 - Stolen Probability: A Structural Weakness of Neural Language Models
Neural Network Language Models (NNLMs) generate probability distribution...

02/06/2021 - Extremal learning: extremizing the output of a neural network in regression problems
Neural networks allow us to model complex relationships between variable...

04/21/2017 - Improving Context Aware Language Models
Increased adaptability of RNN language models leads to improved predicti...

06/20/2019 - Low-dimensional Embodied Semantics for Music and Language
Embodied cognition states that semantics is encoded in the brain as firi...