Deep Residual Output Layers for Neural Language Generation

05/14/2019
by Nikolaos Pappas, et al.

Many tasks, including language generation, benefit from learning the structure of the output space, particularly when the space of output labels is large and the data is sparse. State-of-the-art neural language models indirectly capture the output space structure in their classifier weights since they lack parameter sharing across output labels. Learning shared output label mappings helps, but existing methods have limited expressivity and are prone to overfitting. In this paper, we investigate the usefulness of more powerful shared mappings for output labels, and propose a deep residual output mapping with dropout between layers to better capture the structure of the output space and avoid overfitting. Evaluations on three language generation tasks show that our output label mapping can match or improve on state-of-the-art recurrent and self-attention architectures, and suggest that the classifier does not necessarily need to be high-rank to better model natural language if it is better at capturing the structure of the output space.
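For intuition, here is a minimal sketch in PyTorch of the idea the abstract describes: a shared output label embedding table is refined by a stack of residual blocks, with dropout between layers, before being matched against the decoder state to produce logits. This is not the authors' released code; the class name, layer count, dropout rate, and Tanh nonlinearity are illustrative assumptions.

import torch
import torch.nn as nn

class DeepResidualOutput(nn.Module):
    """Sketch of a deep residual output mapping (assumed names/settings).
    Output label embeddings are shared across labels and transformed by
    residual blocks, with dropout between layers to curb overfitting."""

    def __init__(self, hidden_dim: int, n_layers: int = 2, dropout: float = 0.3):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(
                nn.Linear(hidden_dim, hidden_dim),
                nn.Tanh(),
                nn.Dropout(dropout),  # dropout between layers
            )
            for _ in range(n_layers)
        ])

    def forward(self, hidden: torch.Tensor, label_emb: torch.Tensor) -> torch.Tensor:
        # hidden:    (batch, hidden_dim) decoder states
        # label_emb: (vocab_size, hidden_dim) shared output label embeddings
        e = label_emb
        for block in self.blocks:
            e = e + block(e)  # residual connection around each block
        return hidden @ e.t()  # logits over the vocabulary: (batch, vocab_size)

Tying label_emb to the input embedding matrix is one common way to share parameters across output labels; the residual form lets the mapping start close to a plain tied-softmax classifier and deepen only where the data supports it.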
