Mode recovery in neural autoregressive sequence modeling

06/10/2021
by Ilia Kulikov et al.

Despite their wide use, neural autoregressive sequence models trained with maximum likelihood have recently been shown to exhibit unexpected and undesirable properties, such as an unreasonably high affinity to short sequences after training and to infinitely long sequences at decoding time. We propose to study these phenomena by investigating how the modes, or local maxima, of a distribution are maintained throughout the full learning chain of the ground-truth, empirical, learned, and decoding-induced distributions, via the newly proposed mode recovery cost. We design a tractable testbed in which we build three types of ground-truth distributions: (1) an LSTM-based structured distribution, (2) an unstructured distribution in which the probability of a sequence does not depend on its content, and (3) a product of these two, which we call a semi-structured distribution. Our study reveals both expected and unexpected findings. First, starting with data collection, the mode recovery cost depends strongly on the ground-truth distribution and is highest for the semi-structured distribution. Second, after learning, the mode recovery cost from the ground-truth distribution may increase or decrease relative to data collection, with the largest degradation occurring under the semi-structured ground truth. Finally, the ability of the decoding-induced distribution to recover modes from the learned distribution is strongly affected by the choices made earlier in the learning chain. We conclude that future research must consider the entire learning chain in order to fully understand both the potential and the perils of neural autoregressive sequence models, and to further improve them.
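To make the setup concrete, below is a minimal, self-contained Python sketch of the kind of comparison the abstract describes. It assumes one plausible formalization of mode recovery cost, namely how far down the ranking of a distribution q one must go before the top-k set of a distribution p is fully covered, normalized by k; the paper's exact definition may differ. The toy `structured`, `unstructured`, and `semi_structured` distributions are hypothetical stand-ins for the three ground-truth types (an LSTM-based distribution, a content-independent one, and their product), and `empirical` stands in for the data-collection stage.

```python
import heapq
import random
from itertools import product

def top_k_set(dist, k):
    """The k most probable sequences under `dist` (a dict: sequence -> prob)."""
    return {s for s, _ in heapq.nlargest(k, dist.items(), key=lambda kv: kv[1])}

def mode_recovery_cost(p, q, k):
    """One plausible reading of mode recovery cost (an assumption, not
    necessarily the paper's exact definition): the smallest k' such that the
    top-k' set of q contains the top-k set of p, normalized by k. A cost of
    1.0 means q ranks all k modes of p at the very top; larger values mean
    p's modes are buried deeper in q."""
    remaining = top_k_set(p, k)
    for k_prime, s in enumerate(sorted(q, key=q.get, reverse=True), start=1):
        remaining.discard(s)
        if not remaining:
            return k_prime / k
    return float("inf")  # some mode of p never appears in q's support

def normalize(weights):
    z = sum(weights.values())
    return {s: w / z for s, w in weights.items()}

def empirical(dist, n):
    """Empirical distribution from n i.i.d. samples (the data-collection stage)."""
    counts = {}
    population, probs = zip(*dist.items())
    for s in random.choices(population, weights=probs, k=n):
        counts[s] = counts.get(s, 0) + 1
    return normalize(counts)

# Toy sequence space: all binary strings of length 4 (16 sequences).
seqs = ["".join(bits) for bits in product("01", repeat=4)]

# "Structured": probability depends on content (here, favoring 1s), a
# hypothetical stand-in for the LSTM-based ground truth.
structured = normalize({s: 2.0 ** s.count("1") for s in seqs})

# "Unstructured": probabilities assigned independently of sequence content.
random.seed(0)
unstructured = normalize({s: random.random() for s in seqs})

# "Semi-structured": renormalized product of the two.
semi_structured = normalize({s: structured[s] * unstructured[s] for s in seqs})

for name, gt in [("structured", structured),
                 ("unstructured", unstructured),
                 ("semi-structured", semi_structured)]:
    emp = empirical(gt, n=200)
    print(name, mode_recovery_cost(gt, emp, k=4))
```

The same function applies to any adjacent pair in the learning chain: for example, a learned model's distribution can be compared against `emp`, or a decoding-induced distribution (say, a renormalized top-k truncation of the learned model) against the learned one.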

Related research

12/16/2021 · Characterizing and addressing the issue of oversmoothing in neural autoregressive sequence modeling
Neural autoregressive sequence models smear the probability among many p...

05/14/2018 · Token-level and sequence-level loss smoothing for RNN language models
Despite the effectiveness of recurrent neural network language models, t...

07/11/2022 · Grounding Aleatoric Uncertainty in Unsupervised Environment Design
Adaptive curricula in reinforcement learning (RL) have proven effective ...

06/28/2017 · Generative Bridging Network in Neural Sequence Prediction
Maximum Likelihood Estimation (MLE) suffers from data sparsity problem i...

09/03/2023 · Representations Matter: Embedding Modes of Large Language Models using Dynamic Mode Decomposition
Existing large language models (LLMs) are known for generating "hallucin...

04/10/2014 · Open problem: Tightness of maximum likelihood semidefinite relaxations
We have observed an interesting, yet unexplained, phenomenon: Semidefini...
