Neural Sequence Model Training via α-divergence Minimization

06/30/2017
by Sotetsu Koyamada, et al.

We propose a new neural sequence model training method in which the objective function is defined by the α-divergence. We demonstrate that this objective generalizes the maximum-likelihood (ML)-based and reinforcement learning (RL)-based objective functions as special cases (i.e., ML corresponds to α → 0 and RL to α → 1). We also show that the gradient of the objective function can be considered a mixture of ML- and RL-based objective gradients. Experimental results on a machine translation task show that minimizing the objective with α > 0 outperforms the α → 0 limit, which corresponds to ML-based methods.
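As a rough sketch of why both objectives arise as limits, one common parameterization of the α-divergence between distributions p and q (the paper's exact convention and argument order may differ) interpolates between the two Kullback–Leibler divergences:

\[
  D_{\alpha}(p \,\|\, q)
  = \frac{1}{\alpha(1-\alpha)}
    \left( 1 - \int p(x)^{\alpha}\, q(x)^{1-\alpha}\, dx \right),
\]
\[
  \lim_{\alpha \to 0} D_{\alpha}(p \,\|\, q) = \mathrm{KL}(q \,\|\, p),
  \qquad
  \lim_{\alpha \to 1} D_{\alpha}(p \,\|\, q) = \mathrm{KL}(p \,\|\, q).
\]

Under the paper's assignment of the two arguments, one KL limit recovers the ML (cross-entropy) objective and the other the RL (expected-reward) objective, so intermediate values of α yield gradients that mix the two.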

