Incremental Decoding and Training Methods for Simultaneous Translation in Neural Machine Translation

06/10/2018
by Fahim Dalvi, et al.

We address the problem of simultaneous translation by modifying the Neural MT decoder to operate with a dynamically built encoder and attention. We propose a tunable agent that decides the best segmentation strategy for a user-defined BLEU loss and Average Proportion (AP) constraint. Our agent outperforms the previously proposed Wait-if-diff and Wait-if-worse agents (Cho and Esipova, 2016) on BLEU at lower latency. Second, we propose data-driven changes to Neural MT training to better match the incremental decoding framework.
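To make the setup concrete, the sketch below shows a toy simultaneous-decoding loop that interleaves READ (consume a source token) and WRITE (emit a target token) actions and reports the Average Proportion latency metric. The fixed look-ahead policy, the `translate_prefix` callback, and all names here are illustrative assumptions, not the paper's actual agent or decoder.

```python
# Hypothetical sketch of a simultaneous-decoding loop (assumption, not the
# paper's exact method): a simple agent that WRITEs once it has read k source
# tokens ahead of the target, standing in for the tunable segmentation agent.

def average_proportion(read_counts, src_len):
    """AP (Cho and Esipova, 2016): mean fraction of the source that had been
    read when each target token was emitted."""
    return sum(read_counts) / (len(read_counts) * src_len)

def simultaneous_decode(src_tokens, translate_prefix, k=2):
    """Interleave READ/WRITE actions. `translate_prefix(prefix, tgt_len)` is a
    stand-in for the NMT decoder run over a dynamically rebuilt encoder and
    attention; it returns the next target token or None when finished."""
    read, hyp, read_counts = 0, [], []
    while True:
        if read < min(len(src_tokens), len(hyp) + k):
            read += 1                                   # READ one source token
        else:
            tok = translate_prefix(src_tokens[:read], len(hyp))
            if tok is None:                             # translation finished
                break
            hyp.append(tok)                             # WRITE one target token
            read_counts.append(read)                    # source read so far
    return hyp, average_proportion(read_counts, len(src_tokens))

# Usage with a toy "translator" that just uppercases the source:
def toy_translate(prefix, tgt_len):
    src = ["das", "ist", "gut"]
    return src[tgt_len].upper() if tgt_len < len(src) else None

hyp, ap = simultaneous_decode(["das", "ist", "gut"], toy_translate, k=2)
```

With `k=2` the agent reads two tokens before the first write, so AP here is 8/9; a larger `k` raises AP (higher latency) in exchange for more source context per target token, which is the trade-off the tunable agent navigates.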
