Global-Context Neural Machine Translation through Target-Side Attentive Residual Connections

09/14/2017
by   Lesly Miculicich Werlen, et al.

Neural sequence-to-sequence models achieve remarkable performance not only in machine translation (MT) but also in other language processing tasks. One reason for their effectiveness is the ability of the decoder to capture contextual information through its recurrent layer. However, this sequential modeling over-emphasizes the local context, i.e., the previously translated word in the case of MT. As a result, the model ignores important information from the global context of the translation. In this paper, we address this limitation by introducing attentive residual connections from the previously translated words to the output of the decoder, which enable the learning of longer-range dependencies between words. The proposed model can emphasize any of the previously translated words, as opposed to only the last one, gaining access to the global context of the translated text. The model outperforms strong neural MT baselines on three language pairs, as well as a neural language modeling baseline. Analysis of the attention learned by the decoder confirms that it emphasizes a wide context, and reveals a resemblance to syntactic-like structures.
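The core mechanism described above can be illustrated with a minimal sketch: at each decoding step, attention weights are computed over the embeddings of all previously translated words, and the resulting context vector is added as a residual term to the decoder output. This is a hypothetical NumPy illustration of the idea, not the paper's actual implementation; all function and variable names are assumptions.

```python
import numpy as np

def attentive_residual(decoder_output, prev_word_embeds):
    """Sketch of an attentive residual connection (hypothetical names).

    decoder_output:   (d,) decoder hidden output at the current step
    prev_word_embeds: (t, d) embeddings of the t previously translated words
    Returns the residually augmented output and the attention weights.
    """
    # Dot-product attention scores of the current output against each
    # previously translated word.
    scores = prev_word_embeds @ decoder_output            # shape (t,)
    # Softmax over the translation history (numerically stabilized).
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Global target-side context: weighted sum of previous word embeddings.
    context = weights @ prev_word_embeds                  # shape (d,)
    # Residual connection: the model can attend to ANY previous word,
    # not just the last one.
    return decoder_output + context, weights
```

Because the attention can place its mass on any earlier target word, the residual term carries global rather than strictly local context into the output layer.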
