You May Not Need Attention

10/31/2018
by Ofir Press, et al.

In NMT, how far can we get without attention and without separate encoding and decoding? To answer that question, we introduce a recurrent neural translation model that does not use attention and does not have a separate encoder and decoder. Our eager translation model is low-latency, writing target tokens as soon as it reads the first source token, and uses constant memory during decoding. It performs on par with the standard attention-based model of Bahdanau et al. (2014), and better on long sentences.
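
To make the "eager" setup concrete, here is a minimal, hypothetical sketch of such a model in PyTorch: a single LSTM reads one source token per step and immediately emits a target token, with no attention layer and no separate encoder/decoder stacks. The class and parameter names (EagerTranslator, emb_dim, hidden_dim) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class EagerTranslator(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, emb_dim=256, hidden_dim=512):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb_dim)
        # One recurrent stack serves as both "encoder" and "decoder".
        self.rnn = nn.LSTM(2 * emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, tgt_vocab)

    def forward(self, src_tokens, prev_tgt_tokens):
        # src_tokens, prev_tgt_tokens: (batch, seq_len), aligned step by step.
        # prev_tgt_tokens[:, i] is the target token emitted at step i - 1
        # (a padding symbol where no target token has been written yet).
        x = torch.cat([self.src_emb(src_tokens),
                       self.tgt_emb(prev_tgt_tokens)], dim=-1)
        h, _ = self.rnn(x)      # hidden state is fixed-size at every step
        return self.out(h)      # logits for the token written at each step
```

At inference the same computation runs one step at a time, carrying only the fixed-size LSTM hidden and cell vectors between steps, which is what gives the constant-memory decoding described in the abstract.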
