Faster Transformer Decoding: N-gram Masked Self-Attention

01/14/2020
by Ciprian Chelba, et al.

Motivated by the fact that most of the information relevant to the prediction of target tokens is drawn from the source sentence S = s_1, ..., s_S, we propose truncating the target-side window used for computing self-attention by making an N-gram assumption. Experiments on WMT EnDe and EnFr data sets show that the N-gram masked self-attention model loses very little in BLEU score for N values in the range 4 to 8, depending on the task.
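
The core change relative to standard causal self-attention is the mask: instead of letting target position i attend to all earlier positions, only the previous N positions remain visible, which bounds the per-step decoding cost. Below is a minimal NumPy sketch of such a banded causal mask and its use in single-head attention; the function names and the identity Q/K/V projections are our own illustrative simplifications, not the paper's implementation.

```python
import numpy as np

def ngram_causal_mask(seq_len: int, n: int) -> np.ndarray:
    """Boolean mask: position i may attend only to the previous n
    target positions i-n+1 .. i (a banded causal mask)."""
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (j > i - n)

def ngram_masked_self_attention(x: np.ndarray, n: int) -> np.ndarray:
    """Single-head scaled dot-product self-attention restricted to an
    N-gram window. x has shape (seq_len, d_model); learned Q/K/V
    projections are omitted here for brevity (Q = K = V = x)."""
    seq_len, d = x.shape
    scores = x @ x.T / np.sqrt(d)
    mask = ngram_causal_mask(seq_len, n)
    scores = np.where(mask, scores, -1e9)  # block positions outside the window
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x
```

Because each row of the mask has at most N true entries, a decoder using this attention only needs to cache the last N target states per layer, rather than the full history, which is where the decoding speedup comes from.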
