Efficient Inference For Neural Machine Translation

10/06/2020
by Yi-Te Hsu, et al.

Large Transformer models have achieved state-of-the-art results in neural machine translation and have become standard in the field. In this work, we look for the optimal combination of known techniques to optimize inference speed without sacrificing translation quality. We conduct an empirical study that stacks various approaches and demonstrates that combining the replacement of decoder self-attention with simplified recurrent units, a deep-encoder shallow-decoder architecture, and multi-head attention pruning can achieve a speedup of up to 109% while reducing the number of parameters by 25% and maintaining translation quality in terms of BLEU.
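To make the first of these techniques concrete, below is a minimal sketch of a simplified recurrent unit that could replace decoder self-attention, written in PyTorch. It follows the commonly cited SSRU-style formulation (a forget gate plus a ReLU output, with no reset or output gate); the class name, layer shapes, and exact gating variant are illustrative assumptions and may differ from the configuration used in the paper.

```python
import torch
import torch.nn as nn


class SimplifiedRecurrentUnit(nn.Module):
    """Illustrative SSRU-style layer: a lightweight recurrence intended as a
    drop-in replacement for decoder self-attention. Hypothetical sketch, not
    the paper's exact implementation."""

    def __init__(self, d_model: int):
        super().__init__()
        self.w = nn.Linear(d_model, d_model)    # input projection
        self.w_f = nn.Linear(d_model, d_model)  # forget-gate projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); recur over the target sequence.
        batch, seq_len, d_model = x.shape
        c = x.new_zeros(batch, d_model)
        outputs = []
        for t in range(seq_len):
            x_t = x[:, t, :]
            f = torch.sigmoid(self.w_f(x_t))      # forget gate
            c = f * c + (1.0 - f) * self.w(x_t)   # cell-state update
            outputs.append(torch.relu(c))         # ReLU output, no output gate
        return torch.stack(outputs, dim=1)


# Example usage: same input/output shape as a self-attention sublayer.
layer = SimplifiedRecurrentUnit(d_model=512)
y = layer(torch.randn(2, 10, 512))  # -> (2, 10, 512)
```

Because the recurrence carries only a single cell state per position, decoding keeps O(1) state per step instead of attending over all previous target tokens, which is where the inference-speed benefit comes from.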
