Marian: Cost-effective High-Quality Neural Machine Translation in C++

05/30/2018
by Marcin Junczys-Dowmunt, et al.

This paper describes the submissions of the "Marian" team to the WNMT 2018 shared task. We investigate combinations of teacher-student training, low-precision matrix products, auto-tuning and other methods to optimize the Transformer model on GPU and CPU. By further integrating these methods with the averaging attention network, a recently introduced faster Transformer variant, we create a number of high-quality, high-performance models on the GPU and CPU, dominating the Pareto frontier for this shared task.
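The low-precision matrix products mentioned in the abstract replace float32 GEMMs with integer arithmetic to speed up inference, especially on CPU. The sketch below is a minimal illustration of the general idea, not Marian's actual kernels (which use vectorized low-precision routines): it quantizes two float matrices to int8 with per-matrix scales, multiplies them with int32 accumulation, and dequantizes the result. The helper names quantize and gemmInt8 are hypothetical.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <iostream>
#include <vector>

// Quantize a float matrix to int8 using a single per-matrix scale,
// chosen so the largest absolute value maps to 127. Returns the scale.
static float quantize(const std::vector<float>& in, std::vector<int8_t>& out) {
  float maxAbs = 0.f;
  for (float v : in) maxAbs = std::max(maxAbs, std::fabs(v));
  float scale = maxAbs > 0.f ? 127.f / maxAbs : 1.f;
  out.resize(in.size());
  for (size_t i = 0; i < in.size(); ++i)
    out[i] = static_cast<int8_t>(std::lround(in[i] * scale));
  return scale;
}

// C = A (m x k) * B (k x n): int8 inputs, int32 accumulation,
// then dequantization by the product of the two scales.
static void gemmInt8(const std::vector<int8_t>& A, const std::vector<int8_t>& B,
                     std::vector<float>& C, int m, int k, int n,
                     float scaleA, float scaleB) {
  C.assign(static_cast<size_t>(m) * n, 0.f);
  for (int i = 0; i < m; ++i)
    for (int j = 0; j < n; ++j) {
      int32_t acc = 0;
      for (int p = 0; p < k; ++p)
        acc += static_cast<int32_t>(A[i * k + p]) * B[p * n + j];
      C[i * n + j] = acc / (scaleA * scaleB);
    }
}

int main() {
  // Toy 2x3 * 3x2 example; the result approximates the float32 product.
  std::vector<float> A = {0.5f, -1.2f, 0.3f, 2.0f, 0.1f, -0.7f};
  std::vector<float> B = {1.0f, 0.4f, -0.6f, 0.9f, 0.2f, -1.5f};
  std::vector<int8_t> qA, qB;
  float sA = quantize(A, qA), sB = quantize(B, qB);
  std::vector<float> C;
  gemmInt8(qA, qB, C, 2, 3, 2, sA, sB);
  for (float v : C) std::cout << v << ' ';
  std::cout << '\n';
}
```

The quantization error here is bounded by the coarseness of the 8-bit grid; production systems mitigate it with finer-grained (per-row or per-column) scales and vectorized integer instructions, trading a small accuracy loss for large throughput gains.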
