Online Learning Meets Machine Translation Evaluation: Finding the Best Systems with the Least Human Effort

05/27/2021
by Vânia Mendonça, et al.

In Machine Translation, assessing the quality of a large number of automatic translations can be challenging. Automatic metrics are not reliable when it comes to high-performing systems. In addition, resorting to human evaluators can be expensive, especially when evaluating multiple systems. To overcome the latter challenge, we propose a novel application of online learning that, given an ensemble of Machine Translation systems, dynamically converges to the best systems by taking advantage of the available human feedback. Our experiments on WMT'19 datasets show that our online approach quickly converges to the top-3 ranked systems for the language pairs considered, despite the lack of human feedback for many translations.
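The details of the method are in the full paper; purely as a rough illustration of the idea described above, the sketch below implements a generic EXP3-style bandit that picks one system per source sentence and updates its weights only when a human score is available for that system's translation. The function name, the `gamma` parameter, and the feedback format are hypothetical and not taken from the paper.

```python
import math
import random


def exp3_rank_systems(num_systems, feedback_stream, gamma=0.1):
    """Bandit-style online selection over an ensemble of MT systems.

    feedback_stream yields, for each source sentence, a dict mapping
    system index -> human quality score in [0, 1]; missing entries mean
    no human feedback was collected for that system's translation.
    """
    weights = [1.0] * num_systems
    for feedback in feedback_stream:
        total = sum(weights)
        # Mix the exponential weights with a uniform distribution (exploration).
        probs = [(1 - gamma) * w / total + gamma / num_systems for w in weights]
        chosen = random.choices(range(num_systems), weights=probs)[0]
        # Reward is the human score if available; otherwise no update is made.
        reward = feedback.get(chosen)
        if reward is not None:
            # Importance-weighted update typical of EXP3.
            estimated = reward / probs[chosen]
            weights[chosen] *= math.exp(gamma * estimated / num_systems)
            # Rescale to keep the weights numerically stable.
            m = max(weights)
            weights = [w / m for w in weights]
    # Systems with larger weights are the ones the algorithm converged to.
    return sorted(range(num_systems), key=lambda k: weights[k], reverse=True)


# Toy usage: 4 systems, 1000 sentences, human feedback for only 2 systems each.
stream = ({k: random.random() for k in random.sample(range(4), 2)}
          for _ in range(1000))
print(exp3_rank_systems(4, stream))
```

The importance-weighted update is what lets the algorithm learn from sparse, partial feedback: each observed human score is divided by the probability of having queried that system, so systems that are rarely selected are not unfairly penalized.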
