Incremental Adaptation Strategies for Neural Network Language Models

12/20/2014
by Aram Ter-Sarkisov, et al.

It is now widely acknowledged that neural network language models outperform back-off language models in applications such as speech recognition and statistical machine translation. However, training these models on large amounts of data can take several days. We present efficient techniques to adapt a neural network language model to new data. Instead of training a completely new model or relying on mixture approaches, we propose two new methods: continued training on resampled data, or insertion of adaptation layers. We present experimental results in a computer-aided translation (CAT) environment where the post-edits of professional translators are used to improve an SMT system. Both methods are very fast and achieve significant improvements without overfitting the small adaptation data.
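The abstract names two adaptation techniques. The first, continued training on resampled data, amounts to running a few extra training epochs on the adaptation sentences mixed with a subsample of the original corpus. The sketch below illustrates the second idea, inserting a trainable adaptation layer into a frozen pretrained feed-forward NNLM; the abstract does not specify the architecture, so the split into embedding, hidden, and output stages, and every class and parameter name here, are illustrative assumptions rather than the paper's actual implementation.

```python
import torch
import torch.nn as nn

class AdaptedNNLM(nn.Module):
    """Hypothetical wrapper: a pretrained feed-forward NNLM, split into
    embedding, hidden, and output stages, with a new trainable linear
    adaptation layer inserted between the hidden and output stages."""

    def __init__(self, embed: nn.Module, hidden: nn.Module,
                 output: nn.Module, hidden_dim: int):
        super().__init__()
        self.embed, self.hidden, self.output = embed, hidden, output
        # Freeze the pretrained parameters so the small adaptation set
        # cannot overwrite what was learned from the large corpus.
        for stage in (self.embed, self.hidden, self.output):
            for p in stage.parameters():
                p.requires_grad = False
        # New linear adaptation layer, initialized to the identity so
        # the adapted model starts out equivalent to the original one.
        self.adapt = nn.Linear(hidden_dim, hidden_dim)
        nn.init.eye_(self.adapt.weight)
        nn.init.zeros_(self.adapt.bias)

    def forward(self, context: torch.Tensor) -> torch.Tensor:
        h = self.hidden(self.embed(context))
        h = self.adapt(h)       # the only trainable parameters
        return self.output(h)   # unnormalized next-word scores
```

Under these assumptions, only the adaptation layer receives gradients, so a few epochs on the small post-edit data are cheap, and the frozen stages limit overfitting, which is consistent with the behaviour the abstract reports.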
