Examination and Extension of Strategies for Improving Personalized Language Modeling via Interpolation

06/09/2020
by Liqun Shao, et al.

In this paper, we detail novel strategies for interpolating personalized language models, along with methods for handling out-of-vocabulary (OOV) tokens, to improve personalized language modeling. Using publicly available data from Reddit, we demonstrate improvements in offline metrics at the user level by interpolating a global LSTM-based authoring model with a user-personalized n-gram model. By optimizing this approach with a back-off to uniform OOV penalty and the interpolation coefficient, we observe that over 80% of users receive a lift in perplexity, with an average perplexity lift of 5.2% per user. In doing this research, we extend previous work in building NLIs and improve the robustness of metrics for downstream tasks.
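The interpolation scheme the abstract describes can be made concrete with a short sketch: each token's probability is a convex combination of the two models' predictions, backing off to a uniform 1/|V| penalty when a token is OOV for both. The dict-based model interfaces, toy probabilities, and vocabulary size below are illustrative assumptions, not the paper's implementation.

```python
import math

# Toy stand-ins for the two models: each maps a token to P(token | history).
# In the paper's setting these would be a global LSTM authoring model and a
# user-personalized n-gram model; the flat dict interface is an assumption.
GLOBAL_LM = {"the": 0.20, "cat": 0.05, "sat": 0.03}
USER_LM = {"the": 0.15, "cat": 0.10, "gpu": 0.08}
VOCAB_SIZE = 10_000  # shared vocabulary size, illustrative only

def interpolated_prob(token, lam):
    """P(token) = lam * P_global + (1 - lam) * P_user, backing off to a
    uniform 1/|V| penalty when the token is OOV for both models."""
    p = lam * GLOBAL_LM.get(token, 0.0) + (1.0 - lam) * USER_LM.get(token, 0.0)
    return p if p > 0.0 else 1.0 / VOCAB_SIZE

def perplexity(tokens, lam):
    """Perplexity of a user's token sequence under the interpolated model."""
    log_prob = sum(math.log(interpolated_prob(t, lam)) for t in tokens)
    return math.exp(-log_prob / len(tokens))

if __name__ == "__main__":
    tokens = ["the", "cat", "sat", "gpu", "zzz"]  # "zzz" exercises the OOV path
    for lam in (0.3, 0.5, 0.8):
        print(f"lambda={lam}: perplexity={perplexity(tokens, lam):.1f}")
```

Sweeping the coefficient lam per user, as in the loop above, mirrors how the interpolation coefficient would be tuned against held-out user text to obtain the per-user perplexity lifts the abstract reports.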
