On Offline Evaluation of Recommender Systems

10/21/2020
by Yitong Ji, et al.

In academic research, recommender models are often evaluated offline on benchmark datasets. The offline dataset is first split into training and test instances. All training instances are then modeled in a user-item interaction matrix, and supervised learning models are trained on it. Many such offline evaluations ignore the global timeline in the data, which leads to "data leakage": a model learns from future data to predict a current value, making the evaluation unrealistic. In this paper, we evaluate the impact of data leakage using two widely adopted baseline models, BPR and NeuMF, on the MovieLens dataset. We show that access to different amounts of future data may improve or deteriorate a model's recommendation accuracy. That is, ignoring the global timeline in offline evaluation renders the reported performance of recommendation models incomparable. Our experiments also show that more historical data in the training set does not necessarily lead to better recommendation accuracy. We share our understanding of these observations and highlight the importance of preserving the global timeline. We also call for a revisit of the offline evaluation of recommender systems.
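To make the leakage concrete, below is a minimal Python sketch contrasting the common random split, which ignores the global timeline and can leak future interactions into training, with a split that preserves it. The DataFrame layout, the column names ("user_id", "item_id", "timestamp"), and the cutoff quantile are illustrative assumptions, not the paper's exact protocol.

import pandas as pd

def temporal_split(df: pd.DataFrame, cutoff_quantile: float = 0.8):
    # Leakage-free split: choose one global timestamp cutoff so that
    # every training interaction precedes every test interaction.
    # Assumes a numeric Unix timestamp column, as in MovieLens;
    # the 0.8 cutoff quantile is an illustrative assumption.
    cutoff = df["timestamp"].quantile(cutoff_quantile)
    train = df[df["timestamp"] <= cutoff]
    test = df[df["timestamp"] > cutoff]
    return train, test

def random_split(df: pd.DataFrame, test_frac: float = 0.2, seed: int = 42):
    # Common but leakage-prone split: sampling test instances uniformly
    # at random ignores the global timeline, so the model may be trained
    # on interactions that occur after the ones it is tested on.
    test = df.sample(frac=test_frac, random_state=seed)
    train = df.drop(test.index)
    return train, test

With temporal_split, the model's training data is strictly earlier than its test data, which mirrors how a deployed recommender actually operates; with random_split, per-user future interactions routinely end up in the training matrix.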
