WMRB: Learning to Rank in a Scalable Batch Training Approach

11/10/2017
by Kuan Liu, et al.

We propose a new learning to rank algorithm, named Weighted Margin-Rank Batch loss (WMRB), to extend the popular Weighted Approximate-Rank Pairwise loss (WARP). WMRB uses a new rank estimator and an efficient batch training algorithm. The approach allows more accurate item rank approximation and explicit utilization of parallel computation to accelerate training. In three item recommendation tasks, WMRB consistently outperforms WARP and other baselines. Moreover, WMRB shows clear time efficiency advantages as data scale increases.
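To make the idea concrete, here is a minimal sketch of a WMRB-style batch loss in NumPy. The exact estimator and weighting in the paper may differ; this illustrates the general recipe of estimating an item's rank from margin violations over a sampled batch of negatives, then applying a log weighting (as in WARP-style losses) to emphasize top-of-list errors. The function name, margin value, and use of `log1p` are illustrative assumptions, not the paper's definitive formulation.

```python
import numpy as np

def wmrb_style_loss(pos_score, neg_scores, margin=1.0):
    """Sketch of a weighted margin-rank batch loss (illustrative, not
    the paper's exact formulation).

    pos_score:  model score of the positive item (scalar)
    neg_scores: scores of a sampled batch of negative items (1-D array)
    """
    # Margin-rank estimate: total hinge violation over the negative batch.
    # Each negative scoring within `margin` of the positive contributes.
    violations = np.maximum(0.0, margin - pos_score + neg_scores)
    margin_rank = violations.sum()
    # Log weighting downweights losses once the positive is already
    # ranked well above most negatives, focusing updates on top ranks.
    return np.log1p(margin_rank)

# Example: a well-separated positive incurs zero loss.
loss_good = wmrb_style_loss(2.0, np.array([0.0, 0.5]))   # no violations
loss_bad = wmrb_style_loss(0.0, np.array([0.5, 0.8]))    # two violations
```

Because the estimate is a sum over a whole sampled batch rather than a sequential search for a single violating negative (as in WARP's sampling loop), it vectorizes naturally, which is what enables the parallel batch training the abstract describes.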
