Knowledge Base Completion: Baseline strikes back (Again)

05/02/2020
by Prachi Jain, et al.

Knowledge Base Completion (KBC) has been a very active research area recently, and multiplicative models have generally outperformed additive and other deep learning approaches such as GNN-, CNN-, and path-based models. Several recent KBC papers propose architectural changes, new training methods, or even new problem reformulations, and evaluate them on standard benchmark datasets: FB15k, FB15k-237, WN18, WN18RR, and YAGO3-10. Some recent work has also shown that 1-N scoring can speed up both training and evaluation. In this paper, we show that simply applying this training regime to a basic model such as ComplEx yields near-SOTA performance on all of these datasets; we call this model ComplEx-V2. We also highlight how various multiplicative methods recently proposed in the literature benefit from this trick and become nearly indistinguishable in performance on most datasets. In light of these findings, this paper calls for a reassessment of their individual value.
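To make the 1-N scoring idea concrete, here is a minimal PyTorch sketch of a ComplEx scorer that scores a (subject, relation) query against every entity at once, rather than against sampled negatives. This is an illustrative reconstruction, not the authors' code; the class, parameter names, and dataset-like sizes below are assumptions for the example.

```python
import torch
import torch.nn as nn

class ComplEx(nn.Module):
    """ComplEx with 1-N scoring: each query is scored against all entities."""

    def __init__(self, num_entities: int, num_relations: int, dim: int):
        super().__init__()
        # Real and imaginary parts of entity and relation embeddings.
        self.ent_re = nn.Embedding(num_entities, dim)
        self.ent_im = nn.Embedding(num_entities, dim)
        self.rel_re = nn.Embedding(num_relations, dim)
        self.rel_im = nn.Embedding(num_relations, dim)

    def forward(self, subj: torch.Tensor, rel: torch.Tensor) -> torch.Tensor:
        # ComplEx score Re(<e_s, w_r, conj(e_o)>), expanded into real arithmetic
        # and computed against the full entity table in one matrix product.
        s_re, s_im = self.ent_re(subj), self.ent_im(subj)
        r_re, r_im = self.rel_re(rel), self.rel_im(rel)
        all_re, all_im = self.ent_re.weight, self.ent_im.weight
        # Resulting shape: (batch_size, num_entities)
        scores = (s_re * r_re - s_im * r_im) @ all_re.t() \
               + (s_re * r_im + s_im * r_re) @ all_im.t()
        return scores

# Example usage (sizes roughly matching FB15k-237, used here only for illustration):
model = ComplEx(num_entities=14541, num_relations=237, dim=200)
subj = torch.tensor([0, 1])
rel = torch.tensor([5, 7])
logits = model(subj, rel)                      # (2, 14541) scores
targets = torch.zeros_like(logits)             # multi-hot gold objects per query
loss = nn.BCEWithLogitsLoss()(logits, targets) # 1-N training objective
```

The key point of the 1-N regime is that the loss is computed over all candidate objects for each (subject, relation) query, which both removes negative sampling and lets evaluation reuse the same full-entity scoring pass.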
