Are Graph Neural Networks Really Helpful for Knowledge Graph Completion?

05/21/2022
by Juanhui Li, et al.

Knowledge graphs (KGs) facilitate a wide variety of applications through their ability to store relational knowledge applicable to many areas. Despite great efforts invested in their creation and maintenance, even the largest KGs are far from complete. Hence, KG completion (KGC) has become one of the most crucial tasks for KG research. Recently, considerable literature in this space has centered on the use of Graph Neural Networks (GNNs) to learn powerful embeddings that leverage the topological structure of KGs. Specifically, since GNNs are commonly designed for simple, homogeneous, uni-relational graphs, dedicated efforts have been made to extend them to the KG setting, where entities are connected by diverse, multi-relational edges, by designing more complex neighbor-aggregation schemes (widely considered crucial to GNN performance) that appropriately leverage multi-relational information. The success of these methods is naturally attributed to the use of GNNs over simpler multi-layer perceptron (MLP) models, owing to their additional aggregation functionality. In this work, we find that, surprisingly, simple MLP models achieve performance comparable to GNNs, suggesting that aggregation may not be as crucial as previously believed. With further exploration, we show that careful scoring function and loss function design has a much stronger influence on KGC model performance, and that aggregation is not practically required. This suggests that prior work has conflated the effects of scoring function design, loss function design, and aggregation; it offers promising insights into the scalability of today's state-of-the-art KGC methods, and motivates more careful attention to aggregation designs better suited to KGC in the future.
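To make the contrast concrete, the sketch below shows what an aggregation-free KGC model of the kind the abstract describes might look like: entity and relation embeddings passed through an MLP and scored with a DistMult-style triple score. This is a minimal illustrative sketch, not the authors' actual code; the class name `KGCMLP`, the embedding dimension, and the binary cross-entropy loss are assumptions chosen for clarity.

```python
import torch
import torch.nn as nn

class KGCMLP(nn.Module):
    """Aggregation-free KGC encoder (hypothetical sketch):
    entity/relation embeddings + an MLP, no neighbor aggregation."""

    def __init__(self, n_entities, n_relations, dim=200):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        # The MLP replaces the GNN's message-passing/aggregation step.
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def score(self, heads, rels, tails):
        # DistMult-style scoring: <h, r, t> = sum(h * r * t).
        h = self.mlp(self.ent(heads))
        r = self.rel(rels)
        t = self.mlp(self.ent(tails))
        return (h * r * t).sum(dim=-1)

# Usage: score a positive and a corrupted triple, train with BCE loss.
model = KGCMLP(n_entities=100, n_relations=10)
triples = torch.tensor([[0, 1, 2], [3, 4, 5]])  # (head, relation, tail)
scores = model.score(triples[:, 0], triples[:, 1], triples[:, 2])
labels = torch.tensor([1.0, 0.0])  # 1 = true triple, 0 = corrupted
loss = nn.functional.binary_cross_entropy_with_logits(scores, labels)
```

A GNN-based variant would insert a relation-aware neighbor-aggregation step before the scoring function; the paper's point is that, with the scoring and loss choices held fixed, removing that step costs little.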
