Scaling R-GCN Training with Graph Summarization
Training Relational Graph Convolutional Networks (R-GCNs) does not scale well with the size of the graph: for real-world graphs, the gradient information that must be stored during training often exceeds the memory available on most GPUs. In this work, we experiment with graph summarization techniques to compress the graph and thereby reduce the memory required. After training the R-GCN on the graph summary, we transfer the learned weights back to the original graph and perform inference on it. We obtain reasonable results on the AIFB, MUTAG and AM datasets. This supports the relevance of graph summarization methods, whose smaller graph representations reduce the computational overhead of machine learning models applied to large Knowledge Graphs. However, further experiments are needed to evaluate whether this also holds for very large graphs.
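The following is a minimal sketch of the train-on-summary, infer-on-original workflow the abstract describes, using PyTorch Geometric's RGCNConv. The model class, helper functions, feature dimension, and the `summary`/`original` data objects are illustrative assumptions, not the authors' actual pipeline; the key point it illustrates is that R-GCN weights depend on the feature dimension and number of relations rather than the number of nodes, so parameters learned on a smaller summary graph can be reused on the full graph.

```python
# Sketch only: assumes summary/original are torch_geometric Data objects with
# x, edge_index, edge_type, y and train_mask attributes (an assumption).
import torch
import torch.nn.functional as F
from torch_geometric.nn import RGCNConv


class RGCN(torch.nn.Module):
    """Two-layer R-GCN for node classification."""

    def __init__(self, in_dim, hidden_dim, num_classes, num_relations):
        super().__init__()
        self.conv1 = RGCNConv(in_dim, hidden_dim, num_relations)
        self.conv2 = RGCNConv(hidden_dim, num_classes, num_relations)

    def forward(self, x, edge_index, edge_type):
        h = F.relu(self.conv1(x, edge_index, edge_type))
        return self.conv2(h, edge_index, edge_type)


def train_on_summary(model, summary, epochs=50, lr=0.01):
    # Train only on the (much smaller) summary graph, so the stored
    # gradient information stays within GPU memory.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        out = model(summary.x, summary.edge_index, summary.edge_type)
        loss = F.cross_entropy(out[summary.train_mask],
                               summary.y[summary.train_mask])
        loss.backward()
        opt.step()
    return model


@torch.no_grad()
def infer_on_original(trained_model, original):
    # Weight transfer: the trained parameters are node-count-agnostic,
    # so the same model can be applied directly to the original graph.
    out = trained_model(original.x, original.edge_index, original.edge_type)
    return out.argmax(dim=-1)
```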