Sparse Graph Attention Networks

12/02/2019
by Yang Ye, et al.

Graph Neural Networks (GNNs) have proved to be an effective representation learning framework for graph-structured data, and have achieved state-of-the-art performance on a wide range of practical tasks, such as node classification, link prediction and graph classification. Among the variants of GNNs, Graph Attention Networks (GATs) learn to assign dense attention coefficients over all neighbors of a node for feature aggregation, and improve the performance of many graph learning tasks. However, real-world graphs are often very large and noisy, and GATs are prone to overfitting if not regularized properly. In this paper, we propose Sparse Graph Attention Networks (SGATs) that learn sparse attention coefficients under an L_0-norm regularization; the learned sparse attentions are then used for all GNN layers, resulting in an edge-sparsified graph. By doing so, we can identify noisy/insignificant edges and thus focus computation on the more important portions of a graph. Extensive experiments on synthetic and real-world graph learning benchmarks demonstrate the superior performance of SGATs. In particular, SGATs can remove about 50%-80% of the edges from large graphs, such as PPI and Reddit, while retaining similar classification accuracies. Furthermore, the removed edges can be interpreted intuitively and quantitatively. To the best of our knowledge, this is the first graph learning algorithm that sparsifies graphs for the purpose of identifying important relationships between nodes and for robust training.
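The L_0-norm itself is non-differentiable, so sparsity of this kind is typically learned through a smoothed surrogate such as hard-concrete gates on the edges. The sketch below illustrates that general idea in NumPy: each edge gets a learnable parameter `log_alpha`, a stochastic gate sampled from a clipped concrete distribution masks the edge, and a differentiable penalty approximates the expected number of surviving edges. The function names, hyperparameters (`beta`, `gamma`, `zeta`), and the NumPy framing are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def hard_concrete_gate(log_alpha, rng, beta=2/3, gamma=-0.1, zeta=1.1):
    """Sample a hard-concrete gate z in [0, 1] for each edge.

    The concrete (relaxed Bernoulli) sample s is stretched to the
    interval (gamma, zeta) and clipped, so gates can be exactly 0 or 1.
    All hyperparameter values here are illustrative defaults.
    """
    u = rng.uniform(1e-6, 1 - 1e-6, size=np.shape(log_alpha))
    s = 1.0 / (1.0 + np.exp(-(np.log(u) - np.log(1 - u) + log_alpha) / beta))
    s_bar = s * (zeta - gamma) + gamma
    return np.clip(s_bar, 0.0, 1.0)

def l0_penalty(log_alpha, beta=2/3, gamma=-0.1, zeta=1.1):
    """Differentiable surrogate for the L_0 norm: per-edge probability
    that the gate is non-zero; summing it gives the expected edge count."""
    return 1.0 / (1.0 + np.exp(-(log_alpha - beta * np.log(-gamma / zeta))))

# Hypothetical usage: mask attention coefficients of a node's neighbors.
rng = np.random.default_rng(0)
log_alpha = np.array([4.0, -4.0, 0.5])   # one parameter per edge
gates = hard_concrete_gate(log_alpha, rng)
attention = np.array([0.5, 0.3, 0.2]) * gates  # edges with gate 0 are dropped
```

During training, the sum of `l0_penalty` over all edges would be added to the task loss, pushing `log_alpha` negative for uninformative edges; at test time, edges whose gates collapse to zero are removed, which is how an edge-sparsified graph of the kind the abstract describes emerges.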


