Rethinking Graph Lottery Tickets: Graph Sparsity Matters

05/03/2023
by   Bo Hui, et al.

The Lottery Ticket Hypothesis (LTH) claims the existence of a winning ticket (i.e., a properly pruned sub-network together with the original weight initialization) that can achieve performance competitive with the original dense network. A recent work, UGS, extended LTH to prune graph neural networks (GNNs) for effective acceleration of GNN inference. UGS simultaneously prunes the graph adjacency matrix and the model weights using the same masking mechanism, but since the adjacency matrix and the weight matrices play very different roles, we find that their sparsifications lead to different performance characteristics. Specifically, the performance of a sparsified GNN degrades significantly once the graph sparsity exceeds a certain level. We therefore propose two techniques to improve GNN performance under high graph sparsity. First, UGS prunes the adjacency matrix with a loss formulation that does not properly involve all elements of the adjacency matrix; we instead add an auxiliary loss head that better guides edge pruning by involving the entire adjacency matrix. Second, by regarding unfavorable graph sparsification as an adversarial data perturbation, we formulate the pruning process as a min-max optimization problem, which improves the robustness of lottery tickets under high graph sparsity. We further investigate the question: can the "retrainable" winning ticket of a GNN also be effective for graph transfer learning? We call this the transferable graph lottery ticket (GLT) hypothesis. Extensive experiments demonstrate the superiority of our sparsification method over UGS and empirically verify the transferable GLT hypothesis.
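To make the masking mechanism and the min-max formulation concrete, here is a minimal PyTorch sketch (not the authors' code): a two-layer GCN with learnable multiplicative masks on both the adjacency matrix (kept dense for simplicity) and the weight matrices, plus one adversarial pruning step that first maximizes the loss over the adjacency mask and then minimizes it over the weights. All names (MaskedGCN, min_max_step, the mask parameters) and the sign-ascent inner loop are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedGCN(nn.Module):
    """Two-layer GCN with UGS-style multiplicative masks on both the
    adjacency matrix and the weights (dense adjacency for simplicity)."""
    def __init__(self, num_nodes, in_dim, hid_dim, out_dim):
        super().__init__()
        self.w1 = nn.Parameter(torch.randn(in_dim, hid_dim) * 0.01)
        self.w2 = nn.Parameter(torch.randn(hid_dim, out_dim) * 0.01)
        self.w1_mask = nn.Parameter(torch.ones(in_dim, hid_dim))
        self.w2_mask = nn.Parameter(torch.ones(hid_dim, out_dim))
        self.adj_mask = nn.Parameter(torch.ones(num_nodes, num_nodes))

    def forward(self, x, adj):
        a = adj * self.adj_mask                        # masked graph structure
        h = F.relu(a @ (x @ (self.w1 * self.w1_mask))) # masked weights, layer 1
        return a @ (h @ (self.w2 * self.w2_mask))      # masked weights, layer 2

def min_max_step(model, x, adj, y, weight_opt, ascent_lr=1e-2, k=3):
    # Inner maximization: nudge the adjacency mask toward the most
    # damaging sparsification (graph pruning viewed adversarially).
    for _ in range(k):
        loss = F.cross_entropy(model(x, adj), y)
        g, = torch.autograd.grad(loss, model.adj_mask)
        with torch.no_grad():
            model.adj_mask += ascent_lr * g.sign()
            model.adj_mask.clamp_(0.0, 1.0)
    # Outer minimization: train the weights and weight masks against
    # that worst-case graph.
    weight_opt.zero_grad()
    loss = F.cross_entropy(model(x, adj), y)
    loss.backward()
    weight_opt.step()
    return loss.item()
```

In this sketch the weight optimizer would be built over the weights and weight masks only, e.g. torch.optim.Adam([model.w1, model.w2, model.w1_mask, model.w2_mask]), so the inner ascent alone moves the adjacency mask. After training, the lowest-magnitude entries of each mask are zeroed to prune, and the surviving weights are rewound to their original initialization to form the ticket, following the usual LTH recipe.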

