CAP: Co-Adversarial Perturbation on Weights and Features for Improving Generalization of Graph Neural Networks

by Haotian Xue, et al.

Despite the recent advances of graph neural networks (GNNs) in modeling graph data, training GNNs on large datasets is notoriously hard due to overfitting. Adversarial training, which augments data with worst-case adversarial examples, has been widely demonstrated to improve a model's robustness against adversarial attacks and its generalization ability. However, while previous adversarial training generally focuses on protecting GNNs from malicious attacks, it remains unclear how adversarial training could improve the generalization ability of GNNs on graph analytics problems. In this paper, we investigate GNNs through the lens of the weight and feature loss landscapes, i.e., how the loss changes with respect to model weights and node features, respectively. We conclude that GNNs are prone to falling into sharp local minima in these two loss landscapes, where they exhibit poor generalization performance. To tackle this problem, we formulate the co-adversarial perturbation (CAP) optimization problem over weights and features, and design an alternating adversarial perturbation algorithm that flattens the weight and feature loss landscapes in turn. Furthermore, we divide the training process into two stages: the first conducts standard cross-entropy minimization to ensure quick convergence of the GNN model, while the second applies our alternating adversarial training to avoid falling into locally sharp minima. Extensive experiments demonstrate that CAP generally improves the generalization performance of GNNs on a variety of benchmark graph datasets.
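The abstract's two-stage scheme (plain minimization first, then alternating worst-case perturbations of weights and features) can be illustrated on a toy model. The sketch below is not the authors' implementation: it uses a plain logistic-regression "layer" in place of a GNN, and the names `rho_w`, `rho_f`, and `warmup` are assumptions introduced for illustration.

```python
import numpy as np

# Toy data: binary labels from a linear rule on 4-d "node features".
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
true_w = rng.normal(size=4)
y = (X @ true_w > 0).astype(float)

def loss_and_grads(w, X, y):
    """Binary cross-entropy plus gradients w.r.t. weights and features."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))                  # sigmoid probabilities
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    gz = (p - y) / len(y)                               # dL/d(logits)
    grad_w = X.T @ gz                                   # dL/d(weights)
    grad_X = np.outer(gz, w)                            # dL/d(features)
    return loss, grad_w, grad_X

w = np.zeros(4)
lr, rho_w, rho_f = 0.5, 0.05, 0.05                      # assumed hyperparameters
warmup = 50                                             # stage 1 length (assumed)

for epoch in range(200):
    if epoch < warmup:
        # Stage 1: standard cross-entropy minimization for quick convergence.
        _, grad_w, _ = loss_and_grads(w, X, y)
        w -= lr * grad_w
        continue
    # Stage 2: alternate worst-case perturbations on weights and features.
    if epoch % 2 == 0:
        # Ascend in weight space, then descend from the perturbed weights.
        _, g, _ = loss_and_grads(w, X, y)
        eps = rho_w * g / (np.linalg.norm(g) + 1e-12)
        _, grad_w, _ = loss_and_grads(w + eps, X, y)
    else:
        # Ascend in feature space, then descend on the perturbed features.
        _, _, gX = loss_and_grads(w, X, y)
        delta = rho_f * gX / (np.linalg.norm(gX) + 1e-12)
        _, grad_w, _ = loss_and_grads(w, X + delta, y)
    w -= lr * grad_w

final_loss, _, _ = loss_and_grads(w, X, y)
```

Each stage-2 step approximates the inner maximization of the CAP objective with a single normalized gradient-ascent step before the usual descent update, which is the standard one-step approximation used in sharpness-aware and adversarial-weight-perturbation methods.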

