Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective

06/10/2019
by   Kaidi Xu, et al.

Graph neural networks (GNNs), which apply deep neural networks to graph data, have achieved significant performance on the task of semi-supervised node classification. However, only a few works have addressed the adversarial robustness of GNNs. In this paper, we first present a novel gradient-based attack method that overcomes the difficulty of tackling discrete graph data. Compared to current adversarial attacks on GNNs, our results show that by perturbing only a small number of edges, through both addition and deletion, our optimization-based attack leads to a noticeable decrease in classification performance. Moreover, leveraging our gradient-based attack, we propose the first optimization-based adversarial training for GNNs. Our method yields higher robustness against both gradient-based and greedy attack methods without sacrificing classification accuracy on the original graph.
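An attack of this kind can be sketched as projected gradient descent over relaxed edge-flip variables: each node pair gets a continuous score in [0, 1], the score matrix is optimized by gradient ascent on the classification loss under a perturbation budget, and the result is discretized into concrete edge additions and deletions. The snippet below is a minimal sketch, not the paper's implementation: the model interface model(features, adj), the rescaling used as a budget projection (a crude stand-in for an exact l1-ball projection), and the top-k discretization are all simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def pgd_topology_attack(model, features, adj, labels, budget, steps=200, lr=0.1):
    """Hypothetical PGD-style topology attack on a dense adjacency matrix.

    `model(features, adj)` is assumed to return node-classification logits
    and to accept a (possibly fractional) adjacency matrix.
    """
    n = adj.size(0)
    # Relaxed edge-flip variables; only the upper triangle is used.
    s = torch.zeros(n, n, requires_grad=True)

    for _ in range(steps):
        # Symmetrize the scores, then toggle entries of `adj`:
        # existing edges move toward deletion, non-edges toward addition.
        p = torch.triu(s, diagonal=1)
        p = p + p.t()
        perturbed = adj + p * (1 - 2 * adj)

        loss = F.cross_entropy(model(features, perturbed), labels)
        grad, = torch.autograd.grad(loss, s)

        with torch.no_grad():
            s += lr * grad             # ascend the attack loss
            s.clamp_(0, 1)             # box constraint on each variable
            if s.sum() > budget:       # crude projection onto the budget
                s *= budget / s.sum()

    # Discretize: toggle the `budget` highest-scoring node pairs.
    with torch.no_grad():
        scores = torch.triu(s, diagonal=1).flatten()
        flips = torch.zeros_like(scores)
        flips[scores.topk(budget).indices] = 1.0
        flips = flips.view(n, n)
        flips = flips + flips.t()
        return adj + flips * (1 - 2 * adj)
```

The optimization-based adversarial training mentioned in the abstract can then be read as a min-max procedure: alternate an inner attack step of this form with ordinary training updates of the model on the perturbed graph.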


Related research

07/19/2020 · Adversarial Immunization for Improving Certifiable Robustness on Graphs
Despite achieving strong performance in the semi-supervised node classif...

09/22/2020 · Scalable Adversarial Attack on Graph Neural Networks with Alternating Direction Method of Multipliers
Graph neural networks (GNNs) have achieved high performance in analyzing...

08/29/2023 · Everything Perturbed All at Once: Enabling Differentiable Graph Attacks
As powerful tools for representation learning on graphs, graph neural ne...

09/08/2020 · Adversarial Attack on Large Scale Graph
Recent studies have shown that graph neural networks are vulnerable agai...

04/27/2022 · SSR-GNNs: Stroke-based Sketch Representation with Graph Neural Networks
This paper follows cognitive studies to investigate a graph representati...

06/27/2023 · Adversarial Training for Graph Neural Networks
Despite its success in the image domain, adversarial training does not (...

11/06/2022 · Unlearning Nonlinear Graph Classifiers in the Limited Training Data Regime
As the demand for user privacy grows, controlled data removal (machine u...
