Adversarial Attacks on Graph Neural Networks via Meta Learning

02/22/2019
by   Daniel Zügner, et al.

Deep learning models for graphs have advanced the state of the art on many tasks. Despite their recent success, little is known about their robustness. We investigate training time attacks on graph neural networks for node classification that perturb the discrete graph structure. Our core principle is to use meta-gradients to solve the bilevel problem underlying training-time attacks, essentially treating the graph as a hyperparameter to optimize. Our experiments show that small graph perturbations consistently lead to a strong decrease in performance for graph convolutional networks, and even transfer to unsupervised embeddings. Remarkably, the perturbations created by our algorithm can misguide the graph neural networks such that they perform worse than a simple baseline that ignores all relational information. Our attacks do not assume any knowledge about or access to the target classifiers.
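The bilevel structure described above (train a model in the inner problem, perturb the graph in the outer problem) can be illustrated with a minimal sketch. This is not the paper's algorithm: instead of differentiating through training with meta-gradients, it scores each candidate single-edge flip by simply retraining a tiny one-layer GCN from scratch and measuring the post-training loss, a finite-difference stand-in for the meta-gradient. The model, toy graph, and all names here are illustrative assumptions.

```python
import numpy as np

def normalize_adj(A):
    # D^{-1/2} (A + I) D^{-1/2}: the propagation matrix used by GCNs
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def cross_entropy(A, X, Y, W):
    # mean softmax cross-entropy of a one-layer GCN: logits = A_norm @ X @ W
    logits = normalize_adj(A) @ X @ W
    logits = logits - logits.max(axis=1, keepdims=True)
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -(Y * logp).sum(axis=1).mean()

def train_gcn(A, X, Y, steps=200, lr=0.5):
    # inner problem: fit the GCN weights by full-batch gradient descent
    H = normalize_adj(A) @ X
    W = np.zeros((X.shape[1], Y.shape[1]))
    for _ in range(steps):
        logits = H @ W
        logits = logits - logits.max(axis=1, keepdims=True)
        P = np.exp(logits)
        P = P / P.sum(axis=1, keepdims=True)
        W = W - lr * H.T @ (P - Y) / len(Y)
    return W

def greedy_edge_flip(A, X, Y):
    # outer problem: score every single edge flip by retraining from scratch
    # and measuring the resulting training loss; return the most damaging flip
    # (a brute-force stand-in for picking the flip with the largest meta-gradient)
    best, best_loss = None, -np.inf
    n = A.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            A2 = A.copy()
            A2[i, j] = A2[j, i] = 1.0 - A2[i, j]
            loss = cross_entropy(A2, X, Y, train_gcn(A2, X, Y))
            if loss > best_loss:
                best, best_loss = (i, j), loss
    return best, best_loss

# toy graph: two triangles, one class per triangle, one-hot node features
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    A[i, j] = A[j, i] = 1.0
X = np.eye(6)
Y = np.array([[1, 0]] * 3 + [[0, 1]] * 3, dtype=float)

clean_loss = cross_entropy(A, X, Y, train_gcn(A, X, Y))
flip, attacked_loss = greedy_edge_flip(A, X, Y)
print(f"clean loss {clean_loss:.3f}, worst flip {flip}, loss after retraining {attacked_loss:.3f}")
```

The retraining loop makes the bilevel coupling explicit: every candidate perturbation is judged by how the model trained *on the perturbed graph* performs, which is exactly what meta-gradients estimate in one backward pass instead of one retraining run per candidate.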


Related research:

- GNNGuard: Defending Graph Neural Networks against Adversarial Attacks (06/15/2020)
- Adversarial Attacks on Neural Networks for Graph Data (05/21/2018)
- Certifiable Robustness to Graph Perturbations (10/31/2019)
- Learning Theory Can (Sometimes) Explain Generalisation in Graph Neural Networks (12/07/2021)
- Graph Universal Adversarial Attacks: A Few Bad Actors Ruin Graph Learning Models (02/12/2020)
- Fisher-Bures Adversary Graph Convolutional Networks (03/11/2019)
- Graph Adversarial Training: Dynamically Regularizing Based on Graph Structure (02/20/2019)
