Adversarial Attack on Large Scale Graph

09/08/2020
by Jintang Li, et al.

Recent studies have shown that graph neural networks lack robustness and are vulnerable to adversarial perturbations, and can therefore be easily fooled. Most existing attacks on graph neural networks use gradient information to guide the attack and achieve outstanding performance. Nevertheless, their high time and space complexity makes them unmanageable for large-scale graphs. We argue that the main reason is that they must operate on the entire graph, so time and space costs grow with the scale of the data. In this work, we propose an efficient Simplified Gradient-based Attack (SGA) framework to bridge this gap. SGA causes graph neural networks to misclassify specific target nodes through a multi-stage optimized attack framework that requires only a much smaller subgraph. In addition, we present a practical metric named Degree Assortativity Change (DAC) for measuring the impact of adversarial attacks on graph data. We evaluate our attack method on four real-world datasets by attacking several commonly used graph neural networks. The experimental results show that SGA achieves significant improvements in time and memory efficiency while maintaining attack performance comparable to other state-of-the-art attack methods.
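The DAC metric named in the abstract can be read as the change in a graph's degree assortativity coefficient (the Pearson correlation between the degrees of edge endpoints) before and after an attack. A minimal, stdlib-only sketch under that reading follows; the function names `degree_assortativity` and `dac` are illustrative, not the authors' implementation:

```python
import math

def degree_assortativity(edges):
    """Pearson correlation of endpoint degrees, counting each
    undirected edge in both directions (standard definition)."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    xs, ys = [], []
    for u, v in edges:
        xs += [deg[u], deg[v]]
        ys += [deg[v], deg[u]]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def dac(edges_clean, edges_attacked):
    """Degree Assortativity Change: assortativity after minus before."""
    return degree_assortativity(edges_attacked) - degree_assortativity(edges_clean)

# Toy example: a path graph with one hypothetical adversarial edge added.
path = [(0, 1), (1, 2), (2, 3)]
attacked = path + [(0, 2)]
print(dac(path, attacked))
```

A small |DAC| would indicate that the attack leaves the graph's degree-mixing pattern largely intact, which is the kind of unnoticeability the metric is meant to capture.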

