Let Graph be the Go Board: Gradient-free Node Injection Attack for Graph Neural Networks via Reinforcement Learning

11/19/2022
by   Mingxuan Ju, et al.

Graph Neural Networks (GNNs) have drawn significant attention over the years and have been broadly applied in essential applications that demand solid robustness or strong security guarantees, such as product recommendation and user behavior modeling. In these scenarios, exploiting a GNN's vulnerabilities to degrade its performance becomes highly attractive to adversaries. Previous attackers mainly focus on structural perturbations of, or node injections into, existing graphs, guided by gradients from surrogate models. Although they deliver promising results, several limitations remain. To launch a structural perturbation attack, an adversary needs to manipulate the existing graph topology, which is impractical in most circumstances. Node injection attacks are more practical, but current approaches require training surrogate models to simulate a white-box setting, which leads to a significant performance drop when the surrogate architecture diverges from that of the actual victim model. To bridge these gaps, in this paper we study the problem of black-box node injection attack without training a potentially misleading surrogate model. Specifically, we model the node injection attack as a Markov decision process and propose Gradient-free Graph Advantage Actor Critic, namely G2A2C, a reinforcement learning framework in the fashion of advantage actor critic. By directly querying the victim model, G2A2C learns to inject highly malicious nodes with an extremely limited attack budget, while keeping the injected node features close to the original feature distribution. Through comprehensive experiments on eight acknowledged benchmark datasets with different characteristics, we demonstrate the superior performance of the proposed G2A2C over existing state-of-the-art attackers. Source code is publicly available at: https://github.com/jumxglhf/G2A2C.
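The abstract's core recipe — cast node injection as a decision process and train an advantage-actor-critic policy using only reward signals obtained by querying the victim — can be illustrated with a deliberately toy sketch. Everything below is an illustrative assumption, not G2A2C's actual design: `victim_confidence` stands in for the queried black-box GNN, the action space is simply "which existing node the injected node attaches to", and the critic is reduced to a running scalar baseline.

```python
import math
import random

random.seed(0)

# Stand-in for the black-box victim model: the attacker can only query it
# for a confidence score, never inspect its gradients (hypothetical toy).
def victim_confidence(edges):
    # Toy rule: confidence drops when edges attach to node 3.
    return max(0.0, 1.0 - 0.4 * sum(1 for e in edges if 3 in e))

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

N, LR, EPISODES = 5, 0.5, 200
logits = [0.0] * N   # actor: edge-selection policy over N candidate nodes
baseline = 0.0       # critic stand-in: running estimate of expected reward

for _ in range(EPISODES):
    probs = softmax(logits)
    # Sample which existing node the injected node connects to.
    r, acc, action = random.random(), 0.0, N - 1
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            action = i
            break
    # Reward = confidence drop caused by the injected edge (query-only).
    reward = victim_confidence([]) - victim_confidence([(action, "injected")])
    advantage = reward - baseline          # advantage-actor-critic signal
    baseline += 0.1 * (reward - baseline)  # update the baseline/critic
    # Manual policy gradient for a softmax: d log pi(a)/d logit_i = 1{i=a} - p_i
    for i in range(N):
        logits[i] += LR * advantage * ((1.0 if i == action else 0.0) - probs[i])

probs = softmax(logits)
best = max(range(N), key=lambda i: probs[i])
print(best)  # the policy should learn to attach to node 3
```

The sketch uses a gradient-free interaction with the victim (only forward queries), while the policy itself is updated with an exact hand-derived softmax gradient; the real framework additionally generates injected node features and operates on actual graph datasets.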


Related research:

- Black-box Node Injection Attack for Graph Neural Networks (02/18/2022)
- Single Node Injection Label Specificity Attack on Graph Neural Networks via Reinforcement Learning (05/04/2023)
- Single Node Injection Attack against Graph Neural Networks (08/30/2021)
- Node Injection Attacks on Graphs via Reinforcement Learning (09/14/2019)
- Bkd-FedGNN: A Benchmark for Classification Backdoor Attacks on Federated Graph Neural Network (06/17/2023)
- Multi-Instance Adversarial Attack on GNN-Based Malicious Domain Detection (08/22/2023)
