
LinkTeller: Recovering Private Edges from Graph Neural Networks via Influence Analysis

by Fan Wu, et al.

Graph-structured data have enabled several successful applications, such as recommendation systems and traffic prediction, given their rich node features and edge information. However, these high-dimensional features and high-order adjacency information are usually heterogeneous and held by different data holders in practice. Given such a vertical data partition (e.g., one data holder owns only the node features or only the edge information), the data holders must develop efficient joint training protocols rather than directly transferring data to each other, due to privacy concerns. In this paper, we focus on edge privacy and consider a training scenario where Bob, who holds the node features, first sends training node features to Alice, who owns the adjacency information. Alice then trains a graph neural network (GNN) on the joint information and releases an inference API. During inference, Bob can provide test node features and query the API to obtain predictions for the test nodes. Under this setting, we first propose a privacy attack, LinkTeller, which infers the private edge information held by Alice via influence analysis, using adversarial queries designed by Bob. We then empirically show that LinkTeller recovers a significant number of private edges, outperforming existing baselines. To further evaluate the privacy leakage, we adapt an existing algorithm for differentially private graph convolutional network (DP GCN) training and propose a new DP GCN mechanism, LapGraph. We show empirically that these DP GCN mechanisms are not always resilient against LinkTeller under mild privacy guarantees (ε > 5). Our study sheds light on future research toward designing more resilient privacy-preserving GCN models and, in the meantime, provides an in-depth understanding of the tradeoff between GCN model utility and robustness against potential privacy attacks.
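The influence analysis at the core of the attack can be illustrated with a minimal, self-contained sketch. Everything below is an illustrative assumption rather than the paper's actual implementation: the black-box `query_api` stands in for Alice's inference API (modeled here as a single toy GCN-style propagation layer), and the idea is that Bob perturbs one node's features, re-queries the API, and treats large prediction changes at other nodes as evidence of an edge.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph known only to "Alice": a 5-node path with undirected edges.
n = 5
edges = {(0, 1), (1, 2), (2, 3), (3, 4)}
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
A_hat = A + np.eye(n)                    # adjacency with self-loops
P = np.diag(1.0 / A_hat.sum(axis=1)) @ A_hat  # row-normalized propagation

W = rng.normal(size=(4, 3))              # toy model weights (hidden from Bob)

def query_api(X):
    """Black-box inference API: predictions for all nodes at once."""
    return P @ X @ W                     # one GCN-style propagation layer

X = rng.normal(size=(n, 4))              # Bob's node features
base = query_api(X)

# Influence matrix: perturb node v's features, measure the change at node u.
delta = 1e-4
infl = np.zeros((n, n))
for v in range(n):
    Xp = X.copy()
    Xp[v] += delta                       # adversarial query: nudge node v only
    diff = (query_api(Xp) - base) / delta
    infl[:, v] = np.linalg.norm(diff, axis=1)
np.fill_diagonal(infl, 0.0)              # ignore a node's influence on itself

# Bob predicts the top-|E| symmetric pairs as edges (assuming a known density).
scores = {(i, j): infl[i, j] + infl[j, i]
          for i in range(n) for j in range(i + 1, n)}
pred = set(sorted(scores, key=scores.get, reverse=True)[:len(edges)])
print(sorted(pred))  # -> [(0, 1), (1, 2), (2, 3), (3, 4)]
```

In this one-layer toy setting the influence of v on u is nonzero exactly when (u, v) is an edge, so the attack recovers the edge set perfectly; with deeper models, influence also leaks through multi-hop paths, and a LapGraph-style defense would instead train on a noised adjacency matrix so that these influence values no longer cleanly separate edges from non-edges.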




Related research:

- GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation
  "Graph Neural Networks (GNNs) are powerful models designed for graph data..."
- LPGNet: Link Private Graph Networks for Node Classification
  "Classification tasks on labeled graph-structured data have many importan..."
- Node Injection Link Stealing Attack
  "In this paper, we present a stealthy and effective attack that exposes p..."
- Heterogeneous Randomized Response for Differential Privacy in Graph Neural Networks
  "Graph neural networks (GNNs) are susceptible to privacy inference attack..."
- Differentially Private Graph Learning via Sensitivity-Bounded Personalized PageRank
  "Personalized PageRank (PPR) is a fundamental tool in unsupervised learni..."
- GraphMI: Extracting Private Graph Data from Graph Neural Networks
  "As machine learning becomes more widely used for critical applications, ..."
- Distributed Transition Systems with Tags for Privacy Analysis
  "We present a logical framework that formally models how a given private ..."