Alleviating Privacy Attacks via Causal Learning

09/27/2019
by Shruti Tople et al.

Machine learning models, especially deep neural networks, have been shown to reveal membership information about inputs in their training data. Such membership inference attacks are a serious privacy concern; for example, patients who provide medical records to build a model that detects HIV would not want their identity to be leaked. Further, we show that attack accuracy increases when the model is used to predict samples that come from a different distribution than the training set, which is often the case in real-world applications. Therefore, we propose the use of causal learning approaches, where a model learns the causal relationship between the input features and the outcome. Causal models are known to be invariant to the training distribution and hence generalize well, both to shifts between samples from the same distribution and across different distributions. First, we prove that models learned using causal structure provide stronger differential privacy guarantees than associational models under reasonable assumptions. Next, we show that causal models trained on sufficiently large samples are robust to membership inference attacks across different distributions of datasets, and that those trained on smaller sample sizes always have lower attack accuracy than corresponding associational models. Finally, we confirm our theoretical claims with an experimental evaluation on four datasets with moderately complex Bayesian networks. We observe that neural-network-based associational models exhibit attack accuracy of up to 80% under different test distributions and sample sizes, whereas causal models exhibit attack accuracy close to that of a random guess. Our results confirm the value of the generalizability of causal models in reducing susceptibility to privacy attacks.
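
To make the threat model concrete, here is a minimal sketch of a loss-threshold membership inference attack in the style of Yeom et al., run against an overfit associational model on synthetic data. Everything here (data, model, threshold rule) is an illustrative assumption, not the paper's experimental setup; the point is only how attack accuracy is measured, where accuracy near 0.5 means the model leaks little membership information, which is the behavior the abstract reports for causal models.

```python
# Minimal sketch of a loss-threshold membership inference attack
# (Yeom et al. style). Synthetic data and hypothetical model choices;
# this illustrates the attack the abstract measures, not the paper's setup.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def make_data(n, d=20):
    # Synthetic binary classification task; members and non-members
    # are drawn from the same distribution.
    X = rng.normal(size=(n, d))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

X_member, y_member = make_data(500)        # training set ("members")
X_nonmember, y_nonmember = make_data(500)  # held-out set ("non-members")

# An associational model trained on the member set; overfitting
# (high capacity, small data) is what the attack exploits.
model = MLPClassifier(hidden_layer_sizes=(128, 128), max_iter=500, random_state=0)
model.fit(X_member, y_member)

def per_example_loss(X, y):
    # Cross-entropy loss of each example under the trained model.
    probs = model.predict_proba(X)
    return -np.log(np.clip(probs[np.arange(len(y)), y], 1e-12, None))

loss_member = per_example_loss(X_member, y_member)
loss_nonmember = per_example_loss(X_nonmember, y_nonmember)

# Attack rule: guess "member" when the example's loss falls below the
# mean training loss. Accuracy near 0.5 indicates little membership leakage.
threshold = loss_member.mean()
guesses = np.concatenate([loss_member, loss_nonmember]) < threshold
truth = np.concatenate([np.ones(len(loss_member)), np.zeros(len(loss_nonmember))]).astype(bool)
print(f"membership inference attack accuracy: {(guesses == truth).mean():.3f}")
```

The gap between member and non-member losses is what pushes attack accuracy above chance; a model that generalizes perfectly, as causal models are argued to do under distribution shift, shrinks that gap toward zero.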


Related research

03/22/2023
Do Backdoors Assist Membership Inference Attacks?
When an adversary provides poison samples to a machine learning model, p...

09/18/2022
Membership Inference Attacks and Generalization: A Causal Perspective
Membership inference (MI) attacks highlight a privacy weakness in presen...

10/07/2021
The Connection between Out-of-Distribution Generalization and Privacy of ML Models
With the goal of generalizing to out-of-distribution (OOD) data, recent ...

06/27/2019
Stolen Memories: Leveraging Model Memorization for Calibrated White-Box Membership Inference
Membership inference (MI) attacks exploit a learned model's lack of gene...

09/15/2022
CLIPping Privacy: Identity Inference Attacks on Multi-Modal Machine Learning Models
As deep learning is now used in many real-world applications, research h...

01/24/2023
Membership Inference of Diffusion Models
Recent years have witnessed the tremendous success of diffusion models i...

02/02/2022
Parameters or Privacy: A Provable Tradeoff Between Overparameterization and Membership Inference
A surprising phenomenon in modern machine learning is the ability of a h...
