Inferring Sensitive Attributes from Model Explanations

by Vasisht Duddu, et al.

Model explanations provide transparency into a trained machine learning model's black-box behavior. They indicate the influence of different input attributes on the corresponding model prediction. This dependency of explanations on the input raises privacy concerns for sensitive user data. However, the current literature offers limited discussion of the privacy risks of model explanations. We focus on the specific privacy risk of attribute inference, wherein an adversary infers sensitive attributes of an input (e.g., race and sex) from its model explanations. We design the first attribute inference attack against model explanations under two threat models, where the model builder either (a) includes the sensitive attributes in the training data and input or (b) censors the sensitive attributes by excluding them from the training data and input. We evaluate our proposed attack on four benchmark datasets and four state-of-the-art explanation algorithms. We show that an adversary can accurately infer the value of sensitive attributes from explanations under both threat models. Moreover, the attack succeeds even when exploiting only the explanations corresponding to the sensitive attributes. These results suggest that our attack is effective against explanations and poses a practical threat to data privacy. Combining model predictions (an attack surface exploited by prior attacks) with explanations does not improve attack success. Additionally, exploiting model explanations yields higher attack success than exploiting model predictions alone. Together, these findings suggest that model explanations are a strong attack surface for an adversary.



