Exploiting Explanations for Model Inversion Attacks

by Xuejun Zhao, et al.

The successful deployment of artificial intelligence (AI) in many domains, from healthcare to hiring, requires its responsible use, particularly with respect to model explanations and privacy. Explainable artificial intelligence (XAI) provides more information to help users understand model decisions, yet this additional knowledge exposes additional risks for privacy attacks; providing explanations can therefore harm privacy. We study this risk for image-based model inversion attacks and identify several attack architectures with increasing performance for reconstructing private image data from model explanations. We develop several multi-modal transposed CNN architectures that achieve significantly higher inversion performance than using the target model prediction alone. These XAI-aware inversion models are designed to exploit the spatial knowledge in image explanations. To understand which explanations carry higher privacy risk, we analyze how various explanation types and factors influence inversion performance. Although some models do not provide explanations, we further demonstrate increased inversion performance even for such non-explainable target models by exploiting explanations of surrogate models through attention transfer: this method first inverts an explanation from the target prediction, then reconstructs the target image. These threats highlight the urgent and significant privacy risks of explanations and call attention to the need for new privacy-preservation techniques that balance the dual requirements of AI explainability and privacy.
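The abstract describes multi-modal transposed CNN inversion models that fuse the target model's prediction vector with a spatial explanation map. The paper does not publish its architecture here, so the following is only a minimal illustrative sketch of that idea in PyTorch: the class name, layer sizes, and the assumption of a 32x32 single-channel image with a same-sized saliency-style explanation are all hypothetical choices, not the authors' actual design.

```python
import torch
import torch.nn as nn

class XAIAwareInversion(nn.Module):
    """Hypothetical multi-modal inversion sketch: fuse the target model's
    prediction vector with a spatial explanation map, then upsample with
    transposed convolutions to reconstruct the private input image."""

    def __init__(self, num_classes=10):
        super().__init__()
        # Branch 1: project the prediction vector onto a coarse 4x4 feature grid.
        self.pred_fc = nn.Linear(num_classes, 64 * 4 * 4)
        # Branch 2: encode the explanation map (e.g. a saliency map) spatially,
        # preserving the positional information the paper says is exploitable.
        self.expl_enc = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1),   # 32x32 -> 16x16
            nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1),  # 16x16 -> 8x8
            nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1),  # 8x8 -> 4x4
            nn.ReLU(),
        )
        # Decoder: transposed CNN over the channel-concatenated features.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 4 -> 8
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),   # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),    # 16 -> 32
            nn.Sigmoid(),  # pixel intensities in [0, 1]
        )

    def forward(self, pred, expl):
        p = self.pred_fc(pred).view(-1, 64, 4, 4)
        e = self.expl_enc(expl)
        return self.decoder(torch.cat([p, e], dim=1))
```

Such a model would be trained on an auxiliary dataset to minimize reconstruction loss between its output and the original inputs; a prediction-only baseline corresponds to dropping the explanation branch, which is what the paper reports as significantly weaker.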




