Monitoring-based Differential Privacy Mechanism Against Query-Flooding Parameter Duplication Attack

by Haonan Yan, et al.

Public intelligent services enabled by machine learning algorithms are vulnerable to model extraction attacks, which can steal confidential information about the learning models through public queries. Although protection options such as differential privacy (DP) and monitoring are considered promising mitigations, we find that the vulnerability persists. In this paper, we propose an adaptive query-flooding parameter duplication (QPD) attack, through which an adversary with only black-box access and no prior knowledge of model parameters or training data can infer the model's information. We also develop a DP-based defense strategy against this new attack, called monitoring-based DP (MDP). In MDP, we first propose Monitor, a novel real-time model extraction status assessment scheme that evaluates the model's exposure. We then design APBA, a method that adaptively guides the allocation of the differential privacy budget. With MDP, any DP-based defense can dynamically adjust the amount of noise added to model responses according to Monitor's assessment, effectively defending against the QPD attack. Finally, we thoroughly evaluate and compare the performance of the QPD attack and the MDP defense on real-world models protected by DP and monitoring.
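The abstract describes a Monitor-then-allocate loop: assess extraction risk in real time, then scale the per-query privacy budget accordingly. The sketch below is a minimal illustration of that idea only, not the paper's actual scheme: the risk heuristic (near-duplicate queries in a recent window, a crude proxy for QPD-style probing), the linear budget schedule, and all function names (`monitor_extraction_risk`, `adaptive_budget`, `answer_query`) are assumptions for illustration.

```python
import numpy as np

def monitor_extraction_risk(query_history, window=100):
    """Hypothetical Monitor: estimate extraction risk in [0, 1].

    Crude proxy (not the paper's scheme): the fraction of consecutive
    queries in the recent window that are nearly identical, since
    flooding a model with duplicated queries is the QPD signature.
    """
    recent = np.asarray(query_history[-window:], dtype=float)
    if len(recent) < 2:
        return 0.0
    diffs = np.linalg.norm(np.diff(recent, axis=0), axis=1)
    return float(np.mean(diffs < 1e-3))

def adaptive_budget(risk, eps_max=1.0, eps_min=0.05):
    """Hypothetical APBA stand-in: shrink the per-query budget
    (i.e., add more noise) as the assessed risk grows."""
    return eps_max - (eps_max - eps_min) * risk

def answer_query(true_value, risk, sensitivity=1.0):
    """Respond with Laplace noise scaled by the adaptive budget."""
    eps = adaptive_budget(risk)
    return true_value + np.random.laplace(0.0, sensitivity / eps)
```

Under this toy schedule, a benign query stream (risk near 0) is answered with budget close to `eps_max` and little noise, while a flood of duplicated queries drives the budget toward `eps_min` and drowns the response in noise.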

