Fine-grained Poisoning Attacks to Local Differential Privacy Protocols for Mean and Variance Estimation

05/24/2022
by Xiaoguang Li et al.

Local differential privacy (LDP) protects individual data contributors against privacy-probing data aggregation and analytics. Recent work has shown that LDP for some specific data types is vulnerable to data poisoning attacks, which enable the attacker to alter analytical results by injecting carefully crafted bogus data. In this work, we apply data poisoning attacks to previously unexplored statistical tasks, namely mean and variance estimation. In contrast to prior work that aims for overall LDP performance degradation or straightforward maximization of attack gain, our attacker can fine-tune the LDP-estimated mean and variance to desired target values and manipulate both simultaneously. To accomplish this goal, we propose two types of data poisoning attacks: the input poisoning attack (IPA) and the output poisoning attack (OPA). The former is independent of LDP, while the latter exploits the characteristics of LDP and is therefore more effective. More intriguingly, we observe a security-privacy consistency, where a small ϵ enhances the security of LDP, contrary to the previously reported security-privacy trade-off. We further study this consistency and reveal a more holistic view of the threat landscape of LDP in the presence of data poisoning attacks. We comprehensively evaluate the attacks on three real-world datasets and report their effectiveness in achieving the target values. We also explore defense mechanisms and provide insights into secure LDP design.
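To make the IPA/OPA distinction concrete, the following is a minimal sketch, not the paper's actual mechanism: it assumes a simple Laplace-based LDP mean estimator over values in [0, 1], with illustrative names (`ldp_perturb`, `estimate_mean`) invented here. An input-poisoning attacker must push crafted values through the LDP perturbation (and stay inside the valid input domain), while an output-poisoning attacker submits fabricated perturbed reports directly, which lets it hit a target mean exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

def ldp_perturb(x, eps):
    """Laplace mechanism for inputs in [0, 1] (sensitivity 1)."""
    return x + rng.laplace(scale=1.0 / eps, size=np.shape(x))

def estimate_mean(reports):
    """Server-side aggregation: average of noisy reports."""
    return float(np.mean(reports))

eps = 1.0
genuine = rng.uniform(0, 1, size=9_000)        # honest users' true values
honest_reports = ldp_perturb(genuine, eps)

n_fake = 1_000
n = len(genuine) + n_fake
target = 0.9                                   # attacker's desired mean

# IPA: fake *inputs* go through the LDP protocol, so they are both
# clipped to the legal domain [0, 1] and re-noised -- limiting how far
# the estimate can be pushed.
ipa_inputs = np.clip((target * n - genuine.sum()) / n_fake, 0.0, 1.0)
ipa_reports = ldp_perturb(np.full(n_fake, ipa_inputs), eps)
ipa_est = estimate_mean(np.concatenate([honest_reports, ipa_reports]))

# OPA: fake *outputs* skip perturbation entirely, so the attacker can
# solve for reports that drive the estimate exactly to the target.
opa_outputs = np.full(n_fake, (target * n - honest_reports.sum()) / n_fake)
opa_est = estimate_mean(np.concatenate([honest_reports, opa_outputs]))
```

In this toy setup the OPA estimate lands on the target exactly, while the IPA estimate falls short because crafted inputs are clamped to [0, 1] before perturbation, mirroring the abstract's claim that OPA is more effective because it exploits the structure of the LDP protocol itself.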
