Self-recoverable Adversarial Examples: A New Effective Protection Mechanism in Social Networks

by Jiawei Zhang, et al.
Nanjing University of Information Science and Technology
NetEase, Inc

Malicious intelligent algorithms pose a serious threat to the privacy of social users by detecting and analyzing the photos they upload to social network platforms. The damage that adversarial attacks inflict on DNNs suggests that adversarial examples could serve as a new protection mechanism for privacy in social networks. However, existing adversarial examples are not recoverable, which limits their use as an effective protection mechanism. To address this issue, we propose a recoverable generative adversarial network that generates self-recoverable adversarial examples. By modeling adversarial attack and recovery as a single joint task, our method minimizes the error of the recovered examples while maximizing attack ability, yielding adversarial examples with better recoverability. To further boost recoverability, we exploit a dimension reducer to optimize the distribution of the adversarial perturbation. Experimental results show that adversarial examples generated by the proposed method exhibit superior recoverability, attack ability, and robustness across different datasets and network architectures, which ensures their effectiveness as a protection mechanism in social networks.
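The joint objective sketched in the abstract, maximizing attack ability while minimizing the error of the recovered example, can be illustrated as a single weighted loss. The following is a minimal, hedged sketch, not the paper's actual implementation: the function names (`joint_loss`, `softmax`) and the weighting parameter `lam` are illustrative assumptions.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def joint_loss(logits_adv, true_label, original, recovered, lam=1.0):
    """Illustrative joint objective (an assumption, not the paper's code).

    Attack term: log p(true class | adversarial example). Minimizing it
    drives the classifier's confidence in the correct label toward zero,
    i.e. maximizes attack ability.

    Recovery term: mean squared error between the original image and the
    image recovered from the adversarial example; minimizing it enforces
    recoverability. `lam` balances the two goals.
    """
    p_true = softmax(logits_adv)[true_label]
    attack_term = math.log(p_true)
    mse = sum((o - r) ** 2 for o, r in zip(original, recovered)) / len(original)
    return attack_term + lam * mse
```

Under this formulation, a generator trained to minimize `joint_loss` is pushed simultaneously toward perturbations that fool the classifier and toward perturbations whose effect can be undone by the recovery network.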


