Minimum Noticeable Difference based Adversarial Privacy Preserving Image Generation

06/17/2022
by Wen Sun, et al.

Deep learning models are known to be vulnerable to adversarial examples: small perturbations of the input can cause wrong predictions. Most existing work on adversarial image generation aims to attack as many models as possible, while little effort is made to guarantee the perceptual quality of the adversarial examples. High-quality adversarial examples matter for many applications, especially for privacy preservation. In this work, we develop a framework based on the Minimum Noticeable Difference (MND) concept to generate adversarial privacy-preserving images that have minimal perceptual difference from the clean ones yet are still able to attack deep learning models. To achieve this, an adversarial loss is first proposed so that the adversarial images successfully attack the deep learning models. Then, a perceptual quality-preserving loss is developed that accounts for the magnitude of the perturbation and the structural and gradient changes it causes, with the aim of preserving high perceptual quality during adversarial image generation. To the best of our knowledge, this is the first work to explore quality-preserving adversarial image generation based on the MND concept for privacy preservation. To evaluate its performance in terms of perceptual quality, deep models for image classification and face recognition are tested with the proposed method and several anchor methods. Extensive experimental results demonstrate that the proposed MND framework generates adversarial images with remarkably better quality metrics (e.g., PSNR, SSIM, and MOS) than those generated with the anchor methods.
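As a rough illustration of the idea, the PyTorch sketch below combines an adversarial term, which pushes the model's prediction away from the true label, with a quality term that penalizes the perturbation's magnitude and the structural and gradient changes it causes. This is a minimal sketch under assumed design choices, not the paper's actual formulation: the function names, loss weights, and the local-mean proxy used as a stand-in for an SSIM-style structural term are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def image_gradients(x):
    """Finite-difference gradients along height and width (NCHW input)."""
    dh = x[:, :, 1:, :] - x[:, :, :-1, :]
    dw = x[:, :, :, 1:] - x[:, :, :, :-1]
    return dh, dw

def quality_loss(adv, clean, w_mag=1.0, w_struct=1.0, w_grad=1.0):
    """Penalize perturbation magnitude plus structural and gradient changes.

    The structural term here compares local means, a crude stand-in for an
    SSIM-style measure; the paper's actual term may differ.
    """
    mag = F.mse_loss(adv, clean)                        # perturbation magnitude
    mu_adv = F.avg_pool2d(adv, 3, stride=1, padding=1)  # local means
    mu_clean = F.avg_pool2d(clean, 3, stride=1, padding=1)
    struct = F.mse_loss(mu_adv, mu_clean)               # structural change proxy
    (dh_a, dw_a), (dh_c, dw_c) = image_gradients(adv), image_gradients(clean)
    grad = F.mse_loss(dh_a, dh_c) + F.mse_loss(dw_a, dw_c)  # gradient change
    return w_mag * mag + w_struct * struct + w_grad * grad

def mnd_attack(model, clean, label, steps=100, lr=1e-2, lam=10.0):
    """Optimize a perturbation that fools `model` while staying close to `clean`.

    `lam` trades off attack strength against perceptual quality; both values
    are illustrative assumptions.
    """
    delta = torch.zeros_like(clean, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = (clean + delta).clamp(0.0, 1.0)
        # Negated cross-entropy rewards misclassification (untargeted attack).
        adv_loss = -F.cross_entropy(model(adv), label)
        loss = adv_loss + lam * quality_loss(adv, clean)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (clean + delta).clamp(0.0, 1.0).detach()
```

In a scheme like this, the trade-off weight `lam` controls how far the optimizer may move from the clean image before perceptual quality degrades noticeably, which is the balance the MND concept is meant to capture.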
