Making Images Undiscoverable from Co-Saliency Detection

09/19/2020
by Ruijun Gao, et al.

In recent years, co-salient object detection (CoSOD) has achieved significant progress and played a key role in retrieval-related tasks, e.g., image retrieval and video foreground detection. Nevertheless, it also inevitably poses a new safety and security problem, i.e., how to prevent high-profile and personal-sensitive content from being extracted by powerful CoSOD methods. In this paper, we address this problem from the perspective of adversarial attack and identify a novel task, i.e., adversarial co-saliency attack: given an image selected from an image group containing some common and salient objects, generate an adversarial version that can mislead CoSOD methods into predicting incorrect co-salient regions. Note that, compared with general adversarial attacks on classification, this new task introduces two extra challenges for existing white-box adversarial noise attacks: (1) low success rate due to the diverse appearance of images in the image group; (2) low transferability across CoSOD methods due to the considerable differences between CoSOD pipelines. To address these challenges, we propose the first black-box joint adversarial exposure and noise attack (Jadena), which jointly and locally tunes the exposure and additive perturbations of the image according to a newly designed high-feature-level contrast-sensitive loss function. Our method, without any information about the state-of-the-art CoSOD methods, leads to significant performance degradation on various co-saliency detection datasets and makes the co-salient objects undetectable, which is of strong practical value nowadays, when large-scale personal photos are shared on the internet and should be properly and securely protected.
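To make the joint tuning concrete, below is a minimal PyTorch sketch of the core idea: an exposure map and an additive perturbation are optimized together against a feature-level objective. Everything here is illustrative, not the paper's implementation; jadena_attack and feature_extractor are hypothetical names, and the variance-based loss is only a stand-in for the paper's high-feature-level contrast-sensitive loss.

    import torch

    def jadena_attack(image, feature_extractor, steps=100, lr=0.01,
                      noise_budget=8 / 255):
        """Sketch of a joint exposure + additive-noise attack.

        `image` is a float tensor in [0, 1] of shape (N, 3, H, W);
        `feature_extractor` is any callable mapping an image batch to
        high-level feature maps (e.g., a pretrained VGG backbone).
        """
        # Learnable parameters: a per-pixel exposure map, applied
        # multiplicatively in log space, and an additive perturbation.
        log_exposure = torch.zeros_like(image, requires_grad=True)
        noise = torch.zeros_like(image, requires_grad=True)
        optimizer = torch.optim.Adam([log_exposure, noise], lr=lr)

        for _ in range(steps):
            adv = (image * torch.exp(log_exposure) + noise).clamp(0, 1)
            feats = feature_extractor(adv)
            # Proxy contrast-sensitive loss: minimizing the spatial
            # variance of the features suppresses the feature-level
            # contrast that makes co-salient regions stand out.
            loss = feats.var(dim=(2, 3)).mean()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            with torch.no_grad():
                # Keep the additive part within a small L-infinity budget.
                noise.clamp_(-noise_budget, noise_budget)

        return (image * torch.exp(log_exposure) + noise).clamp(0, 1).detach()

Modeling exposure as a multiplicative term alongside a bounded additive perturbation mirrors the abstract's description of jointly and locally tuning both; a faithful implementation would replace the per-image variance proxy with the paper's contrast-sensitive loss computed over the image group, and would operate black-box rather than through gradients of the target CoSOD model.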

