Using LIP to Gloss Over Faces in Single-Stage Face Detection Networks

12/22/2017
by Siqi Yang, et al.

This work shows that it is possible to fool recent state-of-the-art face detectors built on single-stage networks. A successful attack on a face detector would be a serious security vulnerability for any smart surveillance system that relies on one. We show that existing adversarial perturbation methods are not effective at performing such an attack, especially when the input image contains multiple faces: the adversarial perturbation generated for one face can disrupt the adversarial perturbation generated for another. In this paper, we call this the Instance Perturbation Interference (IPI) problem. We address the IPI problem by studying the relationship between a deep neural network's receptive field and the adversarial perturbation, and propose the Localized Instance Perturbation (LIP), which constrains each face's adversarial perturbation to the Effective Receptive Field (ERF) of its target. Experimental results show that LIP substantially outperforms existing adversarial perturbation generation methods, often by a factor of 2 to 10.
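To make the core idea concrete, below is a minimal PyTorch sketch of a LIP-style attack. This is not the authors' implementation: the `detector` interface (returning one differentiable confidence score per target face), the box-plus-margin masks used as a crude stand-in for the true Effective Receptive Field, and the step size and iteration count are all illustrative assumptions.

```python
import torch

def erf_masks(boxes, height, width, margin=16):
    """One binary mask per face: the face box plus a margin, used here
    as a rough proxy for the detector's Effective Receptive Field."""
    masks = []
    for (x1, y1, x2, y2) in boxes:
        m = torch.zeros(1, 1, height, width)
        m[..., max(0, y1 - margin):min(height, y2 + margin),
               max(0, x1 - margin):min(width, x2 + margin)] = 1.0
        masks.append(m)
    return masks

def lip_attack(detector, image, boxes, steps=10, alpha=1.0 / 255):
    """Iteratively suppress each face's detection score while confining
    each face's perturbation to its own localized mask, so that the
    per-face perturbations cannot interfere (the IPI problem)."""
    _, _, h, w = image.shape  # image: (1, 3, H, W) tensor in [0, 1]
    masks = erf_masks(boxes, h, w)
    adv = image.clone()
    for _ in range(steps):
        adv = adv.detach().requires_grad_(True)
        scores = detector(adv)  # assumed: one confidence per face, aligned with `boxes`
        update = torch.zeros_like(adv)
        for i, m in enumerate(masks):
            # Gradient of this face's score alone, masked to this face's region.
            g, = torch.autograd.grad(scores[i], adv, retain_graph=True)
            update += m * g.sign()
        with torch.no_grad():
            # Descend on the scores; clamping keeps overlapping masks from compounding.
            adv = (adv - alpha * update.clamp(-1, 1)).clamp(0, 1)
    return adv.detach()
```

The mask is the key design point: an unconstrained method such as iterative FGSM lets the gradient update for one face spill into the regions that another face's detector responses depend on, which is precisely the interference that localizing each perturbation is meant to avoid.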

research, 12/26/2020
Sparse Adversarial Attack to Object Detection
Adversarial examples have gained tons of attention in recent years. Many...

research, 05/18/2023
A Comparative Study of Face Detection Algorithms for Masked Face Detection
Contemporary face detection algorithms have to deal with many challenges...

research, 11/27/2017
Butterfly Effect: Bidirectional Control of Classification Performance by Small Additive Perturbation
This paper proposes a new algorithm for controlling classification resul...

research, 05/21/2021
EMface: Detecting Hard Faces by Exploring Receptive Field Pyramids
Scale variation is one of the most challenging problems in face detectio...

research, 11/30/2019
Design and Interpretation of Universal Adversarial Patches in Face Detection
We consider universal adversarial patches for faces - small visual eleme...

research, 06/17/2020
Disrupting Deepfakes with an Adversarial Attack that Survives Training
The rapid progress in generative models and autoencoders has given rise ...

research, 07/27/2021
Resisting Out-of-Distribution Data Problem in Perturbation of XAI
With the rapid development of eXplainable Artificial Intelligence (XAI),...
