ReFace: Real-time Adversarial Attacks on Face Recognition Systems

by Shehzeen Hussain, et al.

Deep neural network based face recognition models have been shown to be vulnerable to adversarial examples. However, many past attacks require the adversary to solve an input-dependent optimization problem using gradient descent, which makes the attack impractical to run in real time. These adversarial examples are also tightly coupled to the attacked model and do not transfer well to different models. In this work, we propose ReFace, a real-time, highly transferable attack on face recognition models based on Adversarial Transformation Networks (ATNs). ATNs model adversarial example generation as a feed-forward neural network. We find that the white-box attack success rate of a pure U-Net ATN falls substantially short of gradient-based attacks like PGD on large face recognition datasets. We therefore propose a new architecture for ATNs that closes this gap while maintaining a 10,000x speedup over PGD. Furthermore, we find that at a given perturbation magnitude, our ATN adversarial perturbations transfer to new face recognition models more effectively than PGD. ReFace attacks can successfully deceive commercial face recognition services in a transfer attack setting, substantially reducing face identification accuracy for the AWS SearchFaces API (82% on benign inputs) and face verification accuracy for Azure (91% on benign inputs).
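The core idea behind the ATN approach is that, unlike PGD, which iterates gradient steps per input, a trained generator produces an L-infinity-bounded perturbation in a single forward pass. Below is a minimal NumPy sketch of that inference-time contract; the one-layer linear "generator" and random "embedding model" are toy stand-ins (not ReFace's U-Net-based architecture or a real face recognition network), used only to show how a tanh output scaled by the budget keeps the perturbation within the epsilon ball.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a face "embedding" model: a fixed random linear map.
W_model = rng.standard_normal((8, 16))

def embed(x):
    return W_model @ x

# Toy stand-in for the ATN generator: one linear layer. In ReFace this
# would be a trained U-Net-style network; here we only illustrate the
# single-forward-pass, bounded-perturbation interface.
W_atn = rng.standard_normal((16, 16)) * 0.1
EPS = 8 / 255  # L-infinity perturbation budget

def atn_attack(x):
    # tanh maps the raw generator output into (-1, 1), so scaling by EPS
    # guarantees |delta| <= EPS elementwise before any pixel clipping.
    delta = EPS * np.tanh(W_atn @ x)
    return np.clip(x + delta, 0.0, 1.0)  # keep valid pixel range

x = rng.random(16)          # toy "image" with pixels in [0, 1]
x_adv = atn_attack(x)       # one forward pass, no gradient iterations
assert np.all(np.abs(x_adv - x) <= EPS + 1e-9)
```

The speedup reported in the abstract comes from exactly this structure: generating an adversarial example costs one network evaluation, whereas PGD costs one forward and one backward pass per iteration.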


