Sibling-Attack: Rethinking Transferable Adversarial Attacks against Face Recognition

by Zexin Li, et al.

A hard challenge in developing practical face recognition (FR) attacks is the black-box nature of the target FR model, i.e., attackers cannot access its gradients or parameters. While recent research has taken an important step toward attacking black-box FR models by leveraging transferability, performance remains limited, especially against online commercial FR systems, where results can be pessimistic (e.g., a success rate of less than 50% on average). Motivated by this, we present Sibling-Attack, a new FR attack technique that, for the first time, explores a multi-task perspective, i.e., leveraging extra information from correlated tasks to boost attack transferability. Intuitively, Sibling-Attack considers a set of tasks correlated with FR and, based on theoretical and quantitative analysis, selects Attribute Recognition (AR) as the sibling task. Sibling-Attack then develops an optimization framework that fuses adversarial gradient information through (1) constraining the cross-task features to lie in the same space, (2) a joint-task meta-optimization framework that enhances gradient compatibility among tasks, and (3) a cross-task gradient stabilization method that mitigates the oscillation effect during attacking. Extensive experiments demonstrate that Sibling-Attack outperforms state-of-the-art FR attack techniques by a non-trivial margin, boosting ASR by 12.61% on average against state-of-the-art pre-trained FR models and two well-known, widely used commercial FR systems.
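To make the three ingredients concrete, here is a minimal sketch of cross-task gradient fusion with momentum-style stabilization. This is an illustrative reconstruction, not the paper's exact algorithm: the gradient callables `grad_fr` and `grad_ar`, the per-task normalization, and all hyperparameter values are assumptions introduced for this example.

```python
import numpy as np

def fused_gradient_attack(x, grad_fr, grad_ar, steps=10, eps=0.03,
                          alpha=0.005, mu=0.9):
    """Illustrative sketch: fuse adversarial gradients from an FR loss and an
    AR loss, with a momentum accumulator damping cross-task oscillation.

    grad_fr / grad_ar are assumed callables returning the gradient of each
    task's loss with respect to the input image `x` (a numpy array).
    """
    x_adv = x.copy()
    g = np.zeros_like(x)              # momentum accumulator (stabilization)
    for _ in range(steps):
        g_fr = grad_fr(x_adv)         # FR-task adversarial gradient
        g_ar = grad_ar(x_adv)         # AR-task adversarial gradient
        # L1-normalize each task gradient so neither task dominates the fusion
        fused = (g_fr / (np.abs(g_fr).sum() + 1e-12)
                 + g_ar / (np.abs(g_ar).sum() + 1e-12))
        g = mu * g + fused            # momentum-style gradient stabilization
        x_adv = x_adv + alpha * np.sign(g)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay in the L_inf ball
    return x_adv
```

In practice the two gradients would come from backpropagating through surrogate FR and AR networks that share a feature space; the sketch only shows how their signals are fused and stabilized.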



