PredCoin: Defense against Query-based Hard-label Attack

02/04/2021
by Junfeng Guo, et al.

Many adversarial attacks and defenses have recently been proposed for Deep Neural Networks (DNNs). While most of them assume the white-box setting, which is impractical, a new class of query-based hard-label (QBHL) black-box attacks poses a significant threat to real-world applications (e.g., Google Cloud, Tencent API). To date, no generalizable and practical approach has been proposed to defend against such attacks. This paper proposes and evaluates PredCoin, a practical and generalizable method for providing robustness against QBHL attacks. PredCoin poisons the gradient estimation step, an essential component of most QBHL attacks: it identifies gradient estimation queries crafted by an attacker and introduces uncertainty into the output. Extensive experiments show that PredCoin successfully defends against four state-of-the-art QBHL attacks across various settings and tasks while preserving the target model's overall accuracy. PredCoin is also shown to be robust and effective against several defense-aware attacks, which may have full knowledge of PredCoin's internal mechanisms.
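The abstract describes two ingredients: detecting gradient estimation queries and randomizing the hard-label output for them. As a minimal illustrative sketch (not the paper's actual implementation; the class name, thresholds, and similarity-based detector below are assumptions), one can exploit the fact that finite-difference gradient estimation sends many small perturbations of the same base input, which makes such queries highly similar to recent ones:

```python
import numpy as np

class PredCoinSketch:
    """Hypothetical defense wrapper: flags bursts of near-duplicate queries,
    characteristic of the gradient-estimation step in QBHL attacks, and
    randomizes the hard label for flagged queries."""

    def __init__(self, model, window=50, sim_threshold=0.95, flip_prob=0.5, seed=0):
        self.model = model              # callable: x (1-D array) -> hard label (int)
        self.window = window            # number of recent queries remembered
        self.sim_threshold = sim_threshold
        self.flip_prob = flip_prob      # chance of perturbing a flagged query's label
        self.history = []
        self.rng = np.random.default_rng(seed)

    def _is_gradient_query(self, x):
        # High cosine similarity to a recent query suggests a finite-difference probe.
        for h in self.history:
            denom = np.linalg.norm(x) * np.linalg.norm(h)
            if denom > 0 and float(x @ h) / denom > self.sim_threshold:
                return True
        return False

    def predict(self, x, num_labels=10):
        label = self.model(x)
        suspicious = self._is_gradient_query(x)
        self.history.append(x)
        if len(self.history) > self.window:
            self.history.pop(0)
        if suspicious and self.rng.random() < self.flip_prob:
            # Introduce uncertainty: return a random incorrect label,
            # poisoning the attacker's gradient estimate.
            choices = [c for c in range(num_labels) if c != label]
            return int(self.rng.choice(choices))
        return label
```

A real defense must keep the flip probability and detector sensitivity low enough to preserve accuracy on benign queries, which is the trade-off the paper's experiments evaluate.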


Related research

06/23/2020
RayS: A Ray Searching Method for Hard-label Adversarial Attack
Deep neural networks are vulnerable to adversarial attacks. Among differ...

06/24/2020
Blacklight: Defending Black-Box Adversarial Attacks on Deep Neural Networks
The vulnerability of deep neural networks (DNNs) to adversarial examples...

05/23/2019
Thwarting finite difference adversarial attacks with output randomization
Adversarial examples pose a threat to deep neural network models in a va...

08/12/2022
Unifying Gradients to Improve Real-world Robustness for Deep Networks
The wide application of deep neural networks (DNNs) demands an increasin...

03/26/2020
On the adversarial robustness of DNNs based on error correcting output codes
Adversarial examples represent a great security threat for deep learning...

10/01/2022
DeltaBound Attack: Efficient decision-based attack in low queries regime
Deep neural networks and other machine learning systems, despite being e...

10/26/2021
Disrupting Deep Uncertainty Estimation Without Harming Accuracy
Deep neural networks (DNNs) have proven to be powerful predictors and ar...
