Boosting Adversarial Attacks by Leveraging Decision Boundary Information

by Boheng Zeng, et al.

Due to the gap between a substitute model and a victim model, gradient-based noise generated from a substitute model may transfer poorly to the victim model, since their gradients differ. Inspired by the observation that the decision boundaries of different models do not differ much, we conduct experiments and find that the gradients of different models are more similar on the decision boundary than at the original input position. Moreover, since the decision boundary in the vicinity of an input image is flat along most directions, we conjecture that boundary gradients can help find an effective direction for crossing the decision boundary of the victim model. Based on this, we propose a Boundary Fitting Attack to improve transferability. Specifically, we introduce a method to obtain a set of boundary points and leverage the gradient information of these points to update the adversarial examples. Notably, our method can be combined with existing gradient-based methods. Extensive experiments demonstrate the effectiveness of our method, improving the success rate by 5.6% and 14.9% compared with state-of-the-art transfer-based attacks. Further, we compare transformers with CNNs; the results indicate that transformers are more robust than CNNs. Nevertheless, our method still outperforms existing methods when attacking transformers. Specifically, when using CNNs as substitute models, our method obtains an average attack success rate of 58.2% against transformers.
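The core idea above — locate points on the substitute model's decision boundary and use their gradients to steer the perturbation — can be illustrated with a toy sketch. This is not the paper's actual algorithm: it uses a hypothetical linear substitute model in NumPy, finds boundary points by binary search along random directions, averages their gradients, and takes a single sign-step (FGSM-style) away from the current class. All names and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy substitute model: a linear binary classifier standing in for a network.
W = rng.normal(size=(2,))
b = 0.1

def logits(x):
    return x @ W + b

def grad(x):
    # For a linear model the input-gradient of the logit is just W;
    # a real network would return different gradients at different points.
    return W

def find_boundary_point(x, direction, steps=30):
    """Binary-search along `direction` for a sign change of the logit,
    i.e. a point on the substitute's decision boundary."""
    hi = 1.0
    # Expand hi until the sign flips; give up if this direction never crosses.
    while np.sign(logits(x + hi * direction)) == np.sign(logits(x)):
        hi *= 2.0
        if hi > 1e6:
            return None
    lo = 0.0
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if np.sign(logits(x + mid * direction)) == np.sign(logits(x)):
            lo = mid
        else:
            hi = mid
    return x + hi * direction

# Collect boundary points around an input and average their gradients.
x0 = np.array([1.0, -0.5])
dirs = [rng.normal(size=2) for _ in range(4)]
boundary_pts = [p for p in (find_boundary_point(x0, d) for d in dirs)
                if p is not None]
grads = [grad(p) for p in boundary_pts] if boundary_pts else [grad(x0)]
avg_grad = np.mean(grads, axis=0)

# Untargeted sign-step against the current class using the boundary gradient.
eps = 2.0
x_adv = x0 - eps * np.sign(logits(x0)) * np.sign(avg_grad)
print(np.sign(logits(x_adv)) != np.sign(logits(x0)))  # prints True
```

The binary search mirrors how boundary points can be located with only forward evaluations of the substitute; in the paper's setting the averaged gradient at such points is the quantity hypothesized to transfer better than the gradient at the original input.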


