Towards Robust DNNs: A Taylor Expansion-Based Method for Generating Powerful Adversarial Examples

01/23/2020
by Ya-guan Qian, et al.

Although deep neural networks (DNNs) have achieved successful applications in many fields, they are vulnerable to adversarial examples. Adversarial training is one of the most effective methods for improving the robustness of DNNs, and it is generally formulated as a minimax problem that minimizes the training loss under the worst-case (maximal) perturbation. Powerful adversarial examples are therefore needed to closely approximate the inner maximization of this minimax problem. In this paper, a novel method is proposed to generate more powerful adversarial examples for robust adversarial training. The main idea is to approximate the output of a DNN in the neighborhood of an input by its Taylor expansion, and then to maximize the resulting objective under a perturbation constraint with the Lagrange multiplier method, yielding the adversarial example. Experimental results show that the method effectively improves the robustness of DNNs trained with these powerful adversarial examples.
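The abstract only sketches the construction, so the following is a hedged reconstruction of the standard formulation it alludes to; the exact objective, norm (L2 here), and expansion order used in the paper are assumptions, since they are not shown on this page.

```latex
% Adversarial training as a minimax problem over model parameters theta:
\min_{\theta}\; \mathbb{E}_{(x,y)\sim\mathcal{D}}
  \Big[\max_{\|\delta\|_2 \le \epsilon} L\big(f_\theta(x+\delta),\, y\big)\Big]

% First-order Taylor expansion of the loss around the input x:
L(x+\delta) \approx L(x) + g^{\top}\delta, \qquad g = \nabla_x L(x)

% Lagrangian for the constrained inner maximization and its stationary point:
\Lambda(\delta,\lambda) = L(x) + g^{\top}\delta
  - \lambda\big(\delta^{\top}\delta - \epsilon^2\big)
\;\;\Rightarrow\;\; \delta^{*} = \epsilon\,\frac{g}{\|g\|_2}
```

Under the same assumptions, a minimal PyTorch sketch of the resulting attack step follows; with a first-order expansion, the Lagrange-multiplier solution reduces to an L2-normalized gradient step (as in the fast gradient method). The `model`, `eps`, and 4D image-batch shape are illustrative, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def taylor_perturbation(model, x, y, eps=0.03):
    """Sketch: maximize a first-order Taylor approximation of the loss
    around x subject to ||delta||_2 <= eps. The Lagrange-multiplier
    solution is the L2-normalized gradient scaled to the constraint
    boundary. Generic illustration, not the paper's exact algorithm."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    # delta* = eps * g / ||g||_2  (stationary point of the Lagrangian);
    # assumes a 4D (N, C, H, W) image batch for the broadcasted norm.
    g_norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
    delta = eps * grad / g_norm
    return (x + delta).detach()
```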

