Universal, transferable and targeted adversarial attacks

08/29/2019
by   Junde Wu, et al.

Deep neural networks have been shown to be vulnerable in many previous works: well-designed inputs, called adversarial examples, can lead a network to make incorrect predictions. The difficulty of an attack varies with the scenario and with the attacker's goals and capabilities. For example, a targeted attack is harder than a non-targeted one, a universal attack is harder than a non-universal one, and a transferable attack is harder than a non-transferable one. This raises the question: does there exist an attack that can survive the harshest setting and meet all of these requirements at once? Although many cheap and effective attacks have been proposed, this question has not been fully answered for large models and large-scale datasets. In this paper, we build a neural network to learn a universal mapping from source inputs to adversarial examples. These examples fool classification networks into assigning all of them to a single targeted class, and they also transfer between different models.
