Concept-based Adversarial Attacks: Tricking Humans and Classifiers Alike

03/18/2022
by Johannes Schneider, et al.

We propose to generate adversarial samples by modifying activations of upper layers that encode semantically meaningful concepts. The original sample is shifted towards a target sample by altering these activations and reconstructing the input from them, which yields the adversarial sample. A human might (and possibly should) notice differences between the original and the adversarial sample. Depending on attacker-provided constraints, the adversarial sample can exhibit subtle differences or appear like a "forged" sample from another class. Our approach and goal stand in stark contrast to common attacks that perturb single pixels in ways not recognizable by humans. Our approach is relevant, e.g., in multi-stage processing of inputs, where both humans and machines are involved in decision-making, because invisible perturbations will not fool a human. Our evaluation focuses on deep neural networks. We also show the transferability of our adversarial examples among networks.
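The core mechanism, blending the upper-layer ("concept") activations of the original sample with those of a target sample and reconstructing an input from the blend, can be sketched as follows. This is a minimal PyTorch sketch under assumptions, not the authors' implementation: the `encoder` and `decoder` callables, the mixing coefficient `alpha`, and the plain linear interpolation of activations are all illustrative.

```python
import torch

def concept_adversarial(encoder, decoder, x_orig, x_target, alpha=0.3):
    """Shift x_orig towards x_target in upper-layer activation space.

    encoder: maps an input to upper-layer ("concept") activations (assumed).
    decoder: reconstructs an input from those activations (assumed).
    alpha:   mixing coefficient; 0 gives a plain reconstruction of x_orig,
             values near 1 produce "forged" samples resembling x_target.
    """
    with torch.no_grad():
        z_orig = encoder(x_orig)      # concept activations of the original
        z_target = encoder(x_target)  # concept activations of the target
        z_adv = (1 - alpha) * z_orig + alpha * z_target  # blend the concepts
        x_adv = decoder(z_adv)        # reconstruct an adversarial input
    return x_adv
```

With a small alpha the result stays close to the original and the differences remain subtle; larger values trade imperceptibility for a stronger shift towards the target class, matching the constraint-dependent behaviour described above.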
