Bluff: Interactively Deciphering Adversarial Attacks on Deep Neural Networks

09/05/2020
by Nilaksh Das, et al.

Deep neural networks (DNNs) are now commonly used in many domains. However, they are vulnerable to adversarial attacks: carefully crafted perturbations on data inputs that can fool a model into making incorrect predictions. Despite significant research on developing DNN attack and defense techniques, people still lack an understanding of how such attacks penetrate a model's internals. We present Bluff, an interactive system for visualizing, characterizing, and deciphering adversarial attacks on vision-based neural networks. Bluff allows people to flexibly visualize and compare the activation pathways for benign and attacked images, revealing mechanisms that adversarial attacks employ to inflict harm on a model. Bluff is open-sourced and runs in modern web browsers.
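
To make the kind of comparison Bluff supports concrete, here is a minimal sketch (assuming PyTorch and torchvision; the model, layer, epsilon, and inputs are illustrative placeholders, not Bluff's actual implementation): craft a one-step FGSM adversarial example, then measure which intermediate channels shift the most between the benign and attacked image.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Illustrative pretrained model; Bluff itself is not tied to this choice.
model = models.resnet18(weights="IMAGENET1K_V1").eval()

def fgsm_attack(image, label, epsilon=0.03):
    """One-step FGSM: nudge the image along the sign of the loss gradient."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# Capture one intermediate layer's activations with a forward hook.
activations = {}
def hook(_module, _input, output):
    activations["feat"] = output.detach()

handle = model.layer3.register_forward_hook(hook)

benign = torch.rand(1, 3, 224, 224)   # placeholder image; use a real input
label = torch.tensor([207])           # placeholder ImageNet class index

attacked = fgsm_attack(benign, label)

model(benign)
benign_act = activations["feat"]
model(attacked)
attacked_act = activations["feat"]
handle.remove()

# Per-channel mean absolute change under attack: channels with the largest
# shifts are candidates for the "pathways" an attack exploits.
delta = (attacked_act - benign_act).abs().mean(dim=(0, 2, 3))
print("most-shifted channels:", delta.topk(5).indices.tolist())
```

Bluff presents this kind of benign-vs-attacked activation difference interactively across an entire network, rather than at a single hand-picked layer as in the sketch above.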

Related research

07/20/2020 · Evaluating a Simple Retraining Strategy as a Defense Against Adversarial Attacks
Though deep neural networks (DNNs) have shown superiority over other tec...

11/04/2018 · QuSecNets: Quantization-based Defense Mechanism for Securing Deep Neural Network against Adversarial Attacks
Deep Neural Networks (DNNs) have recently been shown vulnerable to adver...

01/21/2020 · Massif: Interactive Interpretation of Adversarial Attacks on Deep Learning
Deep neural networks (DNNs) are increasingly powering high-stakes applic...

08/12/2023 · Not So Robust After All: Evaluating the Robustness of Deep Neural Networks to Unseen Adversarial Attacks
Deep neural networks (DNNs) have gained prominence in various applicatio...

08/01/2023 · Training on Foveated Images Improves Robustness to Adversarial Attacks
Deep neural networks (DNNs) have been shown to be vulnerable to adversar...

03/02/2018 · Protecting JPEG Images Against Adversarial Attacks
As deep neural networks (DNNs) have been integrated into critical system...

03/28/2023 · Denoising Autoencoder-based Defensive Distillation as an Adversarial Robustness Algorithm
Adversarial attacks significantly threaten the robustness of deep neural...
