On Intrinsic Dataset Properties for Adversarial Machine Learning

05/19/2020
by   Jeffrey Z. Pan, et al.

Deep neural networks (DNNs) play a key role in a wide range of machine learning applications. However, DNN classifiers are vulnerable to human-imperceptible adversarial perturbations, which can cause them to misclassify inputs with high confidence. Building robust DNNs that can withstand such malicious examples is therefore critical in security-sensitive applications. In this paper, we study the effect of intrinsic dataset properties on the performance of adversarial attack and defense methods, testing on five popular image classification datasets: MNIST, Fashion-MNIST, CIFAR10/CIFAR100, and ImageNet. We find that input size and image contrast play key roles in attack and defense success. Our findings highlight that dataset design and data preprocessing are important for boosting the adversarial robustness of DNNs. To the best of our knowledge, this is the first comprehensive study of the effect of intrinsic dataset properties on adversarial machine learning.
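The abstract refers to human-imperceptible adversarial perturbations that flip a classifier's prediction. A minimal sketch of the classic fast gradient sign method (FGSM) on a toy linear classifier illustrates the idea; the weights, input, label, and epsilon below are illustrative assumptions, not the paper's actual models or attack settings:

```python
import numpy as np

# Toy linear classifier: logits = W @ x + b (stand-in for a DNN).
# FGSM perturbs the input by epsilon * sign(grad_x loss), keeping the
# change within a small L-infinity ball so it is visually subtle.
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 784))        # assumed weights, 10 classes, 28x28 input
b = np.zeros(10)
x = rng.uniform(0.0, 1.0, size=784)   # toy "image" with pixels in [0, 1]
y = 3                                 # assumed true label

def softmax(z):
    z = z - z.max()                   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def cross_entropy_grad_x(x, y):
    # Gradient of cross-entropy loss w.r.t. the input for a linear model:
    # dL/dx = W^T (softmax(W x + b) - onehot(y))
    p = softmax(W @ x + b)
    p[y] -= 1.0
    return W.T @ p

eps = 0.1                             # per-pixel perturbation budget
x_adv = np.clip(x + eps * np.sign(cross_entropy_grad_x(x, y)), 0.0, 1.0)

# Every pixel moves by at most eps, so the perturbation stays small.
assert np.abs(x_adv - x).max() <= eps + 1e-9
```

Dataset properties such as input size (here 784) and pixel contrast change how far an epsilon-bounded perturbation can move the logits, which is one way to read the paper's finding that these properties influence attack and defense success.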


Related research

- 03/02/2021 · A Survey On Universal Adversarial Attack
- 07/22/2021 · Unsupervised Detection of Adversarial Examples with Model Explanations
- 05/20/2022 · Robust Sensible Adversarial Learning of Deep Neural Networks for Image Classification
- 07/04/2020 · On Connections between Regularizations for Improving DNN Robustness
- 08/31/2021 · Segmentation Fault: A Cheap Defense Against Adversarial Machine Learning
- 12/18/2017 · When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time
- 01/21/2021 · A Person Re-identification Data Augmentation Method with Adversarial Defense Effect
