Identifying Adversarially Attackable and Robust Samples

01/30/2023
by   Vyas Raina, et al.

This work proposes a novel perspective on adversarial attacks by introducing the concepts of sample attackability and sample robustness. Adversarial attacks add small, imperceptible perturbations to the input that cause large, undesired changes to the output of deep learning models. Despite extensive research on generating adversarial attacks and building defense systems, there has been limited work on understanding adversarial attacks from an input-data perspective. We propose a deep-learning-based method for detecting the most attackable and most robust samples in an unseen dataset for an unseen target model. The method is based on a neural network that takes a sample as input and outputs a measure of its attackability or robustness. Evaluated across a range of models and attack methods, it effectively detects the samples most likely to be affected by adversarial attacks. Understanding sample attackability has important implications for sample-selection tasks: in active learning, for example, the acquisition function can be designed to select the most attackable samples, while in adversarial training, only the most attackable samples need be selected for augmentation.
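The core idea (a detector that maps a raw sample to an attackability score) can be illustrated with a minimal numpy sketch. Everything here is hypothetical and not from the paper: a fixed linear "target model", an L2 perturbation budget `eps` under which a sample counts as attackable when it lies within `eps` of the decision boundary, and a tiny MLP detector trained to predict that attackability label from the sample alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data in 2-D (hypothetical stand-in for a real dataset).
X = rng.normal(size=(400, 2))

# Target model: a fixed, unit-norm linear classifier sign(w.x + b).
w = np.array([1.0, 1.0]) / np.sqrt(2.0)
b = 0.0

# A sample is "attackable" under an L2 budget eps when its distance to the
# decision boundary is below eps: a perturbation of that size flips the label.
eps = 0.5
margin = np.abs(X @ w + b)               # distance to boundary (w is unit-norm)
attackable = (margin < eps).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Detector: one-hidden-layer MLP, x -> tanh(16) -> sigmoid, trained with
# full-batch gradient descent on binary cross-entropy.
H = 16
W1 = rng.normal(scale=0.5, size=(2, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(H, 1)); b2 = np.zeros(1)

y_t = attackable.reshape(-1, 1)
lr = 0.5
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    d2 = (p - y_t) / len(X)              # dBCE/dlogits
    gW2, gb2 = h.T @ d2, d2.sum(0)
    dh = (d2 @ W2.T) * (1.0 - h ** 2)    # backprop through tanh
    gW1, gb1 = X.T @ dh, dh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

pred = (p > 0.5).astype(float).ravel()
accuracy = (pred == attackable).mean()
```

In this toy setup the detector never sees the target model's weights or the margin, only the raw samples and the attackability labels, which mirrors the paper's goal of scoring samples for an unseen target model. The attackability labels themselves would, in practice, come from running actual attacks against seed models rather than from a known margin.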


Related research

06/21/2023 | Sample Attackability in Natural Language Adversarial Attacks
Adversarial attack research in natural language processing (NLP) has mad...

06/18/2021 | Less is More: Feature Selection for Adversarial Robustness with Compressive Counter-Adversarial Attacks
A common observation regarding adversarial attacks is that they mostly g...

06/25/2023 | Computational Asymmetries in Robust Classification
In the context of adversarial robustness, we make three strongly related...

03/27/2023 | Diffusion Denoised Smoothing for Certified and Adversarial Robust Out-Of-Distribution Detection
As the use of machine learning continues to expand, the importance of en...

05/05/2022 | Holistic Approach to Measure Sample-level Adversarial Vulnerability and its Utility in Building Trustworthy Systems
Adversarial attack perturbs an image with an imperceptible noise, leadin...

04/25/2022 | A Simple Structure For Building A Robust Model
As deep learning applications, especially programs of computer vision, a...
