PAC-learning in the presence of evasion adversaries

06/05/2018
by Daniel Cullina, et al.

The existence of evasion attacks during the test phase of machine learning algorithms represents a significant challenge to both their deployment and understanding. These attacks can be carried out by adding imperceptible perturbations to inputs to generate adversarial examples, and finding effective defenses and detectors has proven to be difficult. In this paper, we step away from the attack-defense arms race and seek to understand the limits of what can be learned in the presence of an evasion adversary. In particular, we extend the Probably Approximately Correct (PAC)-learning framework to account for the presence of an adversary. We first define corrupted hypothesis classes, which arise from standard binary hypothesis classes in the presence of an evasion adversary, and derive the Vapnik-Chervonenkis (VC)-dimension for these, denoted the adversarial VC-dimension. We then show that sample complexity upper bounds from the Fundamental Theorem of Statistical Learning can be extended to the case of evasion adversaries, where the sample complexity is controlled by the adversarial VC-dimension. We then explicitly derive the adversarial VC-dimension for halfspace classifiers in the presence of a sample-wise norm-constrained adversary of the type commonly studied for evasion attacks and show that it is the same as the standard VC-dimension, closing an open question. Finally, we prove that the adversarial VC-dimension can be either larger or smaller than the standard VC-dimension depending on the hypothesis class and adversary, making it an interesting object of study in its own right.
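
As a rough illustration of the setup described in the abstract, the robust risk, the corrupted hypothesis class, and the form of the resulting sample-complexity bound can be sketched as below. This is only a hedged sketch: the perturbation set N(x), the accuracy parameter alpha, and all symbol names are our illustrative assumptions, not necessarily the paper's exact notation.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

% Illustrative sketch only: $N(x)$, $\alpha$, and the symbol names are
% assumptions made for this example, not the paper's exact notation.
For a binary classifier $h : \mathcal{X} \to \{-1,+1\}$ and an evasion adversary
that may replace an input $x$ by any point in a constraint set
$N(x) = \{x' : \|x' - x\| \le \epsilon\}$, the robust (adversarial) risk is
\[
  R_{\mathrm{adv}}(h) \;=\; \Pr_{(x,y)\sim\mathcal{D}}\bigl[\exists\, x' \in N(x) :\; h(x') \neq y\bigr].
\]
The corrupted hypothesis class replaces each $h \in \mathcal{H}$ by the set-valued
map $\tilde{h}(x) = \{\, h(x') : x' \in N(x) \,\}$, and the adversarial VC-dimension
$d_{\mathrm{adv}}$ is the VC-dimension of this corrupted class. A sample-complexity
upper bound of the familiar agnostic-PAC form then holds with $d_{\mathrm{adv}}$
in place of the ordinary VC-dimension:
\[
  m(\alpha,\delta) \;=\; O\!\left(\frac{d_{\mathrm{adv}} + \log(1/\delta)}{\alpha^{2}}\right).
\]

\end{document}

For halfspace classifiers with a sample-wise norm-constrained adversary (the case treated explicitly in the paper), the abstract states that this adversarial VC-dimension coincides with the standard one, even though in general it can be larger or smaller.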


Related research

06/13/2019
Lower Bounds for Adversarially Robust PAC Learning
In this work, we initiate a formal study of probably approximately corre...

12/18/2019
Adversarial VC-dimension and Sample Complexity of Neural Networks
Adversarial attacks during the testing phase of neural networks pose a c...

02/10/2021
Adversarial Robustness: What fools you makes you stronger
We prove an exponential separation for the sample complexity between the...

05/12/2022
Sample Complexity Bounds for Robustly Learning Decision Lists against Evasion Attacks
A fundamental problem in adversarial machine learning is to quantify how...

05/18/2021
Learning and Certification under Instance-targeted Poisoning
In this paper, we study PAC learnability and certification under instanc...

10/02/2018
Can Adversarially Robust Learning Leverage Computational Hardness?
Making learners robust to adversarial perturbation at test time (i.e., e...

11/10/2017
Learning under p-Tampering Attacks
Mahloujifar and Mahmoody (TCC'17) studied attacks against learning algor...
