Adversarially Robust Learning with Unknown Perturbation Sets

02/03/2021
by Omar Montasser, et al.

We study the problem of learning predictors that are robust to adversarial examples with respect to an unknown perturbation set. Instead of assuming knowledge of the perturbation set, we rely on interaction with an adversarial attacker or access to attack oracles, and we examine several models for such interaction. We obtain upper bounds on the sample complexity, as well as upper and lower bounds on the number of required interactions (equivalently, the number of successful attacks) in each interaction model, in terms of the VC and Littlestone dimensions of the hypothesis class of predictors, without any assumptions on the perturbation set.
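To make the interaction model concrete, here is a minimal, hypothetical sketch in Python of one natural protocol: a halving-style learner over a finite hypothesis class that predicts by majority vote and queries a perfect attack oracle. Each successful attack eliminates at least half of the surviving hypotheses, so at most log2|H| successful attacks can occur; the paper's actual bounds replace this log|H| dependence with VC and Littlestone dimensions and handle infinite classes. All names, types, and protocol details below are illustrative assumptions, not the authors' algorithm.

```python
# A minimal, hypothetical sketch of one interaction model: the learner
# queries a perfect attack oracle and updates after each successful attack.
# Names and protocol details are illustrative, not the paper's algorithm.

from typing import Callable, Iterable, List, Optional, Tuple

Example = Tuple[int, int]                 # (instance, label); binary labels
Predictor = Callable[[int], int]
# An attack oracle takes (predictor, x, y) and returns a perturbed
# instance z on which the predictor errs, or None if no attack succeeds.
# The learner never sees the perturbation set itself, only oracle answers.
AttackOracle = Callable[[Predictor, int, int], Optional[int]]

def learn_with_attack_oracle(
    hypotheses: List[Predictor],
    sample: Iterable[Example],
    oracle: AttackOracle,
) -> Predictor:
    """Halving-style learner: predict by majority vote over the surviving
    hypotheses; each successful attack eliminates at least half of them,
    so at most log2(len(hypotheses)) successful attacks can occur."""
    sample = list(sample)
    version_space = list(hypotheses)

    def predictor(x: int) -> int:
        # Majority vote over the current version space (ties go to 1).
        votes = sum(h(x) for h in version_space)
        return 1 if 2 * votes >= len(version_space) else 0

    made_progress = True
    while made_progress:
        made_progress = False
        for (x, y) in sample:
            z = oracle(predictor, x, y)   # ask the attacker for an attack
            if z is None:                 # robustly correct on this point
                continue
            # The majority vote was wrong at z, so keeping only the
            # hypotheses that label z correctly removes at least half.
            version_space = [h for h in version_space if h(z) == y]
            # Assumes realizability: some hypothesis in the class is
            # robustly correct on the sample, so it always survives.
            assert version_space, "realizability assumption violated"
            made_progress = True
    return predictor
```

The loop terminates once a full pass over the sample produces no successful attack, i.e., the returned predictor is robustly correct on every sample point, even though the learner never observed the perturbation set directly.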
