Provable robustness against all adversarial l_p-perturbations for p≥ 1

05/27/2019
by   Francesco Croce, et al.
In recent years, several adversarial attacks and defenses have been proposed. Often, seemingly robust models turn out to be non-robust when more sophisticated attacks are used. One way out of this dilemma is provable robustness guarantees. While provably robust models for specific l_p-perturbation models have been developed, they are still vulnerable to other l_q-perturbations. We propose a new regularization scheme, MMR-Universal, for ReLU networks which enforces robustness with respect to l_1- and l_∞-perturbations, and show that this leads to provably robust models with respect to any l_p-norm for p ≥ 1.
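To see why guarantees for l_1 and l_∞ alone can extend to every intermediate l_p-norm, a simple baseline already follows from the standard norm inequalities ||x||_∞ ≤ ||x||_p and ||x||_1 ≤ d^(1−1/p)·||x||_p in dimension d. The sketch below computes the (loose) certified l_p radius these inequalities imply; it is an elementary illustration, not the paper's MMR-Universal bound, which uses the convex hull of the l_1- and l_∞-balls and is strictly stronger. The radii and dimension are hypothetical.

```python
def naive_lp_radius(eps_1: float, eps_inf: float, d: int, p: float) -> float:
    """Elementary certified l_p radius implied by separate l_1 and l_inf
    robustness guarantees, via the norm inequalities
        ||x||_inf <= ||x||_p   and   ||x||_1 <= d**(1 - 1/p) * ||x||_p.
    The first gives B_p(eps_inf) inside the l_inf ball of radius eps_inf;
    the second gives B_p(eps_1 * d**(1/p - 1)) inside the l_1 ball of
    radius eps_1. Taking the better of the two yields a valid (but loose)
    l_p guarantee -- the convex-hull bound in the paper improves on this.
    """
    return max(eps_inf, eps_1 * d ** (1.0 / p - 1.0))

# Hypothetical numbers: d = 784 input dimensions (MNIST-sized images),
# certified radii eps_1 = 1.0 and eps_inf = 0.1.
print(naive_lp_radius(1.0, 0.1, d=784, p=2))
```

Note how the l_1 contribution shrinks by a factor of d^(1/p−1) in high dimension, which is exactly why the naive union of the two balls is weak and a tighter joint analysis, as in MMR-Universal, pays off.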
