Provable Robustness of Adversarial Training for Learning Halfspaces with Noise

04/19/2021
by Difan Zou, et al.

We analyze the properties of adversarial training for learning adversarially robust halfspaces in the presence of agnostic label noise. Denoting by $\mathsf{OPT}_{p,r}$ the best robust classification error achieved by a halfspace that is robust to perturbations in $\ell_p$ balls of radius $r$, we show that adversarial training on the standard binary cross-entropy loss yields adversarially robust halfspaces up to (robust) classification error $\tilde{O}(\sqrt{\mathsf{OPT}_{2,r}})$ for $p=2$, and $\tilde{O}(d^{1/4}\sqrt{\mathsf{OPT}_{\infty,r}} + d^{1/2}\,\mathsf{OPT}_{\infty,r})$ when $p=\infty$. Our results hold for distributions satisfying anti-concentration properties enjoyed by, among others, log-concave isotropic distributions. We additionally show that if one instead uses a nonconvex sigmoidal loss, adversarial training yields halfspaces with an improved robust classification error of $O(\mathsf{OPT}_{2,r})$ for $p=2$, and $O(d^{1/4}\,\mathsf{OPT}_{\infty,r})$ when $p=\infty$. To the best of our knowledge, this is the first work to show that adversarial training provably yields robust classifiers in the presence of noise.
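
As a concrete illustration of the procedure analyzed above: for a halfspace $x \mapsto \mathrm{sign}(w^\top x)$, the inner maximization of adversarial training has a closed form. For labels $y \in \{\pm 1\}$, any decreasing loss $\ell$, and dual exponents $1/p + 1/q = 1$, Hölder's inequality gives $\max_{\|\delta\|_p \le r} \ell(y\, w^\top (x+\delta)) = \ell(y\, w^\top x - r\|w\|_q)$, so the worst-case perturbation simply shrinks every margin by $r\|w\|_q$. The sketch below applies this closed form to adversarial training on the binary cross-entropy (logistic) loss; the function names, step size, iteration count, and toy data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def robust_logistic_loss(w, X, y, r, p=2):
    """Worst-case (adversarial) logistic loss for a halfspace under l_p
    perturbations of radius r. The inner max over the perturbation reduces
    to shifting each margin y * <w, x> down by r * ||w||_q, where q is the
    dual exponent of p (q=2 for p=2, q=1 for p=infinity)."""
    q = 2 if p == 2 else 1
    margins = y * (X @ w) - r * np.linalg.norm(w, ord=q)
    return np.mean(np.logaddexp(0.0, -margins))    # log(1 + e^{-margin})

def adversarial_train(X, y, r, p=2, lr=0.1, n_iters=500):
    """Plain gradient descent on the robust logistic loss (a sketch)."""
    n, d = X.shape
    q = 2 if p == 2 else 1
    w = np.zeros(d)
    for _ in range(n_iters):
        margins = y * (X @ w) - r * np.linalg.norm(w, ord=q)
        # derivative of log(1 + e^{-m}) w.r.t. the margin m, written
        # via tanh for numerical stability:
        s = -0.5 * (1.0 - np.tanh(margins / 2.0))  # = -1 / (1 + e^{m})
        # (sub)gradient of the dual-norm penalty r * ||w||_q
        if q == 2:
            norm_grad = w / (np.linalg.norm(w) + 1e-12)
        else:
            norm_grad = np.sign(w)
        grad = (X.T @ (s * y)) / n - r * np.mean(s) * norm_grad
        w -= lr * grad
    return w

# Toy usage: a noisy halfspace in d = 5 dimensions.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
w_star = np.ones(5) / np.sqrt(5.0)
y = np.sign(X @ w_star)
y[rng.random(200) < 0.05] *= -1.0              # inject ~5% label noise
w_hat = adversarial_train(X, y, r=0.1, p=2)
print("robust loss:", robust_logistic_loss(w_hat, X, y, r=0.1, p=2))
```

The nonconvex sigmoidal loss mentioned in the abstract would simply swap in a different decreasing loss $\ell$ above; the closed-form inner maximization is unchanged.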
