Learning under p-Tampering Attacks

11/10/2017
by Saeed Mahloujifar, et al.

Mahloujifar and Mahmoody (TCC'17) studied attacks on learning algorithms using a special case of Valiant's malicious noise, called p-tampering, in which the adversary may change each training example with independent probability p but must still use correctly labeled examples. They demonstrated the power of such attacks by increasing the error probability in the so-called `targeted' poisoning model, in which the adversary's goal is to increase the loss of the generated hypothesis on a particular test example. At the heart of their attack is an efficient algorithm for biasing the average output of any bounded real-valued function through p-tampering. In this work, we present new attacks for biasing the average output of bounded real-valued functions, improving upon the biasing attacks of Mahloujifar and Mahmoody. These improved biasing attacks directly imply improved p-tampering attacks against learners in the targeted poisoning model. As a bonus, our attacks come with a considerably simpler analysis than the previous ones.

We also study the possibility of PAC learning under p-tampering attacks in the non-targeted (a.k.a. indiscriminate) setting, where the adversary's goal is to increase the risk of the generated hypothesis over a random test example. We show that PAC learning is possible under p-tampering poisoning attacks essentially whenever it is possible in the realizable setting without attacks. We further show that PAC learning under `no-mistake' adversarial noise becomes impossible if the adversary may choose which examples to tamper with (still limited to a p fraction of them) and substitute them with adversarially chosen ones. Our formal model for such `bounded-budget' tampering attackers is inspired by the notions of (strong) adaptive corruption in secure multi-party computation.
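
To make the tampering model concrete, below is a minimal Python sketch (not taken from the paper) of a p-tampering channel that biases the empirical average of a bounded function: with independent probability p, an example is replaced by the best of a few fresh honest draws, so the tampered data still consists of correctly generated, correctly labeled samples. The greedy one-step lookahead rule and the candidates parameter are illustrative assumptions, not the authors' biasing algorithm or its provable bias guarantees.

    import random


    def p_tamper_bias(sample_dist, f, n, p, candidates=8):
        # Toy p-tampering channel: with independent probability p, replace an
        # example by the best of a few fresh draws from the SAME distribution,
        # chosen greedily to raise the bounded function f of the data so far.
        # Tampered examples are still honestly generated; only the selection
        # among honest candidates is adversarial.
        dataset = []
        for _ in range(n):
            x = sample_dist()
            if random.random() < p:
                options = [x] + [sample_dist() for _ in range(candidates)]
                x = max(options, key=lambda c: f(dataset + [c]))
            dataset.append(x)
        return dataset


    if __name__ == "__main__":
        # Example: bias the empirical mean of fair coin flips upward with p = 0.2.
        dist = lambda: random.randint(0, 1)
        mean = lambda s: sum(s) / len(s)
        honest = [dist() for _ in range(20000)]
        tampered = p_tamper_bias(dist, mean, 20000, p=0.2)
        print("honest mean:  ", sum(honest) / len(honest))
        print("tampered mean:", sum(tampered) / len(tampered))

In this toy run the tampered mean rises from roughly 0.5 to roughly 0.6, illustrating how even label-respecting p-tampering can noticeably bias a bounded statistic of the training data.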
