How To Backdoor Federated Learning

07/02/2018
by Eugene Bagdasaryan, et al.

Federated learning enables multiple participants to jointly construct a deep learning model without sharing their private training data with each other. For example, multiple smartphones can jointly train a predictive keyboard model without revealing what individual users type into their phones. We demonstrate that any participant in federated learning can introduce hidden backdoor functionality into the joint global model, e.g., to ensure that an image classifier assigns an attacker-chosen label to images with certain features, or that a next-word predictor completes certain sentences with an attacker-chosen word. We design and evaluate a new "constrain-and-scale" model-poisoning methodology and show that it greatly outperforms data poisoning. An attacker selected in just a single round of federated learning can cause the global model to reach 100% accuracy on the backdoor task. We evaluate the attack under different assumptions and attack scenarios for standard federated learning tasks. We also show how to evade anomaly detection-based defenses by incorporating the evasion into the loss function when training the attack model.
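The abstract describes two ingredients: folding an anomaly-evasion term into the attacker's training objective (the "constrain-and-scale" idea) and scaling the resulting update so that it survives averaging with the other participants' updates. The sketch below shows one way these pieces could fit together in PyTorch; it is an illustrative assumption, not the authors' reference implementation, and the names (anomaly_distance, constrain_and_scale_update, alpha, gamma), the use of an L2 distance to the global model as the anomaly metric, and the choice of the participant count as the scaling factor are all hypothetical.

# Minimal sketch of a constrain-and-scale style update (illustrative, not the paper's code).
import torch
import torch.nn.functional as F


def anomaly_distance(model, global_model):
    # Stand-in anomaly metric: L2 distance between the local and global weights.
    dist = 0.0
    for p_local, p_global in zip(model.parameters(), global_model.parameters()):
        dist = dist + torch.sum((p_local - p_global.detach()) ** 2)
    return torch.sqrt(dist)


def constrain_and_scale_update(model, global_model, loader, n_participants,
                               alpha=0.7, epochs=5, lr=0.01):
    # Train on a mix of clean and backdoored examples while constraining the
    # update to look benign, then scale it for model replacement.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            task_loss = F.cross_entropy(model(x), y)
            # Evasion term incorporated directly into the loss function.
            loss = alpha * task_loss + (1 - alpha) * anomaly_distance(model, global_model)
            loss.backward()
            opt.step()

    # Scale the delta so that after the server averages n_participants updates,
    # the global model is approximately replaced by the backdoored model.
    gamma = n_participants
    g_state = global_model.state_dict()
    scaled_state = {}
    for name, p_local in model.state_dict().items():
        scaled_state[name] = g_state[name] + gamma * (p_local - g_state[name])
    return scaled_state

In this sketch, a larger alpha weights the backdoor objective more heavily, while a smaller alpha keeps the submitted update closer to the current global model at the cost of slower backdoor injection.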
