RoFL: Attestable Robustness for Secure Federated Learning

by Lukas Burkhalter et al.

Federated Learning is an emerging decentralized machine learning paradigm that allows a large number of clients to train a joint model without sharing their private data. Participants instead share only the ephemeral updates necessary to train the model. To ensure the confidentiality of these client updates, Federated Learning systems employ secure aggregation: clients encrypt their gradient updates, and only the aggregated model is revealed to the server. Achieving this level of data protection, however, presents new challenges to the robustness of Federated Learning, i.e., its ability to tolerate failures and attacks. In this setting, a malicious client can easily exert influence on the model's behavior without being detected. As Federated Learning is deployed in practice in a range of sensitive applications, its robustness is growing in importance. In this paper, we take a step towards understanding and improving the robustness of secure Federated Learning. We begin with a systematic study that evaluates and analyzes existing attack vectors, discusses potential defenses, and assesses their effectiveness. We then present RoFL, a secure Federated Learning system that improves robustness against malicious clients through input checks on the encrypted model updates. RoFL extends Federated Learning's secure aggregation protocol to allow expressing a variety of properties and constraints on model updates using zero-knowledge proofs. To enable RoFL to scale to typical Federated Learning settings, we introduce several ML and cryptographic optimizations specific to Federated Learning. We implement and evaluate a prototype of RoFL and show that realistic ML models can be trained in a reasonable time while improving robustness.
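To make the tension the abstract describes concrete, the following is a minimal toy sketch of pairwise-masked secure aggregation with a per-client norm bound. In RoFL the bound is enforced in zero-knowledge over encrypted updates; here the check runs in the clear purely for illustration. All names (`pairwise_masks`, `aggregate`, `NORM_BOUND`) are illustrative assumptions, not RoFL's actual API.

```python
import random

DIM = 4
NORM_BOUND = 10.0  # assumed per-client L2 bound (illustrative)

def pairwise_masks(client_ids, dim, seed_base=42):
    """For each client pair (i, j), derive a shared mask from a common
    seed; client i adds it and client j subtracts it, so all masks cancel
    in the server-side sum."""
    masks = {cid: [0.0] * dim for cid in client_ids}
    for i in client_ids:
        for j in client_ids:
            if i < j:
                rng = random.Random((seed_base, i, j))
                m = [rng.uniform(-1, 1) for _ in range(dim)]
                for k in range(dim):
                    masks[i][k] += m[k]
                    masks[j][k] -= m[k]
    return masks

def l2_norm(v):
    return sum(x * x for x in v) ** 0.5

def aggregate(updates):
    """Server-side: sum the masked updates. Pairwise masks cancel, so the
    server learns only the aggregate, never individual updates."""
    client_ids = sorted(updates)
    # Norm check: a plaintext stand-in for RoFL's zero-knowledge proof
    # that each encrypted update satisfies the constraint.
    for cid, u in updates.items():
        assert l2_norm(u) <= NORM_BOUND, f"client {cid} exceeds bound"
    masks = pairwise_masks(client_ids, DIM)
    masked = {cid: [u[k] + masks[cid][k] for k in range(DIM)]
              for cid, u in updates.items()}
    return [sum(masked[cid][k] for cid in client_ids) for k in range(DIM)]

updates = {0: [1.0, 0.0, 2.0, -1.0],
           1: [0.5, 0.5, 0.5, 0.5],
           2: [-1.0, 1.0, 0.0, 0.0]}
agg = aggregate(updates)
```

The aggregate matches the plain sum of the client updates (up to floating-point rounding), while any update exceeding the bound is rejected; RoFL's contribution is performing that rejection without ever seeing the update in the clear.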


Related papers:

- FedCIP: Federated Client Intellectual Property Protection with Traitor Tracking
- Eluding Secure Aggregation in Federated Learning via Model Inconsistency
- Secure Federated Submodel Learning
- Hiding in Plain Sight: Disguising Data Stealing Attacks in Federated Learning
- Incentivizing Honesty among Competitors in Collaborative Learning and Optimization
- Client-specific Property Inference against Secure Aggregation in Federated Learning
- Free-rider Attacks on Model Aggregation in Federated Learning
