Secure and Privacy-Preserving Federated Learning via Co-Utility

by Josep Domingo-Ferrer, et al.

The decentralized nature of federated learning, which often leverages edge devices, makes it vulnerable to attacks on privacy and security. The privacy risk for a peer is that the model update she computes on her private data may, when sent to the model manager, leak information about those data. Security attacks are even more obvious: one or several malicious peers return wrong model updates in order to disrupt the learning process and cause a wrong model to be learned. In this paper we build a federated learning framework that offers privacy to the participating peers as well as security against Byzantine and poisoning attacks. Our framework consists of several protocols that provide strong privacy to the participating peers via unlinkable anonymity and that are rationally sustainable based on the co-utility property: no rational party has an interest in deviating from the proposed protocols. We leverage the notion of co-utility to build a decentralized co-utile reputation management system that gives parties an incentive to adhere to the protocols. Unlike privacy protection via differential privacy, our approach preserves the values of the model updates and hence the accuracy of plain federated learning. Unlike privacy protection via update aggregation, it preserves the ability to detect bad model updates, while substantially reducing the computational overhead of methods based on homomorphic encryption.
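To make the idea of detecting bad model updates while weighting honest peers by reputation concrete, here is a minimal illustrative sketch. It is not the paper's protocol: the function name, the coordinate-wise-median reference point, and the distance-based outlier rule are all assumptions chosen for brevity.

```python
# Illustrative sketch (not the paper's protocol): reputation-weighted
# aggregation that rejects outlier model updates before averaging.
# The median-distance outlier rule and all names are assumptions.

def aggregate(updates, reputations, tolerance=3.0):
    """Aggregate model updates, excluding suspected Byzantine peers.

    updates: dict peer_id -> list[float] (one model-update vector per peer)
    reputations: dict peer_id -> float in (0, 1]
    Returns (aggregated_update, accepted_peer_ids).
    """
    dim = len(next(iter(updates.values())))
    # Coordinate-wise median as a robust reference point.
    medians = [sorted(u[i] for u in updates.values())[len(updates) // 2]
               for i in range(dim)]

    def dist(u):
        # Euclidean distance from the robust reference point.
        return sum((a - b) ** 2 for a, b in zip(u, medians)) ** 0.5

    dists = {p: dist(u) for p, u in updates.items()}
    median_dist = sorted(dists.values())[len(dists) // 2]
    # Reject updates far from the robust centre (suspected bad updates).
    accepted = [p for p, d in dists.items()
                if d <= tolerance * max(median_dist, 1e-12)]

    # Reputation-weighted average over the accepted updates only.
    total = sum(reputations[p] for p in accepted)
    agg = [sum(reputations[p] * updates[p][i] for p in accepted) / total
           for i in range(dim)]
    return agg, accepted


# Hypothetical round: peer "d" submits a poisoned update.
updates = {"a": [1.0, 1.0], "b": [1.1, 0.9],
           "c": [0.9, 1.1], "d": [100.0, -100.0]}
reps = {"a": 1.0, "b": 1.0, "c": 1.0, "d": 1.0}
agg, kept = aggregate(updates, reps)  # "d" is rejected as an outlier
```

Note that, in the spirit of the abstract, the accepted updates are averaged at their original values (no noise is added), so accuracy matches plain federated averaging whenever all peers behave honestly.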


