Efficient Secure Aggregation for Privacy-Preserving Federated Machine Learning

04/07/2023
by Rouzbeh Behnia, et al.

Federated learning introduces a novel approach to training machine learning (ML) models on distributed data while preserving users' data privacy: the model is distributed to clients, which train on their local data, and the final model is computed at a central server. To prevent data leakage from the local model updates, various secure aggregation protocols for privacy-preserving federated learning have been proposed. Despite their merits, most existing protocols still impose high communication and computation overhead on the participating entities and may not be optimized to efficiently handle the large update vectors of ML models. In this paper, we present E-seaML, a novel secure aggregation protocol with high communication and computation efficiency. E-seaML requires only one round of communication in the aggregation phase and is up to 318x and 1224x faster for the user and the server, respectively, compared to its most efficient counterpart. E-seaML also allows the integrity of the final model to be verified efficiently by having the aggregation server generate a proof of honest aggregation for the participating users. This high efficiency and versatility are achieved by extending (and weakening) the assumption that existing works place on the set of honest parties (i.e., users) to a set of assisting nodes; that is, we assume a set of assisting nodes which assist the aggregation server in the aggregation process. Given the minimal computation and communication overhead on the assisting nodes, we also discuss how a rotating set of users could serve as assisting nodes in each iteration. We provide an open-source implementation of E-seaML for public verifiability and testing.
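The abstract does not spell out the protocol, but the general idea behind assisting-node secure aggregation can be illustrated with additive masking: each user uploads a masked update to the server and hands its masks to the assisting nodes, which only ever see random values; the server then subtracts the aggregated masks to recover the sum of all updates in a single round. The sketch below is a toy illustration of that pattern, not the E-seaML construction itself; the modulus, vector sizes, and function names are assumptions made for this example.

```python
# Illustrative sketch (not the E-seaML construction itself): one-round secure
# aggregation via additive masking with a small set of assisting nodes.
# All names, sizes, and the modulus are hypothetical choices for this example.
import secrets

import numpy as np

MODULUS = 2**32        # all arithmetic is done modulo a fixed power of two
VECTOR_LEN = 8         # toy model-update length
NUM_USERS = 5
NUM_NODES = 3          # assisting nodes


def random_mask():
    """A uniformly random vector in Z_MODULUS, used as a one-time pad."""
    return np.array([secrets.randbelow(MODULUS) for _ in range(VECTOR_LEN)],
                    dtype=np.uint64)


def user_round(update):
    """Each user hides its update under one fresh mask per assisting node."""
    masks = [random_mask() for _ in range(NUM_NODES)]
    masked_update = (update + sum(masks)) % MODULUS   # sent to the server
    return masked_update, masks                       # masks[i] sent to node i


# --- one aggregation round ------------------------------------------------
updates = [np.arange(VECTOR_LEN, dtype=np.uint64) + u for u in range(NUM_USERS)]

masked_updates = []
node_shares = [[] for _ in range(NUM_NODES)]
for upd in updates:
    masked, masks = user_round(upd)
    masked_updates.append(masked)
    for node_id, mask in enumerate(masks):
        node_shares[node_id].append(mask)

# Each assisting node sees only random masks and returns their sum.
node_aggregates = [sum(shares) % MODULUS for shares in node_shares]

# The server strips the aggregated masks to recover the sum of plain updates,
# without ever seeing any individual update.
aggregate = (sum(masked_updates) - sum(node_aggregates)) % MODULUS
assert np.array_equal(aggregate, sum(updates) % MODULUS)
```

Individual privacy here rests on the masks being uniformly random and each assisting node seeing only mask shares; the single communication round and the proof of honest aggregation described in the paper go beyond this simplified pattern.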

Related research

Sparsified Secure Aggregation for Privacy-Preserving Federated Learning (12/23/2021)
Secure aggregation is a popular protocol in privacy-preserving federated...

PrivFL: Practical Privacy-preserving Federated Regressions on High-dimensional Data over Mobile Networks (04/05/2020)
Federated Learning (FL) enables a large number of users to jointly learn...

Byzantine-Resistant Secure Aggregation for Federated Learning Based on Coded Computing and Vector Commitment (02/20/2023)
In this paper, we propose an efficient secure aggregation scheme for fed...

Secure Bayesian Federated Analytics for Privacy-Preserving Trend Detection (07/28/2021)
Federated analytics has many applications in edge computing, its use can...

Fully Privacy-Preserving Federated Representation Learning via Secure Embedding Aggregation (06/18/2022)
We consider a federated representation learning framework, where with th...

MixNN: Protection of Federated Learning Against Inference Attacks by Mixing Neural Network Layers (09/26/2021)
Machine Learning (ML) has emerged as a core technology to provide learni...

Privacy-Preserving Object Detection Localization Using Distributed Machine Learning: A Case Study of Infant Eyeblink Conditioning (10/14/2020)
Distributed machine learning is becoming a popular model-training method...