FSSA: Efficient 3-Round Secure Aggregation for Privacy-Preserving Federated Learning

by Fucai Luo, et al.

Federated learning (FL) allows a large number of clients to collaboratively train machine learning (ML) models by sending only their local gradients to a central server for aggregation in each training iteration, without sending their raw training data. Unfortunately, recent attacks on FL demonstrate that local gradients may leak information about local training data. In response to such attacks, Bonawitz et al. (CCS 2017) proposed a secure aggregation protocol that allows a server to compute the sum of clients' local gradients in a secure manner. However, their protocol requires at least 4 rounds of communication between each client and the server in each training iteration. The number of communication rounds is closely related not only to the total communication cost but also to the ML model accuracy, since more rounds increase the chance of client dropouts. In this paper, we propose FSSA, a 3-round secure aggregation protocol that is efficient in terms of computation and communication, and resilient to client dropouts. We prove the security of FSSA in the honest-but-curious setting and show that security is maintained even if an arbitrarily chosen subset of clients drops out at any time. We evaluate the performance of FSSA and show that its computation and communication overhead remains low even on large datasets. Furthermore, we conduct an experimental comparison between FSSA and Bonawitz et al.'s protocol. The results show that, in addition to reducing the number of communication rounds, FSSA achieves a significant improvement in computational efficiency.
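The core idea behind pairwise-masking secure aggregation (as in Bonawitz et al.'s protocol, which FSSA improves upon) can be illustrated with a minimal sketch: each pair of clients agrees on a shared random mask, one client adds it and the other subtracts it, so the masks cancel in the server's sum while individual masked gradients reveal nothing on their own. This is a toy illustration, not FSSA's actual construction; in a real protocol the pairwise seeds come from a key agreement (e.g. Diffie-Hellman), not a shared RNG, and arithmetic is done modulo a large integer.

```python
import random

def pairwise_masks(client_ids, dim, seed_base="demo"):
    # One shared random mask vector per client pair (i, j) with i < j.
    # Hypothetical setup: real protocols derive these seeds via key agreement.
    masks = {}
    for i in client_ids:
        for j in client_ids:
            if i < j:
                rng = random.Random(f"{seed_base}-{i}-{j}")
                masks[(i, j)] = [rng.randrange(1 << 16) for _ in range(dim)]
    return masks

def mask_gradient(cid, grad, client_ids, masks):
    # Client cid adds +m_{cid,j} for each j > cid and subtracts m_{j,cid}
    # for each j < cid, so every mask cancels pairwise in the server's sum.
    out = list(grad)
    for j in client_ids:
        if j == cid:
            continue
        pair = (cid, j) if cid < j else (j, cid)
        sign = 1 if cid < j else -1
        for k in range(len(out)):
            out[k] += sign * masks[pair][k]
    return out

clients = [0, 1, 2]
grads = {0: [1, 2], 1: [3, 4], 2: [5, 6]}
masks = pairwise_masks(clients, dim=2)
masked = [mask_gradient(c, grads[c], clients, masks) for c in clients]
total = [sum(v) for v in zip(*masked)]
# total equals the plain gradient sum [9, 12]: all pairwise masks cancel.
```

Handling dropouts is exactly what makes the real protocols involved: if a client disappears after masking, its surviving peers must help the server reconstruct (or cancel) the missing pairwise masks, which is where the extra communication rounds come from.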


Related papers:

FastSecAgg: Scalable Secure Aggregation for Privacy-Preserving Federated Learning

Flamingo: Multi-Round Single-Server Secure Aggregation with Applications to Private Federated Learning

SIMC 2.0: Improved Secure ML Inference Against Malicious Clients

Communication-Efficient Cluster Federated Learning in Large-scale Peer-to-Peer Networks

Practical and Light-weight Secure Aggregation for Federated Submodel Learning

Secure Decision Forest Evaluation

Secure Computation of the kth-Ranked Element in a Star Network
