Quantification of the Leakage in Federated Learning

by Zhaorui Li et al.

With the growing emphasis on users' privacy, federated learning has become increasingly popular, and many architectures have been proposed to strengthen its security. Most of them rest on the assumption that the gradients exchanged during training cannot leak information about the underlying data. Recent work, however, has shown that shared gradients may in fact expose the training data. In this paper, we analyze this leakage in a federated approximated logistic regression model and show that the shared gradients can leak the complete training data when every element of the input is either 0 or 1.
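To see why binary inputs are fully recoverable, consider plain (non-approximated) logistic regression on a single example: the gradient of the loss with respect to the weights is (sigma(w.x) - y) * x, a scalar multiple of x, so whenever sigma(w.x) != y the set of nonzero gradient coordinates identifies x in {0,1}^d exactly. The sketch below is illustrative only; the setup (a single-example gradient with the exact sigmoid rather than the paper's approximation) and all names are assumptions, not the paper's construction.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def logistic_gradient(w, x, y):
        # Per-example gradient of the logistic loss: (sigma(w.x) - y) * x
        return (sigmoid(w @ x) - y) * x

    def recover_binary_input(grad, eps=1e-12):
        # If x is in {0,1}^d, grad = c * x for a scalar c != 0,
        # so x_i = 1 exactly where the gradient coordinate is nonzero.
        return (np.abs(grad) > eps).astype(int)

    rng = np.random.default_rng(0)
    d = 16
    w = rng.normal(size=d)             # current model weights
    x = rng.integers(0, 2, size=d)     # secret binary training example
    y = 1                              # its label

    grad = logistic_gradient(w, x, y)  # what a participant would share
    x_hat = recover_binary_input(grad)
    assert np.array_equal(x_hat, x)
    print("recovered:", x_hat)

The same scalar-times-x structure holds for common polynomial (Taylor) approximations of the sigmoid used in privacy-preserving training, which is why the recovery argument is expected to carry over to the approximated model.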


