Advancing Personalized Federated Learning: Group Privacy, Fairness, and Beyond

by Filippo Galli, et al.

Federated learning (FL) is a framework for training machine learning models in a distributed and collaborative manner. During training, a set of participating clients process their locally stored data, sharing only the model updates obtained by minimizing a cost function over their local inputs. FL was proposed as a stepping-stone towards privacy-preserving machine learning, but it has been shown to be vulnerable to issues such as leakage of private information, lack of personalization of the model, and the possibility of producing a trained model that is fairer to some groups than to others. In this paper, we address the triadic interaction among personalization, privacy guarantees, and fairness of models trained within the FL framework. Differential privacy and its variants have been studied and applied as the cutting-edge standard for providing formal privacy guarantees. However, clients in FL often hold very diverse datasets representing heterogeneous communities, making it important to protect their sensitive information while still ensuring that the trained model is fair to its users. To attain this objective, we propose a method that provides group privacy guarantees by exploiting d-privacy (also known as metric privacy). d-privacy is a localized form of differential privacy that relies on a metric-based obfuscation mechanism to preserve the topological distribution of the original data. Besides enabling personalized model training in a federated setting and providing formal privacy guarantees, this method achieves significantly better group fairness, as measured by a variety of standard metrics, than a global model trained within the classical FL paradigm. We provide theoretical justifications for its applicability, as well as experimental validation on real-world datasets illustrating the working of the proposed method.
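To make the metric-based obfuscation concrete, the sketch below shows one standard way to sample additive noise satisfying ε·d-privacy under the Euclidean metric on R^n: the noise density is proportional to exp(−ε·‖z‖), which can be sampled by drawing a uniformly random direction and a Gamma(n, 1/ε)-distributed radius. This is a minimal, generic illustration of d-privacy noise, not the paper's exact mechanism; the function name and parameters are our own.

```python
import numpy as np

def d_private_obfuscate(x, epsilon, rng=None):
    """Add noise to vector x so the output satisfies epsilon * d_Euclidean-privacy.

    The noise density is proportional to exp(-epsilon * ||z||) on R^n.
    Sampling: a uniform direction on the unit sphere, and a radius drawn
    from Gamma(shape=n, scale=1/epsilon), which is the radial marginal
    of that density.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    n = x.size
    direction = rng.normal(size=n)          # isotropic Gaussian ...
    direction /= np.linalg.norm(direction)  # ... normalized: uniform direction
    radius = rng.gamma(shape=n, scale=1.0 / epsilon)
    return x + radius * direction

# e.g., obfuscating a client's model update before sharing it:
# noisy_update = d_private_obfuscate(update, epsilon=0.5)
```

A larger ε yields noise concentrated closer to the true point (expected radius n/ε), so nearby inputs remain nearly indistinguishable while the overall geometry of the data is preserved.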




Group privacy for personalized federated learning

Federated learning is a type of collaborative machine learning, where pa...

Fairness and Privacy-Preserving in Federated Learning: A Survey

Federated learning (FL) as distributed machine learning has gained popul...

Handling Group Fairness in Federated Learning Using Augmented Lagrangian Approach

Federated learning (FL) has garnered considerable attention due to its p...

FedVal: Different good or different bad in federated learning

Federated learning (FL) systems are susceptible to attacks from maliciou...

Privacy Preserving Bayesian Federated Learning in Heterogeneous Settings

In several practical applications of federated learning (FL), the client...

CANIFE: Crafting Canaries for Empirical Privacy Measurement in Federated Learning

Federated Learning (FL) is a setting for training machine learning model...

Federated Learning With Highly Imbalanced Audio Data

Federated learning (FL) is a privacy-preserving machine learning method ...
