Private Federated Learning with Domain Adaptation

12/13/2019
by Daniel Peterson, et al.

Federated Learning (FL) is a distributed machine learning (ML) paradigm that enables multiple parties to jointly train a shared model without sharing their data with any other party, offering advantages in both scale and privacy. We propose a framework that augments this collaborative model-building with per-user domain adaptation. We show, on both real and synthetic data, that this technique improves model accuracy for all users, and that the improvement is much more pronounced when differential privacy bounds are imposed on the FL model.
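The idea of per-user domain adaptation on top of a federated model can be illustrated with a toy sketch. This is an assumption-laden illustration, not the paper's actual method: it uses 1-D linear regression, plain FedAvg for the shared model, and a per-user blend of the global model with a privately fitted local model, where the mixture weight is chosen per user to minimise local error.

```python
import numpy as np

rng = np.random.default_rng(0)

def fedavg_round(w_global, user_data, lr=0.1, local_steps=20):
    # One round of federated averaging for 1-D linear regression y = w * x:
    # each user starts from the global weight, runs local SGD, and the
    # server averages the resulting weights.
    updates = []
    for x, y in user_data:
        w = w_global
        for _ in range(local_steps):
            grad = np.mean((w * x - y) * x)
            w -= lr * grad
        updates.append(w)
    return float(np.mean(updates))

# Synthetic users whose true slopes differ (a simple form of domain shift).
slopes = [1.0, 1.2, 3.0]
user_data = []
for s in slopes:
    x = rng.normal(size=100)
    y = s * x + 0.1 * rng.normal(size=100)
    user_data.append((x, y))

# Train the shared model with a few FedAvg rounds.
w_global = 0.0
for _ in range(10):
    w_global = fedavg_round(w_global, user_data)

def adapt(x, y, w_global):
    # Per-user adaptation: fit a private local model, then pick the mixture
    # weight alpha in [0, 1] that minimises the squared error of the blend
    # alpha * global + (1 - alpha) * local on the user's own data.
    w_local = np.sum(x * y) / np.sum(x * x)          # least-squares fit
    pred_g, pred_l = w_global * x, w_local * x
    num = np.sum((y - pred_l) * (pred_g - pred_l))
    den = np.sum((pred_g - pred_l) ** 2)
    alpha = float(np.clip(num / den, 0.0, 1.0)) if den > 0 else 0.0
    return alpha, w_local

for (x, y), s in zip(user_data, slopes):
    alpha, w_local = adapt(x, y, w_global)
    blended = alpha * w_global * x + (1 - alpha) * w_local * x
    mse_global = np.mean((w_global * x - y) ** 2)
    mse_blend = np.mean((blended - y) ** 2)
    # Since alpha = 1 recovers the pure global model, the optimal blend
    # is never worse than the global model on the user's own data.
    assert mse_blend <= mse_global + 1e-9
```

Users whose data matches the population average lean on the global model (alpha near 1), while outlier users shift weight to their local model; the intuition behind the abstract's claim is that a differentially private global model is noisier, so the locally adapted blend helps those users even more.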

Related research

01/14/2021  Federated Learning: Opportunities and Challenges
12/04/2020  Mitigating Bias in Federated Learning
03/30/2022  Federated Domain Adaptation for ASR with Full Self-Supervision
07/14/2023  Population Expansion for Training Language Models with Private Federated Learning
12/22/2022  Model Segmentation for Storage Efficient Private Federated Learning with Top r Sparsification
02/25/2022  Towards an Accountable and Reproducible Federated Learning: A FactSheets Approach
07/16/2022  Sotto Voce: Federated Speech Recognition with Differential Privacy Guarantees
