Training Production Language Models without Memorizing User Data

by Swaroop Ramaswamy et al.

This paper presents the first consumer-scale next-word prediction (NWP) model trained with Federated Learning (FL) using the Differentially Private Federated Averaging (DP-FedAvg) technique. Prior work has built practical FL infrastructure and demonstrated the feasibility of training language models on mobile devices with it. It has also been shown, in simulations on a public corpus, that NWP models can be trained with user-level differential privacy using the DP-FedAvg algorithm. Nevertheless, training production-quality NWP models with DP-FedAvg in a real-world production environment on a heterogeneous fleet of mobile phones requires addressing numerous challenges: for instance, the coordinating central server must track the devices available at the start of each round, sample devices uniformly at random from them, and keep the sample secret. Unlike all prior privacy-focused FL work of which we are aware, we demonstrate for the first time the deployment of a differentially private mechanism for training a production neural network in FL, as well as the instrumentation of the production training infrastructure to perform an end-to-end empirical measurement of unintended memorization.
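The core aggregation step of DP-FedAvg can be sketched as follows: each round, sample users uniformly at random, clip every user's model update to a fixed L2 norm bound, average the clipped updates, and add Gaussian noise calibrated to that clip norm. This is a minimal illustrative sketch, not the paper's production implementation; all names and parameters (`clip_norm`, `noise_multiplier`, `sample_fraction`) are assumed for illustration.

```python
import numpy as np

def dp_fedavg_round(global_model, user_updates, sample_fraction,
                    clip_norm, noise_multiplier, rng):
    """One round of DP-FedAvg-style aggregation with user-level DP (sketch)."""
    n = len(user_updates)
    # Sample users independently at random (Poisson-style sampling here;
    # the paper requires sampling uniformly from available devices).
    mask = rng.random(n) < sample_fraction
    sampled = [u for u, keep in zip(user_updates, mask) if keep]
    if not sampled:
        return global_model

    clipped = []
    for u in sampled:
        norm = np.linalg.norm(u)
        # Clip each user's update so its L2 norm is at most clip_norm,
        # bounding any single user's influence on the aggregate.
        clipped.append(u * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)

    # Add Gaussian noise scaled to the per-user sensitivity of the
    # clipped average (clip_norm / number of sampled users).
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped),
                       size=avg.shape)
    return global_model + avg + noise
```

With `noise_multiplier=0` this reduces to plain federated averaging of clipped updates; in practice the noise multiplier is chosen (together with the sampling fraction and number of rounds) to meet a target differential-privacy budget.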


