Differentially Private In-Context Learning

05/02/2023
by Ashwinee Panda et al.

An important question in deploying large language models (LLMs) is how to augment them with private data. We propose Differentially Private In-Context Learning (DP-ICL) to enable LLMs to adapt to new tasks while maintaining privacy guarantees. DP-ICL performs private inference by establishing a noisy consensus over an ensemble of exemplars using the Report-Noisy-Max mechanism. We evaluate DP-ICL on four benchmarks and find that it achieves performance comparable to non-private ICL (under 2% degradation).
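The aggregation step at the heart of this approach is straightforward to picture: each ensemble member answers the query from a prompt built on a different subset of the private exemplars, the votes for each candidate answer are tallied, Laplace noise is added to every tally, and only the index of the noisy maximum is released. Below is a minimal sketch of that step in Python for a classification task, assuming each exemplar subset contributes one vote (sensitivity 1); the names report_noisy_max, votes, and epsilon are illustrative and not taken from the paper's code.

    import numpy as np

    def report_noisy_max(counts, epsilon, sensitivity=1.0):
        # Add independent Laplace(sensitivity / epsilon) noise to each
        # candidate's vote count and release only the argmax. Releasing
        # just the index of the noisy maximum, not the noisy counts,
        # is what the standard Report-Noisy-Max analysis relies on.
        counts = np.asarray(counts, dtype=float)
        noise = np.random.laplace(scale=sensitivity / epsilon, size=counts.shape)
        return int(np.argmax(counts + noise))

    labels = ["positive", "negative"]
    # Hypothetical votes: one per ensemble member, where each member's
    # prompt was built from a disjoint subset of the private exemplars.
    votes = ["positive", "positive", "negative", "positive"]
    counts = [votes.count(lbl) for lbl in labels]
    private_label = labels[report_noisy_max(counts, epsilon=1.0)]
    print(private_label)

Because a single private exemplar appears in at most one ensemble member's prompt, changing that exemplar can shift each vote count by at most one, which is what keeps the sensitivity (and hence the noise scale) small in this sketch.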

11/19/2020
Classical-Quantum Differentially Private Mechanisms Beyond Classical Ones
Let ε>0. An n-tuple (p_i)_{i=1}^n of probability vectors is called (classi...

03/02/2021
DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations
Data poisoning and backdoor attacks manipulate training data to induce s...

09/21/2023
Privacy-Preserving In-Context Learning with Differentially Private Few-Shot Generation
We study the problem of in-context learning (ICL) with large language mo...

07/14/2022
Differentially Private Graph Learning via Sensitivity-Bounded Personalized PageRank
Personalized PageRank (PPR) is a fundamental tool in unsupervised learni...

06/12/2023
"Private Prediction Strikes Back!" Private Kernelized Nearest Neighbors with Individual Rényi Filter
Most existing approaches of differentially private (DP) machine learning...

12/21/2022
Differentially Private Decentralized Optimization with Relay Communication
To address the privacy leakage problem in decentralized composite convex...
