Differentially Private In-Context Learning

05/02/2023
by Ashwinee Panda, et al.

An important question in deploying large language models (LLMs) is how to augment LLMs with private data. We propose Differentially Private In-Context Learning (DP-ICL) to enable LLMs to adapt to new tasks while maintaining privacy guarantees. DP-ICL performs private inference by establishing a noisy consensus over an ensemble of exemplars using the Report-Noisy-Max mechanism. We evaluate DP-ICL on four benchmarks and find that it achieves performance comparable to non-private ICL (<2% degradation).
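The aggregation step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names (`report_noisy_max`, `dp_icl_predict`) and the Laplace noise scale of 2/ε are assumptions for this sketch; the idea is that each ensemble member (an LLM prompted with a disjoint subset of private exemplars) votes for a label, and the label with the highest noisy vote count is released.

```python
import numpy as np

def report_noisy_max(vote_counts, epsilon, rng=None):
    """Return the index of the maximum count after adding Laplace noise.

    Noise scale 2/epsilon is an illustrative choice for sensitivity-1
    counting queries; the exact calibration depends on the analysis used.
    """
    rng = np.random.default_rng(rng)
    noisy = np.asarray(vote_counts, dtype=float)
    noisy = noisy + rng.laplace(scale=2.0 / epsilon, size=noisy.shape)
    return int(np.argmax(noisy))

def dp_icl_predict(member_predictions, num_classes, epsilon, rng=None):
    """Aggregate per-member label votes into one private consensus label.

    member_predictions: one predicted class index per ensemble member,
    where each member saw a disjoint subset of private exemplars.
    """
    counts = np.bincount(member_predictions, minlength=num_classes)
    return report_noisy_max(counts, epsilon, rng=rng)

# Hypothetical usage: 10 ensemble members vote over 3 classes.
votes = [0, 1, 1, 1, 2, 1, 1, 0, 1, 1]
label = dp_icl_predict(votes, num_classes=3, epsilon=1.0, rng=0)
```

Only the single noisy argmax is released per query, so each prediction consumes a bounded privacy budget regardless of how many exemplars each member saw.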
