A Closer Look at the Calibration of Differentially Private Learners

10/15/2022
by Hanlin Zhang, et al.

We systematically study the calibration of classifiers trained with differentially private stochastic gradient descent (DP-SGD) and observe miscalibration across a wide range of vision and language tasks. Our analysis identifies per-example gradient clipping in DP-SGD as a major cause of miscalibration, and we show that existing approaches for improving calibration with differential privacy only provide marginal improvements in calibration error while occasionally causing large degradations in accuracy. As a solution, we show that differentially private variants of post-processing calibration methods such as temperature scaling and Platt scaling are surprisingly effective and have negligible utility cost to the overall model. Across 7 tasks, temperature scaling and Platt scaling with DP-SGD result in an average 3.1-fold reduction in the in-domain expected calibration error while incurring at most a minor percentage drop in accuracy.
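
To make the post-processing step concrete, below is a minimal sketch of ordinary (non-private) temperature scaling and expected calibration error (ECE) in NumPy. This is an illustration, not the authors' code: all function names are hypothetical, and the paper's differentially private variant would fit the temperature on the recalibration split under DP (e.g., with clipped, noised gradients as in DP-SGD) rather than with the exact grid search used here.

```python
# Minimal sketch of post-processing temperature scaling and ECE (NumPy only).
# Illustrative, not the paper's implementation: a DP variant would fit the
# temperature on the held-out split privately instead of via exact grid search.
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)  # stabilize exponentials
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def expected_calibration_error(probs, labels, n_bins=15):
    """|accuracy - mean confidence| per confidence bin, weighted by bin mass."""
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            acc = (pred[in_bin] == labels[in_bin]).mean()
            ece += in_bin.mean() * abs(acc - conf[in_bin].mean())
    return ece

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    """Choose the scalar temperature T > 0 minimizing held-out NLL."""
    def nll(T):
        p = softmax(logits / T)
        return -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    return min(grid, key=nll)

# Usage on a held-out recalibration split (logits: [n, k], labels: [n]):
#   T = fit_temperature(val_logits, val_labels)
#   print(expected_calibration_error(softmax(test_logits / T), test_labels))
```

Because only a single scalar is fit on the recalibration data, making this step differentially private requires little additional privacy budget, which is consistent with the abstract's claim that the method has negligible utility cost.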

Related research

- 07/09/2021: Differentially private training of neural networks with Langevin dynamics for calibrated predictive uncertainty
- 10/07/2022: Differentially Private Deep Learning with ModelMix
- 08/03/2021: Large-Scale Differentially Private BERT
- 10/12/2021: Large Language Models Can Be Strong Differentially Private Learners
- 03/01/2022: Differentially private training of residual networks with scale normalisation
- 06/12/2023: "Private Prediction Strikes Back!" Private Kernelized Nearest Neighbors with Individual Renyi Filter
- 12/15/2021: One size does not fit all: Investigating strategies for differentially-private learning across NLP tasks
