Unlocking High-Accuracy Differentially Private Image Classification through Scale

04/28/2022
by   Soham De, et al.

Differential Privacy (DP) provides a formal privacy guarantee preventing adversaries with access to a machine learning model from extracting information about individual training points. Differentially Private Stochastic Gradient Descent (DP-SGD), the most popular DP training method, realizes this protection by injecting noise during training. However, previous works have found that DP-SGD often leads to a significant degradation in performance on standard image classification benchmarks. Furthermore, some authors have postulated that DP-SGD inherently performs poorly on large models, since the norm of the noise required to preserve privacy is proportional to the model dimension. In contrast, we demonstrate that DP-SGD on over-parameterized models can perform significantly better than previously thought. Combining careful hyper-parameter tuning with simple techniques to ensure signal propagation and improve the convergence rate, we obtain a new SOTA on CIFAR-10 of 81.4% under (8, 10^-5)-DP using a 40-layer Wide-ResNet, improving over the previous SOTA of 71.7%. When fine-tuning a pre-trained NFNet-F3, we achieve a remarkable 77.1% top-1 accuracy on ImageNet under (1, 8·10^-7)-DP, and achieve 81.1% top-1 accuracy under (8, 8·10^-7)-DP, markedly exceeding the previous SOTA of 47.9%. Our results are a significant step towards closing the accuracy gap between private and non-private image classification.
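The mechanism the abstract describes can be made concrete: DP-SGD clips each per-example gradient to an L2 bound C, sums the clipped gradients, and adds Gaussian noise calibrated to C before taking the update, so no single training point can move the model by more than a bounded amount. A minimal NumPy sketch of one DP-SGD step is below; the clip bound `C`, noise multiplier `sigma`, and the toy squared-loss linear model are illustrative assumptions, not the paper's actual architecture or hyper-parameters.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, C=1.0, sigma=1.0, rng=None):
    """One DP-SGD step for linear regression with squared loss.

    Each per-example gradient is clipped to L2 norm at most C; the
    clipped gradients are summed and Gaussian noise with standard
    deviation sigma * C is added before averaging, bounding the
    influence of any single example on the update.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(X)
    # Per-example gradients of 0.5 * (x . w - y)^2 with respect to w.
    residuals = X @ w - y                 # shape (n,)
    grads = residuals[:, None] * X        # shape (n, d)
    # Clip each per-example gradient to norm at most C.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads * np.minimum(1.0, C / np.maximum(norms, 1e-12))
    # Sum, add noise calibrated to the clip bound, then average.
    noise = rng.normal(0.0, sigma * C, size=w.shape)
    return w - lr * (grads.sum(axis=0) + noise) / n
```

Note that the added noise vector lives in the full d-dimensional parameter space, so its expected norm grows with the model dimension; this is the scaling argument behind the claim that DP-SGD should hurt large models, which the paper's results push back against.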

Related research

05/06/2022 · Large Scale Transfer Learning for Differentially Private Image Classification
Differential Privacy (DP) provides a formal framework for training machi...

06/08/2023 · Differentially Private Image Classification by Learning Priors from Random Processes
In privacy-preserving machine learning, differentially private stochasti...

11/14/2022 · SA-DPSGD: Differentially Private Stochastic Gradient Descent based on Simulated Annealing
Differential privacy (DP) provides a formal privacy guarantee that preve...

06/27/2023 · Differentially Private Video Activity Recognition
In recent years, differential privacy has seen significant advancements ...

03/01/2022 · Differentially private training of residual networks with scale normalisation
We investigate the optimal choice of replacement layer for Batch Normali...

05/24/2022 · DPSNN: A Differentially Private Spiking Neural Network
Privacy-preserving is a key problem for the machine learning algorithm. ...

07/19/2023 · The importance of feature preprocessing for differentially private linear optimization
Training machine learning models with differential privacy (DP) has rece...
