DPlis: Boosting Utility of Differentially Private Deep Learning via Randomized Smoothing

03/02/2021
by Wenxiao Wang, et al.

Deep learning techniques have achieved remarkable performance on a wide range of tasks. However, when models are trained on privacy-sensitive datasets, their parameters may expose private information from the training data. Prior attempts at differentially private training, although offering rigorous privacy guarantees, yield models with much lower performance than their non-private counterparts. Moreover, different runs of the same training algorithm produce models with large variance in performance. To address these issues, we propose DPlis (Differentially Private Learning wIth Smoothing). The core idea of DPlis is to construct a smoothed loss function that favors noise-resilient models lying in large flat regions of the loss landscape. We provide theoretical justification for the utility improvements of DPlis, and extensive experiments demonstrate that DPlis can effectively boost model quality and training stability under a given privacy budget.
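The abstract describes smoothing the loss via random perturbations so that training prefers flat regions of the loss landscape, combined with differentially private gradient updates. The following is a minimal NumPy sketch of that idea, not the authors' implementation: the names `smoothed_grad` and `dp_sgd_step`, and the parameters `sigma_s`, `k`, `clip_norm`, and `noise_mult`, are illustrative assumptions. It estimates the gradient of the randomized-smoothed loss by Monte-Carlo averaging over Gaussian-perturbed weights, and applies a standard DP-SGD-style aggregation (per-example clipping plus Gaussian noise).

```python
import numpy as np

def smoothed_grad(grad_fn, w, sigma_s=0.05, k=8, rng=None):
    """Monte-Carlo gradient of the smoothed loss
    L_s(w) = E_{u ~ N(0, sigma_s^2 I)}[L(w + u)],
    estimated by averaging grad L at k Gaussian-perturbed copies of w.
    (Illustrative sketch; parameter names are assumptions.)"""
    rng = rng or np.random.default_rng(0)
    grads = [grad_fn(w + rng.normal(0.0, sigma_s, size=w.shape))
             for _ in range(k)]
    return np.mean(grads, axis=0)

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_mult=1.0, rng=None):
    """Standard DP-SGD aggregation: clip each per-example gradient to
    L2 norm clip_norm, sum, add Gaussian noise with standard deviation
    noise_mult * clip_norm, and average over the batch."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)
```

In a full training loop, `smoothed_grad` would be evaluated per example (with the privacy analysis accounting for the sampling), and its outputs fed to `dp_sgd_step`; the averaging over perturbed weights is what biases optimization toward flat minima that tolerate the DP noise.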

Related research

- Differentially Private Model Publishing for Deep Learning (04/03/2019)
- Exponential Randomized Response: Boosting Utility in Differentially Private Selection (01/11/2022)
- Data-Dependent Differentially Private Parameter Learning for Directed Graphical Models (05/30/2019)
- Differentially Private Adversarial Robustness Through Randomized Perturbations (09/27/2020)
- The Secret Sharer: Measuring Unintended Neural Network Memorization & Extracting Secrets (02/22/2018)
- Boosting Model Performance through Differentially Private Model Aggregation (11/12/2018)
- Why Is Public Pretraining Necessary for Private Model Training? (02/19/2023)
