Privacy-preserving Stochastic Gradual Learning

09/30/2018
by Bo Han, et al.

It is challenging for stochastic optimization to handle large-scale sensitive data safely. Recently, Duchi et al. proposed a private sampling strategy to address privacy leakage in stochastic optimization. However, this strategy degrades robustness, since it is equivalent to injecting noise into each gradient, which adversely affects updates of the primal variable. To address this challenge, we introduce a robust stochastic optimization under the framework of local privacy, called Privacy-pREserving StochasTIc Gradual lEarning (PRESTIGE). PRESTIGE bridges private updates of the primal variable (by private sampling) with gradual curriculum learning (CL). Specifically, noise injection leads to label noise, but the robust learning process of CL can combat label noise. Thus, PRESTIGE yields "private but robust" updates of the primal variable on the private curriculum, namely a reordered label sequence provided by CL. In theory, we derive the convergence rate and maximum complexity of PRESTIGE. Empirical results on six datasets show that PRESTIGE achieves a better tradeoff between privacy preservation and robustness than baseline methods.
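To make the interplay between private sampling and the curriculum concrete, here is a minimal, hypothetical Python sketch of a "private but robust" update loop of the kind the abstract describes. It is not the authors' implementation: `randomized_response`, `prestige_like_sgd`, the value of `epsilon`, and the loss-based easy-to-hard ordering are illustrative assumptions standing in for Duchi et al.'s private sampling and the paper's CL component.

```python
# Hypothetical sketch: locally private labels + curriculum-ordered SGD.
# Not the paper's implementation; names and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def randomized_response(label, epsilon):
    """Locally private release of a binary label in {-1, +1}.

    Keeps the true label with probability e^eps / (1 + e^eps),
    otherwise flips it. This flipping is exactly the "label noise"
    that the curriculum is meant to combat.
    """
    keep_prob = np.exp(epsilon) / (1.0 + np.exp(epsilon))
    return label if rng.random() < keep_prob else -label

def prestige_like_sgd(X, y, epsilon=1.0, lr=0.1, epochs=10):
    n, d = X.shape
    w = np.zeros(d)
    # Privatize every label once, on the user side (local privacy).
    y_priv = np.array([randomized_response(yi, epsilon) for yi in y])
    for _ in range(epochs):
        # Curriculum: visit examples from "easy" (small logistic loss
        # under the current model) to "hard", i.e. a reordered sequence.
        losses = np.log1p(np.exp(-y_priv * (X @ w)))
        order = np.argsort(losses)
        for i in order:
            # Logistic-loss gradient on the privatized label.
            margin = y_priv[i] * (X[i] @ w)
            grad = -y_priv[i] * X[i] / (1.0 + np.exp(margin))
            w -= lr * grad
    return w
```

The design intuition in this sketch: examples whose privatized labels disagree with the current model incur a large loss and are deferred to later in each epoch, so flipped (noisy) labels have less influence on early updates of the primal variable.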


