Preserving Differential Privacy in Convolutional Deep Belief Networks

06/25/2017
by NhatHai Phan, et al.

The remarkable development of deep learning in medicine and healthcare raises obvious privacy concerns, since deep neural networks are built on users' personal and highly sensitive data, e.g., clinical records, user profiles, and biomedical images. However, only a few scientific studies on preserving privacy in deep learning have been conducted. In this paper, we focus on developing a private convolutional deep belief network (pCDBN), which is essentially a convolutional deep belief network (CDBN) under differential privacy. Our main idea for enforcing epsilon-differential privacy is to leverage the functional mechanism to perturb the energy-based objective functions of traditional CDBNs, rather than their results. A key contribution of this work is the use of Chebyshev expansion to derive approximate polynomial representations of the objective functions. Our theoretical analysis further derives the sensitivity and error bounds of these approximate polynomial representations, which makes preserving differential privacy in CDBNs feasible. We applied our model to a health social network, i.e., the YesiWell data, and to a handwritten digit dataset, i.e., the MNIST data, for human behavior prediction, human behavior classification, and handwritten digit recognition tasks. Theoretical analysis and rigorous experimental evaluations show that the pCDBN is highly effective and significantly outperforms existing solutions.
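To make the mechanism concrete, below is a minimal sketch (in NumPy) of the two ingredients described above: fitting a Chebyshev expansion to a nonlinear term of the kind that appears in energy-based objectives (softplus is used here for illustration), and perturbing the resulting polynomial coefficients with Laplace noise, as the functional mechanism prescribes. The helper names and the sensitivity value are illustrative assumptions; the paper's actual sensitivity and error bounds are derived analytically and are not reproduced here.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def chebyshev_approx(func, degree, lo=-1.0, hi=1.0):
    """Fit a degree-`degree` Chebyshev expansion of `func` on [lo, hi]
    by interpolating at Chebyshev nodes (hypothetical helper)."""
    k = np.arange(degree + 1)
    nodes = np.cos((2 * k + 1) * np.pi / (2 * (degree + 1)))  # nodes on [-1, 1]
    x = 0.5 * (hi - lo) * nodes + 0.5 * (hi + lo)             # map to [lo, hi]
    return C.chebfit(x, func(x), degree)

def functional_mechanism(coeffs, epsilon, sensitivity):
    """Perturb each polynomial coefficient with Laplace noise of scale
    sensitivity / epsilon. Optimizing the perturbed objective then
    satisfies epsilon-differential privacy, assuming `sensitivity`
    bounds the coefficients' change across neighboring databases."""
    noise = np.random.laplace(0.0, sensitivity / epsilon, size=coeffs.shape)
    return coeffs + noise

# Softplus appears inside energy-based RBM/CDBN objectives.
softplus = lambda x: np.log1p(np.exp(x))

coeffs = chebyshev_approx(softplus, degree=8)
private = functional_mechanism(coeffs, epsilon=1.0, sensitivity=2.0)  # toy sensitivity

x = np.linspace(-1, 1, 5)
print(C.chebval(x, coeffs))   # non-private polynomial approximation
print(C.chebval(x, private))  # privately perturbed approximation
```

In this approach, training proceeds by optimizing the perturbed polynomial objective, so no additional noise needs to be injected into the released model parameters.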


