Privacy-preserving Prediction

03/27/2018
by Cynthia Dwork, et al.

Ensuring differential privacy of models learned from sensitive user data is an important goal that has been studied extensively in recent years. It is now known that for some basic learning problems, especially those involving high-dimensional data, producing an accurate private model requires much more data than learning without privacy. At the same time, in many applications it is not necessary to expose the model itself. Instead users may be allowed to query the prediction model on their inputs only through an appropriate interface. Here we formulate the problem of ensuring privacy of individual predictions and investigate the overheads required to achieve it in several standard models of classification and regression. We first describe a simple baseline approach based on training several models on disjoint subsets of data and using standard private aggregation techniques to predict. We show that this approach has nearly optimal sample complexity for (realizable) PAC learning of any class of Boolean functions. At the same time, without strong assumptions on the data distribution, the aggregation step introduces a substantial overhead. We demonstrate that this overhead can be avoided for the well-studied class of thresholds on a line and for a number of standard settings of convex regression. The analysis of our algorithm for learning thresholds relies crucially on strong generalization guarantees that we establish for all differentially private prediction algorithms.
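The baseline the abstract describes, training several models on disjoint subsets of the data and answering each prediction query through a private aggregation of their votes, can be sketched as follows. This is a minimal illustration, not the paper's exact construction: the toy threshold learner, the use of report-noisy-max with Laplace noise, and all parameter values are assumptions made for the example.

```python
import numpy as np

def train_threshold(X, y):
    """Toy non-private learner: return the data point that, used as a
    threshold, minimizes training error (stand-in for any learner)."""
    best_t, best_err = X[0], float("inf")
    for t in np.unique(X):
        err = np.mean((X >= t).astype(int) != y)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def private_predict(thresholds, x, eps, rng):
    """Noisy-majority vote over the k models' predictions on input x.
    Each individual sits in exactly one subset, so changing their data
    changes at most one vote; report-noisy-max with Laplace(1/eps) noise
    on the two vote counts then gives an eps-DP prediction (standard
    analysis, assumed here)."""
    votes = np.array([int(x >= t) for t in thresholds])
    counts = np.array([np.sum(votes == 0), np.sum(votes == 1)], dtype=float)
    noisy = counts + rng.laplace(scale=1.0 / eps, size=2)
    return int(np.argmax(noisy))

rng = np.random.default_rng(0)
n, k = 2000, 20
X = rng.uniform(0.0, 1.0, size=n)
y = (X >= 0.5).astype(int)  # realizable: the true concept is a threshold at 0.5

# Train one model per disjoint subset of the data.
thresholds = [train_threshold(X[i::k], y[i::k]) for i in range(k)]

# Expose predictions only through the private interface, never the models.
preds = [private_predict(thresholds, x, eps=1.0, rng=rng) for x in (0.1, 0.9)]
print(preds)
```

Note that each answered query consumes privacy budget, so the total number of predictions served must be accounted for under composition; the aggregation noise is also exactly the overhead the abstract says can be avoided for thresholds and for several convex regression settings.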


Related research

PAC learning with stable and private predictions (11/24/2019)
We study binary classification algorithms for which the prediction on an...

Private Center Points and Learning of Halfspaces (02/27/2019)
We present a private learner for halfspaces over an arbitrary finite dom...

Efficient, Noise-Tolerant, and Private Learning via Boosting (02/04/2020)
We introduce a simple framework for designing private boosting algorithm...

Differentially Private Algorithms for Learning Mixtures of Separated Gaussians (09/09/2019)
Learning the parameters of a Gaussian mixture model is a fundamental a...

Private Reinforcement Learning with PAC and Regret Guarantees (09/18/2020)
Motivated by high-stakes decision-making domains like personalized medic...

Differentially Private Median Forests for Regression and Classification (06/15/2020)
Random forests are a popular method for classification and regression du...

How to Use Heuristics for Differential Privacy (11/19/2018)
We develop theory for using heuristics to solve computationally hard pro...
