Interaction is necessary for distributed learning with privacy or communication constraints

by Yuval Dagan et al.

Local differential privacy (LDP) is a model in which users send privatized data to an untrusted central server whose goal is to solve some data analysis task. In the non-interactive version of this model, the protocol consists of a single round: the server sends requests to all users and then receives their responses. This version is deployed in industry due to its practical advantages and has attracted significant research interest. Our main result is an exponential lower bound on the number of samples necessary to solve the standard task of learning a large-margin linear separator in the non-interactive LDP model. Via a standard reduction, this lower bound implies an exponential lower bound for stochastic convex optimization and, specifically, for learning linear models with a convex, Lipschitz and smooth loss. These results answer the questions posed in <cit.>. Our lower bound relies on a new technique for constructing pairs of distributions with nearly matching moments but whose supports can be nearly separated by a large-margin hyperplane. These lower bounds also hold in the model where communication from each user is limited, and they follow from a lower bound on learning using non-adaptive statistical queries.
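To make the non-interactive setting concrete, here is a minimal sketch of a single-round LDP protocol using classical randomized response for estimating the mean of private bits. This is a standard textbook illustration, not the construction from the paper; all function names and parameters are illustrative.

```python
import math
import random

def randomize(bit: int, eps: float) -> int:
    """eps-LDP randomized response: report the true bit with
    probability e^eps / (1 + e^eps), otherwise flip it."""
    p_keep = math.exp(eps) / (1 + math.exp(eps))
    return bit if random.random() < p_keep else 1 - bit

def estimate_mean(reports, eps: float) -> float:
    """Server-side debiasing of the noisy reports into an
    unbiased estimate of the true mean of the users' bits."""
    p = math.exp(eps) / (1 + math.exp(eps))
    raw = sum(reports) / len(reports)
    return (raw - (1 - p)) / (2 * p - 1)

random.seed(0)
eps = 1.0
true_bits = [1 if i % 4 == 0 else 0 for i in range(20000)]  # true mean 0.25

# Single round, no interaction: each user privatizes locally and
# sends one message; the server never queries anyone again.
reports = [randomize(b, eps) for b in true_bits]
est = estimate_mean(reports, eps)
```

Simple aggregate statistics like this mean are learnable without interaction; the paper's point is that for tasks such as learning a large-margin halfspace, any such one-round protocol provably needs exponentially many samples.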




