Differentially Private Online Submodular Optimization

07/06/2018
by Adrian Rivera Cardoso, et al.

In this paper we develop the first algorithms for online submodular minimization that preserve differential privacy under full-information feedback and bandit feedback. A sequence of T submodular functions over a ground set of n elements arrives online, and at each timestep the algorithm must choose a subset of [n] before seeing the function. The algorithm incurs a cost equal to the function evaluated on the chosen set, and seeks to choose a sequence of sets that achieves low expected regret.

Our first result is in the full-information setting, where the algorithm can observe the entire function after making its decision at each timestep. We give an algorithm in this setting that is ϵ-differentially private and achieves expected regret Õ(n^(3/2)√(T)/ϵ). The algorithm works by relaxing each submodular function to a convex function via its Lovász extension, and then simulating an algorithm for differentially private online convex optimization on these relaxations.

Our second result is in the bandit setting, where the algorithm only observes the cost incurred by its chosen set and never sees the entire function. This setting is significantly more challenging because a single function value is not enough information to compute the Lovász extension or its subgradients. Instead, we construct an unbiased subgradient estimate from a single-point evaluation and simulate private online convex optimization with this estimate. The resulting bandit algorithm is ϵ-differentially private and achieves expected regret Õ(n^(3/2)T^(3/4)/ϵ).
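To make the two building blocks concrete, here is a minimal Python sketch, not taken from the paper: lovasz_extension evaluates the Lovász extension of a set function together with a subgradient (the object the full-information algorithm feeds to a private online convex optimizer), threshold_round samples a set whose expected cost equals the extension's value, and one_point_gradient_estimate is the generic single-point gradient estimator in the style of Flaxman et al. that the bandit construction builds on. The function names and the frozenset interface are illustrative assumptions, and the privacy mechanism itself (the noise injected inside the online convex optimizer) is deliberately omitted.

```python
import numpy as np

def lovasz_extension(f, x):
    """Evaluate the Lovasz extension of a set function f (with f(set()) = 0)
    at a point x in [0,1]^n, returning the value and a subgradient.
    f maps a frozenset of indices in {0, ..., n-1} to a real cost."""
    order = np.argsort(-x)              # visit coordinates in decreasing order
    value, prev, chosen = 0.0, 0.0, set()
    g = np.zeros(len(x))
    for i in order:
        chosen.add(int(i))
        cur = f(frozenset(chosen))
        g[i] = cur - prev               # marginal cost of adding element i
        value += x[i] * g[i]
        prev = cur
    return value, g

def threshold_round(x, rng):
    """Sample a set S with E[f(S)] equal to the Lovasz extension at x:
    draw a uniform threshold and keep every coordinate at or above it."""
    theta = rng.uniform()
    return frozenset(int(i) for i in np.flatnonzero(x >= theta))

def one_point_gradient_estimate(cost, x, delta, rng):
    """Generic single-point gradient estimate (Flaxman et al. style):
    query the cost once at a randomly perturbed point and rescale.
    The result is an unbiased gradient of a delta-smoothed version of the
    cost; projection back into the feasible region is omitted here."""
    n = len(x)
    u = rng.normal(size=n)
    u /= np.linalg.norm(u)              # uniform direction on the unit sphere
    return (n / delta) * cost(x + delta * u) * u

# Tiny usage example with f(S) = |S| on n = 3 elements.
rng = np.random.default_rng(0)
f = lambda S: float(len(S))
x = np.array([0.9, 0.2, 0.5])
val, g = lovasz_extension(f, x)         # val = 1.6, g = [1., 1., 1.]
S = threshold_round(x, rng)             # random set with E[f(S)] = val
```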

Related research

10/24/2020 · Differentially Private Online Submodular Maximization
In this work we consider the problem of online submodular maximization u...

12/22/2020 · Projection-Free Bandit Optimization with Privacy Guarantees
We design differentially private algorithms for the bandit convex optimi...

10/12/2022 · Differentially Private Online-to-Batch for Smooth Losses
We develop a new reduction that converts any online convex optimization ...

09/28/2018 · Differentially Private Contextual Linear Bandits
We study the contextual linear bandit problem, a version of the standard...

05/15/2022 · Online Nonsubmodular Minimization with Delayed Costs: From Full Information to Bandit Feedback
Motivated by applications to online learning in sparse estimation and Ba...

03/30/2020 · How to Find a Point in the Convex Hull Privately
We study the question of how to compute a point in the convex hull of an...

06/22/2020 · Differentially Private Convex Optimization with Feasibility Guarantees
This paper develops a novel differentially private framework to solve co...
