Gains and Losses are Fundamentally Different in Regret Minimization: The Sparse Case

11/26/2015
by Joon Kwon, et al.

We demonstrate that, in the classical non-stochastic regret minimization problem with d decisions, gains and losses to be respectively maximized or minimized are fundamentally different. Indeed, under the additional sparsity assumption (at each stage, at most s decisions incur a nonzero outcome), we derive optimal regret bounds of different orders. Specifically, with gains, we obtain an optimal regret guarantee after T stages of order √(T log s), so the classical dependency on the dimension is replaced by the sparsity size. With losses, we provide matching upper and lower bounds of order √(Ts log(d)/d), which is decreasing in d. Finally, we also study the bandit setting and obtain an upper bound of order √(Ts log(d/s)) when outcomes are losses. This bound is proven to be optimal up to the logarithmic factor √(log(d/s)).
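To make the full-information setting concrete, here is a minimal toy sketch of the exponentially weighted average forecaster (Hedge) run on randomly generated s-sparse gain vectors, measuring the realized regret against the best fixed decision. All parameter values (d, s, T, the learning rate eta) are illustrative choices, and this is the generic Hedge algorithm, not the paper's optimally tuned method for the sparse case.

```python
import math
import random

def exp_weights_regret(d=10, s=2, T=500, eta=0.1, seed=0):
    """Toy illustration: run Hedge on random s-sparse gain vectors
    and return the realized regret versus the best fixed decision.
    Not the paper's algorithm; parameters are arbitrary."""
    rng = random.Random(seed)
    weights = [1.0] * d
    cum_gain = [0.0] * d   # cumulative gain of each fixed decision
    alg_gain = 0.0         # expected cumulative gain of the forecaster
    for _ in range(T):
        # s-sparse outcome: at most s decisions get a nonzero gain in [0, 1]
        gains = [0.0] * d
        for i in rng.sample(range(d), s):
            gains[i] = rng.random()
        total = sum(weights)
        probs = [w / total for w in weights]
        alg_gain += sum(p * g for p, g in zip(probs, gains))
        for i in range(d):
            cum_gain[i] += gains[i]
            # multiplicative update: reward decisions with high gains
            weights[i] *= math.exp(eta * gains[i])
    return max(cum_gain) - alg_gain
```

For gains in [0, 1], the standard Hedge analysis guarantees realized regret at most log(d)/eta + eta*T/8 on any outcome sequence; the paper's point is that sparsity allows the log(d) dependency to be improved to log(s).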

