High-Dimensional Experimental Design and Kernel Bandits

by Romain Camilleri et al.

In recent years, methods from optimal linear experimental design have been leveraged to obtain state-of-the-art results for linear bandits. A design returned by an objective such as G-optimal design is actually a probability distribution over a pool of potential measurement vectors. Consequently, one nuisance of the approach is the task of converting this continuous probability distribution into a discrete assignment of N measurements. While sophisticated rounding techniques have been proposed, in d dimensions they require N to be at least d, d log(log(d)), or d^2, depending on the sub-optimality guarantee of the solution. In this paper we are interested in settings where N may be much less than d, such as experimental design in an RKHS, where d may be effectively infinite. We propose a rounding procedure that frees N of any dependence on the dimension d, while achieving nearly the same performance guarantees as existing rounding procedures. We evaluate the procedure against a baseline that projects the problem to a lower-dimensional space and then performs rounding, which requires N to be at least a notion of the effective dimension. We also leverage our new approach in a new algorithm for kernelized bandits to obtain state-of-the-art results for regret minimization and pure exploration. An advantage of our approach over existing UCB-like approaches is that our kernel bandit algorithms are also robust to model misspecification.
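To make the design-then-round pipeline the abstract describes concrete, here is a minimal sketch: a Frank-Wolfe approximation to the G-optimal design over a finite arm set, followed by a naive i.i.d.-sampling rounding of the design into N measurements. This is an illustration only, not the rounding procedure proposed in the paper; the function names, step-size schedule, and regularization constant are all assumptions made for the sketch.

```python
import numpy as np

def g_optimal_design(X, iters=200):
    """Frank-Wolfe approximation to the G-optimal design over the rows of X.

    Returns a probability vector lam over the n candidate measurement vectors,
    approximately minimizing max_x x^T A(lam)^{-1} x with A(lam) = sum lam_i x_i x_i^T.
    """
    n, d = X.shape
    lam = np.full(n, 1.0 / n)
    for t in range(iters):
        # Regularize slightly so the information matrix is invertible early on.
        A = X.T @ (lam[:, None] * X) + 1e-9 * np.eye(d)
        Ainv = np.linalg.inv(A)
        # Predictive variance of each arm under the current design.
        var = np.einsum("ij,jk,ik->i", X, Ainv, X)
        j = np.argmax(var)                 # most uncertain arm
        gamma = 2.0 / (t + 2)              # standard Frank-Wolfe step size
        lam = (1 - gamma) * lam
        lam[j] += gamma
    return lam

def sample_round(lam, N, rng=None):
    """Naive rounding: draw N arm indices i.i.d. from the design distribution.

    Illustrative baseline only; real rounding schemes give deterministic
    allocations with the N >= d-type requirements discussed in the abstract.
    """
    rng = rng or np.random.default_rng(0)
    return rng.choice(len(lam), size=N, p=lam)

X = np.random.default_rng(1).standard_normal((50, 5))
lam = g_optimal_design(X)
idx = sample_round(lam, N=20)
```

By the Kiefer-Wolfowitz theorem the optimal design drives the maximum predictive variance down to d, which is a useful sanity check on the Frank-Wolfe output. The i.i.d. rounding above is dimension-free but only matches the design in expectation, which is precisely the gap more sophisticated rounding procedures aim to close.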




Experimental Design for Regret Minimization in Linear Bandits

In this paper we propose a novel experimental design-based algorithm to ...

Pure Exploration in Kernel and Neural Bandits

We study pure exploration in bandits, where the dimension of the feature...

Gamification of Pure Exploration for Linear Bandits

We investigate an active pure-exploration setting, that includes best-ar...

An Optimal Algorithm for Linear Bandits

We provide the first algorithm for online bandit linear optimization who...

An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits

In the contextual linear bandit setting, algorithms built on the optimis...

Online Learning with Vector Costs and Bandits with Knapsacks

We introduce online learning with vector costs, where in each time ste...

A sub-sampling algorithm preventing outliers

Nowadays, in many different fields, massive data are available and for s...
