Improved Algorithms for Conservative Exploration in Bandits

by Evrard Garcelon et al.

In many fields such as digital marketing, healthcare, finance, and robotics, it is common to have a well-tested and reliable baseline policy running in production (e.g., a recommender system). Nonetheless, the baseline policy is often suboptimal. In this case, it is desirable to deploy online learning algorithms (e.g., a multi-armed bandit algorithm) that interact with the system to learn a better, potentially optimal, policy under the constraint that, during the learning process, the performance is almost never worse than that of the baseline itself. In this paper, we study the conservative learning problem in the contextual linear bandit setting and introduce a novel algorithm, the Conservative Constrained LinUCB (CLUCB2). We derive regret bounds for CLUCB2 that match existing results and empirically show that it outperforms state-of-the-art conservative bandit algorithms on a number of synthetic and real-world problems. Finally, we consider a more realistic constraint in which the performance is verified only at predefined checkpoints (rather than at every step) and show how this relaxed constraint favorably impacts both the regret and the empirical performance of CLUCB2.
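The conservative constraint described in the abstract can be illustrated in a simple multi-armed setting: the learner plays the optimistic (UCB) arm only when a pessimistic estimate of its cumulative reward stays above a (1 − α) fraction of what the baseline would have earned, and otherwise falls back to the baseline arm. The sketch below is a toy version of this generic conservative-UCB idea, not the paper's CLUCB2 (which operates in the contextual linear setting); all names are illustrative, and the baseline mean is assumed known.

```python
import numpy as np

def conservative_ucb(means, baseline_arm=0, alpha=0.1, horizon=2000, seed=0):
    """Toy conservative UCB on Bernoulli arms (illustrative names throughout).

    The optimistic (UCB) arm is played only when a pessimistic lower bound on
    the cumulative reward collected so far, plus the candidate arm's lower
    bound, stays above (1 - alpha) times the baseline's cumulative reward;
    otherwise the baseline arm is played. The baseline mean is assumed known.
    """
    rng = np.random.default_rng(seed)
    k = len(means)
    mu0 = means[baseline_arm]
    counts = np.zeros(k)
    emp_sum = np.zeros(k)
    total_reward = 0.0

    for t in range(1, horizon + 1):
        bonus = np.sqrt(2.0 * np.log(t + 1) / np.maximum(counts, 1))
        mean_hat = np.divide(emp_sum, counts, out=np.zeros(k), where=counts > 0)
        ucb = np.where(counts > 0, mean_hat + bonus, np.inf)
        lcb = np.where(counts > 0, np.maximum(mean_hat - bonus, 0.0), 0.0)
        lcb[baseline_arm] = mu0  # baseline performance is known exactly
        candidate = int(np.argmax(ucb))

        # Conservative check: past plays valued at their lower confidence
        # bounds, plus the candidate's lower bound, must not drop below the
        # (1 - alpha) baseline budget.
        safe = counts @ lcb + lcb[candidate] >= (1.0 - alpha) * t * mu0
        arm = candidate if safe else baseline_arm

        reward = float(rng.random() < means[arm])
        counts[arm] += 1
        emp_sum[arm] += reward
        total_reward += reward

    return total_reward, counts
```

The checkpoint relaxation discussed in the paper would amount to evaluating the `safe` condition only at predefined steps rather than at every round, which leaves the learner free to explore between checkpoints.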



A One-Size-Fits-All Solution to Conservative Bandit Problems

In this paper, we study a family of conservative bandit problems (CBPs) ...

Conservative Exploration using Interleaving

In many practical problems, a learning agent may want to learn the best ...

Conservative Contextual Linear Bandits

Safety is a desirable property that can immensely increase the applicabi...

A Unified Framework for Conservative Exploration

We study bandits and reinforcement learning (RL) subject to a conservati...

Safe Exploration for Optimizing Contextual Bandits

Contextual bandit problems are a natural fit for many information retrie...

Near-optimal Conservative Exploration in Reinforcement Learning under Episode-wise Constraints

This paper investigates conservative exploration in reinforcement learni...

Linear Jamming Bandits: Sample-Efficient Learning for Non-Coherent Digital Jamming

It has been shown (Amuru et al. 2015) that online learning algorithms ca...
