Instrument-Armed Bandits

05/21/2017
by Nathan Kallus, et al.

We extend the classic multi-armed bandit (MAB) model to the setting of noncompliance, where the arm pull is a mere instrument and the treatment applied may differ from it, which gives rise to the instrument-armed bandit (IAB) problem. The IAB setting is relevant whenever the experimental units are human, since free will, ethics, and the law may prohibit unrestricted or forced application of treatment. In particular, the setting is relevant in bandit models of dynamic clinical trials and other controlled trials on human interventions. Nonetheless, the setting has not been fully investigated in the bandit literature. We show that there are various and divergent notions of regret in this setting, all of which coincide only in the classic MAB setting. We characterize the behavior of these regrets and analyze standard MAB algorithms. We argue for a particular kind of regret that captures the causal effect of treatments but show that standard MAB algorithms cannot achieve sublinear control on this regret. Instead, we develop new algorithms for the IAB problem, prove new regret bounds for them, and compare them to standard MAB algorithms in numerical examples.
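
The abstract describes the IAB model only informally, so the sketch below is a minimal, hypothetical illustration of the noncompliance setup rather than the paper's algorithm or its exact regret definitions. It assumes a two-instrument example in which the pulled arm only encourages a treatment with some compliance probability, runs a standard UCB1 index on the instruments, and tracks two simplified regret notions: one measured against the best instrument (an intent-to-treat view) and one measured against the best treatment. The compliance rates, treatment means, and regret definitions here are made-up assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: pulling instrument z only "encourages" treatment z;
# the unit may not comply. Numbers are illustrative, not from the paper.
compliance = np.array([0.9, 0.5])    # P(treatment == z | instrument z pulled)
treat_means = np.array([1.0, 2.0])   # mean reward of each *treatment*

def pull(z):
    """Pull instrument z; the treatment actually applied may differ from z."""
    t = z if rng.random() < compliance[z] else 1 - z
    return t, treat_means[t] + rng.normal()

# Intent-to-treat value of each instrument (expected reward of pulling it).
instr_means = compliance * treat_means + (1 - compliance) * treat_means[::-1]
best_instr, best_treat = instr_means.max(), treat_means.max()

T = 5000
counts, sums = np.zeros(2), np.zeros(2)
instr_regret = 0.0   # regret vs. always pulling the best instrument
treat_regret = 0.0   # naive regret vs. always applying the best treatment

for step in range(1, T + 1):
    if step <= 2:
        z = step - 1                                   # pull each arm once
    else:
        ucb = sums / counts + np.sqrt(2 * np.log(step) / counts)
        z = int(np.argmax(ucb))                        # standard UCB1 on instruments
    treatment, r = pull(z)
    counts[z] += 1
    sums[z] += r
    instr_regret += best_instr - instr_means[z]
    treat_regret += best_treat - treat_means[treatment]

print(f"instrument (intent-to-treat) regret: {instr_regret:.1f}")
print(f"naive treatment regret:              {treat_regret:.1f}")
```

Under these assumptions the instrument regret of UCB1 stays modest, while the naive treatment regret grows linearly in T because noncompliance caps how often the best treatment is actually applied, which is one way to see why the two notions of regret diverge once compliance is imperfect.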


