A Survey of Risk-Aware Multi-Armed Bandits

05/12/2022
by Vincent Y. F. Tan, et al.

In several applications, such as clinical trials and financial portfolio optimization, the expected value (or average reward) does not satisfactorily capture the merits of a drug or a portfolio. In such applications, risk plays a crucial role, and a risk-aware performance measure is preferable, so as to capture losses in the case of adverse events. This survey aims to consolidate and summarise the existing research on risk measures, specifically in the context of multi-armed bandits. We review various risk measures of interest and comment on their properties. Next, we review existing concentration inequalities for various risk measures. We then define risk-aware bandit problems. We consider algorithms for the regret minimization setting, where the exploration-exploitation trade-off manifests, as well as for the best-arm identification setting, which is a pure exploration problem, both in the context of risk-sensitive measures. We conclude by commenting on persisting challenges and fertile areas for future research.
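To make the contrast between average reward and a risk-aware measure concrete, the following sketch compares two arms by their empirical mean and by their empirical Conditional Value-at-Risk (CVaR), one of the risk measures discussed in this literature. This is an illustration only, not taken from the survey: the helper name `empirical_cvar` and the lower-tail convention (CVaR as the mean of the worst α-fraction of reward samples) are assumptions made for this example.

```python
import numpy as np

def empirical_cvar(samples, alpha=0.1):
    """Empirical CVaR at level alpha, taken here as the mean of the
    worst alpha-fraction of reward samples (lower-tail convention;
    conventions in the literature vary)."""
    x = np.sort(np.asarray(samples, dtype=float))  # ascending: worst first
    k = max(1, int(np.ceil(alpha * len(x))))       # size of the worst tail
    return x[:k].mean()

# Arm A: higher mean, but with a rare large loss (adverse event).
arm_a = [2.0] * 9 + [-10.0]
# Arm B: lower but stable reward.
arm_b = [0.5] * 10

print(np.mean(arm_a), empirical_cvar(arm_a))  # mean favors A ...
print(np.mean(arm_b), empirical_cvar(arm_b))  # ... but CVaR favors B
```

Under the expected-value criterion arm A looks better (mean 0.8 vs. 0.5), yet its CVaR exposes the tail loss, so a risk-aware learner would prefer arm B. Regret-minimization and best-arm-identification algorithms in this setting replace the mean with such a risk functional when ranking arms.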


Related research

01/04/2019  Risk-aware Multi-armed Bandits Using Conditional Value-at-Risk
  Traditional multi-armed bandit problems are geared towards finding the a...

09/15/2022  Risk-aware linear bandits with convex loss
  In decision-making problems such as the multi-armed bandit, an agent lea...

04/30/2019  Risk-Averse Explore-Then-Commit Algorithms for Finite-Time Bandits
  In this paper, we study multi-armed bandit problems in explore-then-comm...

10/10/2022  Towards an efficient and risk aware strategy for guiding farmers in identifying best crop management
  Identification of best performing fertilizer practices among a set of co...

11/16/2020  Risk-Constrained Thompson Sampling for CVaR Bandits
  The multi-armed bandit (MAB) problem is a ubiquitous decision-making pro...

02/24/2021  Continuous Mean-Covariance Bandits
  Existing risk-aware multi-armed bandit models typically focus on risk me...

05/19/2022  Multi-Armed Bandits in Brain-Computer Interfaces
  The multi-armed bandit (MAB) problem models a decision-maker that optimi...
