Thompson Sampling for Gaussian Entropic Risk Bandits

05/14/2021
by   Ming Liang Ang, et al.

The multi-armed bandit (MAB) problem is a ubiquitous decision-making problem that exemplifies the exploration-exploitation tradeoff. Standard formulations exclude risk in decision making. Risk notably complicates the basic reward-maximising objective, in part because there is no universally agreed definition of it. In this paper, we consider an entropic risk (ER) measure and explore the performance of a Thompson sampling-based algorithm, ERTS, under this risk measure by providing regret bounds for ERTS and corresponding instance-dependent lower bounds.
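For intuition, the entropic risk measure is commonly defined as $\rho_\gamma(X) = \frac{1}{\gamma}\log \mathbb{E}[e^{\gamma X}]$ for a risk parameter $\gamma \neq 0$; for Gaussian rewards $X \sim \mathcal{N}(\mu, \sigma^2)$ it has the closed form $\mu + \gamma\sigma^2/2$, so a risk-averse agent ($\gamma < 0$) penalises high-variance arms. The sketch below (an illustrative assumption, not the paper's code; the function name `entropic_risk` and all parameter values are hypothetical) compares the empirical entropic risk of Gaussian samples against this closed form:

```python
import numpy as np

def entropic_risk(samples, gamma):
    """Empirical entropic risk: (1/gamma) * log E[exp(gamma * X)]."""
    samples = np.asarray(samples)
    return np.log(np.mean(np.exp(gamma * samples))) / gamma

# Hypothetical arm: Gaussian rewards with mean 1.0, std 2.0; risk-averse gamma < 0.
rng = np.random.default_rng(0)
mu, sigma, gamma = 1.0, 2.0, -0.5
samples = rng.normal(mu, sigma, 200_000)

empirical = entropic_risk(samples, gamma)
closed_form = mu + gamma * sigma**2 / 2  # exact for Gaussian rewards
```

With $\gamma = -0.5$ the closed form here is $1.0 - 0.5 \cdot 4 / 2 = 0$, i.e. the variance penalty exactly cancels the mean; an ER-optimal bandit algorithm would rank arms by this risk-adjusted value rather than by the mean alone.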
