Regret Analysis of a Markov Policy Gradient Algorithm for Multi-arm Bandits

07/20/2020
by Denis Denisov, et al.

We consider a policy gradient algorithm applied to a finite-arm bandit problem with Bernoulli rewards. We allow the learning rates to depend on the current state of the algorithm rather than using a deterministic time-decreasing learning rate. The state of the algorithm forms a Markov chain on the probability simplex. We apply Foster-Lyapunov techniques to analyse the stability of this Markov chain. We prove that, if the learning rates are well chosen, the policy gradient algorithm is a transient Markov chain, and the state of the chain converges to the optimal arm with logarithmic or poly-logarithmic regret.
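
The abstract describes a policy gradient update whose step size is a function of the algorithm's current state on the simplex. Below is a minimal sketch of that idea in Python, assuming a softmax parameterization and an illustrative state-dependent learning rate alpha = c * pi.min(); the paper's exact parameterization and learning-rate schedule may differ.

```python
import numpy as np

def softmax(h):
    """Map a preference vector h to a point on the probability simplex."""
    e = np.exp(h - h.max())
    return e / e.sum()

def markov_policy_gradient(means, n_steps=100_000, c=1.0, seed=0):
    """Policy gradient on a Bernoulli bandit with arm means `means`.

    The learning rate depends on the current state pi of the algorithm
    rather than on a deterministic time schedule; the specific choice
    alpha = c * pi.min() is an assumption for illustration, not the
    schedule analysed in the paper.
    """
    rng = np.random.default_rng(seed)
    K = len(means)
    h = np.zeros(K)                        # softmax preferences
    best, regret = max(means), 0.0
    for _ in range(n_steps):
        pi = softmax(h)                    # state: a point on the simplex
        a = rng.choice(K, p=pi)            # sample an arm from the policy
        r = float(rng.random() < means[a]) # Bernoulli reward
        regret += best - means[a]
        alpha = c * pi.min()               # state-dependent learning rate
        grad = -pi.copy()                  # REINFORCE gradient of log pi[a]
        grad[a] += 1.0
        h += alpha * r * grad              # gradient ascent on expected reward
    return softmax(h), regret
```

Running `markov_policy_gradient([0.9, 0.5, 0.5])` drives the policy towards the best arm; plotting the accumulated `regret` against `n_steps` is one way to check the (poly-)logarithmic regret behaviour empirically.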
