No Discounted-Regret Learning in Adversarial Bandits with Delays

03/08/2021
by Ilai Bistritz, et al.

Consider a player that in each round t out of T rounds chooses an action and observes the incurred cost after a delay of d_t rounds. The cost functions and the delay sequence are chosen by an adversary. We show that even if the players' algorithms lose their "no-regret" property because the delays are too large, the expected discounted ergodic distribution of play converges to the set of coarse correlated equilibria (CCE) if the algorithms have "no discounted-regret". For a zero-sum game, we show that no discounted-regret is sufficient for the discounted ergodic average of play to converge to the set of Nash equilibria. We prove that the FKM algorithm with n dimensions achieves a regret of O(nT^{3/4} + √n T^{1/3} D^{1/3}) and that the EXP3 algorithm with K arms achieves a regret of O(√(ln K (KT + D))) even when D = ∑_{t=1}^T d_t and T are unknown. These bounds use a novel doubling trick that provably retains the regret bound achieved when D and T are known. Using these bounds, we show that EXP3 and FKM have no discounted-regret even for d_t = O(t log t). Therefore, the CCE of an unknown finite or convex game can be approximated via simulation even when only delayed bandit feedback is available.
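
As a rough illustration of the setting (not the paper's exact construction), the sketch below runs EXP3-style exponential weights under delayed bandit feedback: the cost of the arm played in round t only becomes available d_t rounds later, at which point an importance-weighted update is applied. The function name `exp3_delayed`, the cost/delay interfaces, and the step size η = √(ln K / (KT + D)) are assumptions made for the example; the step size is loosely motivated by the O(√(ln K (KT + D))) bound quoted above and presumes D and T are known, whereas the paper's doubling trick removes that assumption.

```python
import numpy as np

def exp3_delayed(K, T, costs, delays, D=None, rng=None):
    """Minimal EXP3 sketch with delayed bandit feedback (illustrative only).

    costs[t]  : length-K array of costs in [0, 1] at round t (adversarially chosen)
    delays[t] : feedback for round t arrives at the end of round t + delays[t]
    D         : total delay sum_t delays[t]; if known, it is used to tune the step size
    """
    rng = rng or np.random.default_rng(0)
    D = int(np.sum(delays)) if D is None else D
    eta = np.sqrt(np.log(K) / (K * T + D))   # step size suggested by the O(sqrt(ln K (KT + D))) bound
    log_w = np.zeros(K)                      # log-weights, kept in log space for numerical stability
    pending = {}                             # arrival round -> list of (played arm, its prob, observed cost)
    played, total_cost = [], 0.0

    for t in range(T):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()
        a = rng.choice(K, p=p)
        played.append(a)
        total_cost += costs[t][a]
        # the cost of this round is only revealed d_t rounds from now
        pending.setdefault(t + delays[t], []).append((a, p[a], costs[t][a]))
        # apply all feedback that has arrived by the end of round t
        for arm, prob, c in pending.pop(t, []):
            log_w[arm] -= eta * c / prob     # importance-weighted loss estimate
    return played, total_cost
```

For instance, feeding this sketch adversarial cost sequences together with delays growing like d_t = O(t log t) is a way to simulate the regime the abstract refers to; feedback that would arrive after round T is simply discarded here, which a full implementation would handle more carefully.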
