Non-stationary Reinforcement Learning without Prior Knowledge: An Optimal Black-box Approach

02/10/2021
by Chen-Yu Wei, et al.

We propose a black-box reduction that turns a certain reinforcement learning algorithm with optimal regret in a (near-)stationary environment into another algorithm with optimal dynamic regret in a non-stationary environment, importantly without any prior knowledge of the degree of non-stationarity. By plugging different algorithms into our black-box, we provide a list of examples showing that our approach not only recovers recent results for (contextual) multi-armed bandits achieved by very specialized algorithms, but also significantly improves the state of the art for linear bandits, episodic MDPs, and infinite-horizon MDPs in various ways. Specifically, in most cases our algorithm achieves the optimal dynamic regret 𝒪(min{√(LT), Δ^{1/3}T^{2/3}}), where T is the number of rounds, and L and Δ are the number and amount of changes of the world, respectively, while previous works only obtain suboptimal bounds and/or require knowledge of L and Δ.
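The following is a minimal, hypothetical Python sketch of the general "test and restart" idea behind such black-box reductions: wrap a base algorithm that has an optimal stationary regret guarantee, monitor whether observed rewards fall well below the learner's own optimistic estimate, and restart the base learner when they do. It is not the paper's actual algorithm; the interface names (`BaseAlgorithm`, `optimistic_estimate`), the window size, and the threshold are illustrative placeholders, and the paper derives the precise test and a multi-scale schedule of base instances.

```python
from typing import Callable, List, Protocol


class BaseAlgorithm(Protocol):
    """Any learner with an optimal regret guarantee in a (near-)stationary world."""

    def act(self) -> int: ...
    def update(self, action: int, reward: float) -> None: ...
    def optimistic_estimate(self) -> float: ...


def run_with_restarts(
    make_base: Callable[[], BaseAlgorithm],
    env_step: Callable[[int], float],
    horizon: int,
) -> float:
    """Wrap a stationary-regret base algorithm so it copes with a drifting environment.

    When recent rewards drop far below the base algorithm's optimistic estimate,
    the environment has likely changed, so we restart the base learner from scratch.
    The window size and threshold below are placeholders, not the paper's choices.
    """
    base = make_base()
    total_reward = 0.0
    recent_rewards: List[float] = []

    for _ in range(horizon):
        action = base.act()
        reward = env_step(action)      # the environment may change over time
        base.update(action, reward)
        total_reward += reward
        recent_rewards.append(reward)

        # Crude non-stationarity test (placeholder): compare the recent empirical
        # average against the learner's own optimistic estimate of achievable reward.
        window = recent_rewards[-100:]
        if len(window) == 100:
            gap = base.optimistic_estimate() - sum(window) / len(window)
            if gap > 0.5:              # illustrative threshold only
                base = make_base()     # restart the base algorithm
                recent_rewards.clear()

    return total_reward
```

A key design point conveyed by the abstract is that the wrapper needs no prior knowledge of L or Δ: detection is driven entirely by the base algorithm's own performance guarantees rather than by a tuned restart schedule.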
