Reward-Mixing MDPs with a Few Latent Contexts are Learnable

10/05/2022
by Jeongyeol Kwon, et al.

We consider episodic reinforcement learning in reward-mixing Markov decision processes (RMMDPs): at the beginning of every episode, nature randomly picks a latent reward model among M candidates, and an agent interacts with the MDP throughout the episode for H time steps. Our goal is to learn a near-optimal policy that nearly maximizes the H time-step cumulative rewards in such a model. Previous work established an upper bound for RMMDPs for M=2. In this work, we resolve several open questions that remained for the RMMDP model. For an arbitrary M≥2, we provide a sample-efficient algorithm, EM^2, that outputs an ϵ-optimal policy using Õ(ϵ^-2 · S^d A^d · poly(H, Z)^d) episodes, where S and A are the numbers of states and actions respectively, H is the time horizon, Z is the support size of the reward distributions, and d = min(2M-1, H). Our technique is a higher-order extension of the method-of-moments based approach; nevertheless, the design and analysis of the algorithm require several new ideas beyond existing techniques. We also provide a lower bound of (SA)^Ω(√M) / ϵ^2 for a general instance of RMMDP, supporting that super-polynomial sample complexity in M is necessary.
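To make the interaction protocol concrete, the following is a minimal Python sketch of the RMMDP generative model described above. It is an illustrative assumption of how episodes are generated (function names, array shapes, and the uniform reward support are hypothetical), not the paper's EM^2 algorithm: nature draws the latent context once per episode, the transition kernel is shared across contexts, and the agent observes states and rewards but never the context.

import numpy as np

def sample_rmmdp_episode(rng, P, R_models, mix_weights, policy, s0, H):
    """Simulate one RMMDP episode (hypothetical sketch, not EM^2).

    P           : array (S, A, S), transition kernel shared across contexts
    R_models    : array (M, S, A, Z), per-context reward distributions
                  over a common support of size Z
    mix_weights : array (M,), mixing distribution over latent contexts
    policy      : callable (t, s) -> action, context-independent
    s0          : initial state
    H           : horizon
    """
    M, S, A, Z = R_models.shape
    support = np.linspace(0.0, 1.0, Z)   # assumed reward support points
    m = rng.choice(M, p=mix_weights)     # latent context, fixed for the episode
    s, trajectory, total = s0, [], 0.0
    for t in range(H):
        a = policy(t, s)
        r = support[rng.choice(Z, p=R_models[m, s, a])]  # reward from context m
        s_next = rng.choice(S, p=P[s, a])
        trajectory.append((s, a, r))
        total += r
        s = s_next
    return trajectory, total             # the agent never observes m

# Example: S=3 states, A=2 actions, M=2 contexts, Z=4 support points, H=5
rng = np.random.default_rng(0)
S, A, M, Z, H = 3, 2, 2, 4, 5
P = rng.dirichlet(np.ones(S), size=(S, A))          # shape (S, A, S)
R_models = rng.dirichlet(np.ones(Z), size=(M, S, A))  # shape (M, S, A, Z)
traj, G = sample_rmmdp_episode(rng, P, R_models, np.array([0.5, 0.5]),
                               lambda t, s: t % A, s0=0, H=H)
print(G)

Because the context is resampled every episode and never revealed, the learner can only match higher-order moments of the observed reward sequences across the M mixture components, which is why the sample complexity above scales with d = min(2M-1, H).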
