Markov Decision Processes with Long-Term Average Constraints

06/12/2021
by Mridul Agarwal, et al.

We consider the problem of a constrained Markov Decision Process (CMDP) in which an agent interacts with a unichain Markov Decision Process. At every interaction, the agent obtains a reward; in addition, there are K cost functions. The agent aims to maximize the long-term average reward while simultaneously keeping each of the K long-term average costs below a given threshold. In this paper, we propose CMDP-PSRL, a posterior-sampling-based algorithm with which the agent can learn optimal policies for interacting with the CMDP. Further, for an MDP with S states, A actions, and diameter D, we prove that by following the CMDP-PSRL algorithm, the agent's regret relative to the optimal policy is bounded by O(poly(DSA)√(T)). We also show that the violation of each of the K constraints is bounded by O(poly(DSA)√(T)). To the best of our knowledge, this is the first work to obtain O(√(T)) regret bounds for ergodic MDPs with long-term average constraints.
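For intuition, below is a minimal sketch of the posterior-sampling pattern the abstract describes: maintain a Dirichlet posterior over transition probabilities, sample one plausible model per episode, and solve an occupancy-measure linear program for a policy that maximizes average reward subject to the K average-cost constraints. The function names, the Dirichlet(1) prior, the fixed episode length, and the use of scipy's linprog are illustrative assumptions, not the paper's exact construction, and this toy loop carries none of the paper's regret or violation guarantees.

import numpy as np
from scipy.optimize import linprog

def solve_constrained_mdp(P, reward, costs, thresholds):
    # Occupancy-measure LP: variables rho[s, a] >= 0 with sum(rho) = 1 and
    # stationarity sum_a rho(s', a) = sum_{s,a} rho(s, a) * P(s' | s, a);
    # maximize expected reward subject to K average-cost constraints.
    S, A = reward.shape
    n = S * A
    c = -reward.reshape(n)                     # linprog minimizes, so negate
    A_eq = np.zeros((S + 1, n))
    b_eq = np.zeros(S + 1)
    A_eq[0] = 1.0                              # normalization: sum(rho) = 1
    b_eq[0] = 1.0
    for sp in range(S):                        # flow conservation per state
        for s in range(S):
            for a in range(A):
                A_eq[1 + sp, s * A + a] = float(sp == s) - P[s, a, sp]
    res = linprog(c, A_ub=costs.reshape(len(thresholds), n), b_ub=thresholds,
                  A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
    if not res.success:
        raise ValueError("constrained MDP infeasible for sampled model")
    rho = res.x.reshape(S, A)
    mass = rho.sum(axis=1, keepdims=True)      # state visitation mass
    return np.where(mass > 1e-12, rho / np.maximum(mass, 1e-12), 1.0 / A)

def psrl_cmdp(reward, costs, thresholds, true_P, episodes=20, horizon=200):
    # Posterior-sampling loop: sample a model, plan, act, update the posterior.
    rng = np.random.default_rng(0)
    S, A = reward.shape
    counts = np.ones((S, A, S))                # Dirichlet(1) prior on transitions
    s, policy = 0, np.full((S, A), 1.0 / A)
    for _ in range(episodes):
        P = np.array([[rng.dirichlet(counts[si, a]) for a in range(A)]
                      for si in range(S)])     # one posterior sample of the MDP
        policy = solve_constrained_mdp(P, reward, costs, thresholds)
        for _ in range(horizon):
            a = rng.choice(A, p=policy[s] / policy[s].sum())
            s_next = rng.choice(S, p=true_P[s, a])
            counts[s, a, s_next] += 1          # Bayesian posterior update
            s = s_next
    return policy

if __name__ == "__main__":
    # Toy 4-state, 2-action, 1-constraint instance; the threshold is chosen
    # loosely so the random problem is very likely feasible.
    rng = np.random.default_rng(1)
    true_P = rng.dirichlet(np.ones(4), size=(4, 2))
    reward, costs = rng.random((4, 2)), rng.random((1, 4, 2))
    print(psrl_cmdp(reward, costs, np.array([0.6]), true_P))

A full implementation would follow the paper's own episode schedule and analysis machinery; this sketch only shows the sample-then-plan structure common to posterior-sampling methods.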
