Learning in Markov Decision Processes under Constraints

by Rahul Singh, et al.

We consider reinforcement learning (RL) in Markov Decision Processes (MDPs) in which, at each time step, the agent earns a reward and also incurs an M-dimensional vector of costs. The objective is to design a learning rule that maximizes the cumulative reward earned over a finite time horizon of T steps, while simultaneously ensuring that the cumulative cost expenditures remain appropriately bounded. The constraints on cumulative cost expenditures depart from the existing RL literature: the agent must now balance its cost expenses in an online manner, while simultaneously performing the exploration-exploitation trade-off typically encountered in RL tasks. This is challenging since both objectives, exploration and exploitation, necessarily require the agent to expend resources. When the constraints are placed on the average costs, we present a version of the UCB algorithm and prove that both its reward regret and its cost regrets are upper-bounded as O(T_M S √(A T log T)), where T_M is the mixing time of the MDP, S is the number of states, A is the number of actions, and T is the time horizon. We further show how to modify the algorithm in order to reduce the regrets of a desired subset of the M costs, at the expense of increasing the regrets of the reward and of the remaining costs. We then consider RL under the constraint that the vector of cumulative cost expenditures up to each time t < T must be less than c^{ub} t. We propose a "finite (B)-state" algorithm and show that its average reward is within O(e^{-B}) of r^*, the latter being the optimal average reward under the average cost constraints.
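The average-cost setting can be illustrated, in a greatly simplified form, by an optimistic (UCB-style) selection rule with a cost constraint on a toy multi-armed bandit rather than the full MDP: the learner keeps an upper-confidence index on each arm's reward and a lower-confidence index on its cost, and plays the most promising arm among those that look cost-feasible. This is a minimal sketch under assumed Bernoulli rewards and costs, not the paper's algorithm (which operates on the MDP and carries the mixing-time-dependent bounds above); all names and parameters are illustrative.

```python
import math
import random

def constrained_ucb(reward_means, cost_means, cost_budget, horizon, seed=0):
    """Toy constrained UCB on a Bernoulli bandit (illustrative only).

    reward_means / cost_means: per-arm Bernoulli means.
    cost_budget: target bound on the average per-step cost.
    Returns (total_reward, per-arm play counts).
    """
    rng = random.Random(seed)
    n_arms = len(reward_means)
    counts = [0] * n_arms          # number of plays per arm
    r_sum = [0.0] * n_arms         # accumulated rewards per arm
    c_sum = [0.0] * n_arms         # accumulated costs per arm
    total_reward = 0.0

    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1            # play each arm once to initialize estimates
        else:
            indices = []
            for a in range(n_arms):
                bonus = math.sqrt(2.0 * math.log(t) / counts[a])
                r_ucb = r_sum[a] / counts[a] + bonus   # optimistic reward
                c_lcb = c_sum[a] / counts[a] - bonus   # optimistic (low) cost
                # Arms whose optimistic cost already exceeds the budget
                # are ruled out for this round.
                score = r_ucb if c_lcb <= cost_budget else -math.inf
                indices.append((score, a))
            arm = max(indices)[1]

        # Sample a Bernoulli reward and cost for the chosen arm.
        r = 1.0 if rng.random() < reward_means[arm] else 0.0
        c = 1.0 if rng.random() < cost_means[arm] else 0.0
        counts[arm] += 1
        r_sum[arm] += r
        c_sum[arm] += c
        total_reward += r

    return total_reward, counts
```

The optimism on both quantities mirrors the tension described above: confidence bonuses widen the feasible set early on, so exploration itself spends cost budget, and the regret analysis must account for both kinds of slack.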




