Queueing Network Controls via Deep Reinforcement Learning

07/31/2020
by J. G. Dai, et al.

Novel advanced policy gradient (APG) methods with conservative policy iterations, such as Trust Region Policy Optimization (TRPO) and Proximal Policy Optimization (PPO), have become dominant reinforcement learning algorithms because of their ease of implementation and good practical performance. A conventional setup for queueing network control problems is a Markov decision problem (MDP) that has three features: infinite state space, unbounded costs, and a long-run average-cost objective. We extend the theoretical justification for the use of APG methods to MDPs with these three features. We show that in each iteration the control policy parameters should be optimized within a trust region that prevents improper policy updates from destabilizing the system and guarantees monotonic improvement. A critical challenge in queueing control optimization is the large number of samples typically required for relative value function estimation. We adopt discounting of future costs and use a discounted relative value function as an approximation of the relative value function. We show that this discounted relative value function can be estimated via regenerative simulation. In addition, assuming full knowledge of the transition probabilities, we incorporate the approximating martingale-process (AMP) method into the regenerative estimator. We provide numerical results for a parallel-server network and for large multiclass queueing networks operating under heavy-traffic regimes, learning policies that minimize the average number of jobs in the system. The experiments demonstrate that the control policies produced by the proposed PPO algorithm outperform other heuristics and are near-optimal when the optimal policy can be computed.
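
To make the trust-region idea concrete, below is a minimal sketch (not the authors' implementation) of the PPO clipped surrogate objective. In the paper's setting the advantage estimates would be derived from the discounted relative value function obtained by regenerative simulation; here they are random placeholders, and the clipping parameter eps=0.2, the array sizes, and the function name are illustrative assumptions.

```python
import numpy as np

def ppo_clip_objective(logp_new, logp_old, adv, eps=0.2):
    """Clipped surrogate objective: policy-ratio moves outside
    [1 - eps, 1 + eps] earn no extra credit, which keeps each update
    inside a trust region around the current policy and guards against
    destabilizing policy changes."""
    # Probability ratio pi_theta(a|s) / pi_theta_old(a|s) from log-probabilities.
    ratio = np.exp(logp_new - logp_old)
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)
    # For average-cost minimization, advantages are defined so that larger
    # values correspond to lower expected cost; the objective is maximized.
    return np.mean(np.minimum(ratio * adv, clipped * adv))

# Toy usage with placeholder data (not from the paper's experiments).
rng = np.random.default_rng(0)
logp_old = rng.normal(size=1000)
logp_new = logp_old + 0.01 * rng.normal(size=1000)
adv = rng.normal(size=1000)
print(ppo_clip_objective(logp_new, logp_old, adv))
```

The clipping plays the role of the trust region discussed above: gradient steps that would push the new policy far from the behavior policy are truncated, so each iteration can only improve the surrogate objective locally.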
