DeepCAS: A Deep Reinforcement Learning Algorithm for Control-Aware Scheduling
We consider networked control systems consisting of multiple independent closed-loop control subsystems operating over a shared communication network. Such systems are ubiquitous in cyber-physical systems, the Internet of Things, and large-scale industrial systems. In many large-scale settings, the communication network is smaller than the system it serves, so not all subsystems can communicate simultaneously and scheduling issues arise. The main contribution of this paper is a deep reinforcement learning-based control-aware scheduling (DeepCAS) algorithm that tackles these issues. We use the following (optimal) design strategy: first, we synthesize an optimal controller for each subsystem; next, we design a learning algorithm that adapts to the chosen subsystem (plant) and controller. As a consequence of this adaptation, our algorithm finds a schedule that minimizes the control loss. We present empirical results showing that DeepCAS finds schedules with better performance than periodic ones. Finally, we illustrate that our algorithm can be used for scheduling and resource allocation in networked control settings more general than the one described above.
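To make the two-step strategy concrete, the sketch below shows a minimal deep-RL scheduler in the spirit of DeepCAS: a small Q-network repeatedly chooses which subsystem gets the shared channel, and its reward is the negative control/estimation loss accumulated over all subsystems. The toy scalar plant dynamics, the error-reset model for the scheduled subsystem, and all hyperparameters are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch only: a toy DQN-style control-aware scheduler.
# Plant dynamics, reward shaping, and hyperparameters are assumptions for demonstration.
import random
import numpy as np
import torch
import torch.nn as nn

N_SUB, HORIZON, EPISODES = 4, 50, 200
A = np.array([1.2, 1.1, 1.05, 0.9])   # open-loop poles of the toy scalar plants (assumed)

class ToyScheduledPlants:
    """N scalar plants; only the scheduled plant transmits, resetting its error."""
    def reset(self):
        self.err = np.ones(N_SUB)       # per-subsystem estimation-error proxy
        return self.err.copy()
    def step(self, action):
        # Unscheduled subsystems see their error grow with the open-loop dynamics;
        # the scheduled one transmits its state and its error drops to the noise floor.
        self.err = (A ** 2) * self.err + 0.1
        self.err[action] = 0.1
        reward = -float(self.err.sum())  # negative total control/estimation loss
        return self.err.copy(), reward

qnet = nn.Sequential(nn.Linear(N_SUB, 64), nn.ReLU(), nn.Linear(64, N_SUB))
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
env, gamma, eps = ToyScheduledPlants(), 0.95, 0.2

for ep in range(EPISODES):
    s = env.reset()
    for t in range(HORIZON):
        # Epsilon-greedy choice of which subsystem gets the channel this step.
        if random.random() < eps:
            a = random.randrange(N_SUB)
        else:
            with torch.no_grad():
                a = int(qnet(torch.tensor(s, dtype=torch.float32)).argmax())
        s_next, r = env.step(a)
        # One-step TD target (no replay buffer or target network, for brevity).
        with torch.no_grad():
            target = r + gamma * qnet(torch.tensor(s_next, dtype=torch.float32)).max()
        pred = qnet(torch.tensor(s, dtype=torch.float32))[a]
        loss = (pred - target) ** 2
        opt.zero_grad()
        loss.backward()
        opt.step()
        s = s_next
```

A learned schedule of this kind can favor the less stable subsystems (larger open-loop poles) more often than a round-robin schedule would, which is the intuition behind DeepCAS outperforming periodic scheduling.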