A^2: Extracting Cyclic Switchings from DOB-nets for Rejecting Excessive Disturbances

11/01/2019
by   Wenjie Lu, et al.

Reinforcement Learning (RL) is limited in practice by its gray-box nature, which leads to insufficient trust from users, unsatisfactory interpretability for human intervention, and inadequate analyzability for future improvement. This paper seeks to partially characterize the interplay between dynamical environments and the DOB-net. The DOB-net, obtained via RL, solves a set of Partially Observable Markov Decision Processes (POMDPs), where the transition function of each POMDP is largely determined by the environment, here excessive external disturbances. This paper proposes an Attention-based Abstraction (A^2) approach to extract a finite-state automaton, referred to as a Key Moore Machine Network (KMMN), that captures the switching mechanisms the DOB-net exhibits in dealing with multiple such POMDPs. The approach first quantizes the controlled platform by learning continuous-discrete interfaces, then extracts the KMMN by finding the key hidden states and the transitions between them that attract sufficient attention from the DOB-net. Within the resulting KMMN, this study found three patterns of cyclic switchings between key hidden states, showing that controls near saturation are synchronized with the unknown disturbances. Interestingly, the discovered switching mechanism has appeared previously in hybrid control designs for frequently saturated systems; it is further interpreted via an analogy to the discrete-event subsystem of a hybrid controller.
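To make the extraction pipeline concrete, the sketch below illustrates the general idea in Python: quantize continuous recurrent hidden states into a discrete alphabet, keep only the transitions whose endpoints draw sufficient attention, and look for cyclic switchings among the surviving key states. This is a minimal illustration only; the k-means quantizer (standing in for the paper's learned continuous-discrete interfaces), the attention threshold, the support cutoff, and all variable names are assumptions, not the authors' implementation.

# Hypothetical sketch of KMMN-style automaton extraction from RNN hidden
# states. Synthetic data stands in for DOB-net rollouts.
import numpy as np
from collections import defaultdict
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in rollout data: hidden states h_t and per-step attention scores a_t.
T, H = 500, 16
hidden = rng.normal(size=(T, H))          # RNN hidden states from the policy
attention = rng.uniform(size=T)           # attention the policy pays each step

# 1) Quantize continuous hidden states into a small discrete alphabet
#    (k-means as a stand-in for a learned continuous-discrete interface).
n_states = 8
quantizer = KMeans(n_clusters=n_states, n_init=10, random_state=0)
labels = quantizer.fit_predict(hidden)    # discrete state index per time step

# 2) Keep only transitions whose endpoints attract sufficient attention.
attn_threshold = 0.7                      # hypothetical cutoff
counts = defaultdict(int)
for t in range(T - 1):
    if attention[t] >= attn_threshold and attention[t + 1] >= attn_threshold:
        counts[(labels[t], labels[t + 1])] += 1

# 3) Build the Moore-machine-like transition structure over key states,
#    dropping rare transitions as noise.
min_support = 3
edges = {e: c for e, c in counts.items() if c >= min_support}
key_states = sorted({s for e in edges for s in e})
print("key states:", key_states)
print("transitions:", edges)

# 4) Detect simple cyclic switchings (2-cycles) between key states, the
#    kind of pattern the abstract reports.
cycles = {tuple(sorted((a, b))) for (a, b) in edges
          if a != b and (b, a) in edges}
print("cyclic switchings:", sorted(cycles))

In practice the attention scores would come from the DOB-net's attention mechanism and the quantizer would be trained jointly with the policy; the structure of the loop (quantize, filter by attention, tabulate transitions, search for cycles) is the point of the sketch.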

