A relaxed technical assumption for posterior sampling-based reinforcement learning for control of unknown linear systems
We revisit the Thompson sampling algorithm for the control of an unknown linear quadratic (LQ) system that was recently proposed by Ouyang et al. (arXiv:1709.04047). The regret bound of that algorithm was derived under a technical assumption on the induced norm of the closed-loop system. In this technical note, we show that with a minor modification to the algorithm (specifically, ensuring that an episode does not end too soon), this assumption on the induced norm can be replaced by a milder one stated in terms of the spectral radius of the closed-loop system. The modified algorithm achieves the same 𝒪̃(√T) Bayesian regret, where T is the time horizon and the 𝒪̃(·) notation hides logarithmic factors in T.
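To see why a spectral-radius condition is milder than an induced-norm condition, recall the standard linear-algebra fact sketched below. The closed-loop matrix is written generically as A + BK; this notation is illustrative and not taken from the paper.

% For any square matrix M and any induced operator norm \|\cdot\|,
% the spectral radius is bounded by the norm:
%   \rho(M) = \max_i |\lambda_i(M)| \le \|M\|.
% Hence the induced-norm assumption implies the spectral-radius assumption,
% but not conversely:
\[
  \|A + BK\| \le \delta < 1 \;\Longrightarrow\; \rho(A + BK) \le \|A + BK\| < 1 .
\]
% Example of the gap: M = [0.5, 10; 0, 0.5] has spectral radius 0.5 (stable),
% yet its induced 2-norm exceeds 1, so it satisfies the spectral-radius
% condition while violating the induced-norm condition.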