Learning to Control in Metric Space with Optimal Regret

05/05/2019
by Lin F. Yang, et al.

We study online reinforcement learning for finite-horizon deterministic control systems with arbitrary state and action spaces. Suppose that the transition dynamics and reward function are unknown, but the state and action spaces are endowed with a metric that characterizes the proximity between different states and actions. We provide a surprisingly simple upper-confidence reinforcement learning algorithm that uses a function approximation oracle to estimate optimistic Q functions from experiences. We show that the regret of the algorithm after K episodes is O(HL(KH)^{(d-1)/d}), where L is a smoothness parameter and d is the doubling dimension of the state-action space with respect to the given metric. We also establish a near-matching regret lower bound. The proposed method can be adapted to more structured transition systems, including the finite-state case and the case where value functions are linear combinations of features, and in those settings it also achieves the optimal regret.
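To illustrate the kind of oracle the abstract describes, below is a minimal Python sketch, not the paper's exact method, of one natural instantiation: an optimistic Q-value estimate built as a Lipschitz upper envelope over previously observed backup targets, capped at a trivial value bound. The names `dist`, `L`, and `v_max` are assumptions standing in for the given metric, the smoothness parameter, and an upper bound on values (e.g. the horizon H).

```python
class OptimisticQOracle:
    """Minimal sketch of an optimistic Q-function oracle over a metric
    state-action space (illustrative, not the paper's exact algorithm)."""

    def __init__(self, dist, L, v_max):
        self.dist = dist      # assumed metric on (state, action) pairs
        self.L = L            # assumed smoothness (Lipschitz) parameter
        self.v_max = v_max    # trivial upper bound on values, e.g. horizon H
        self.points = []      # visited (state, action) pairs
        self.targets = []     # optimistic backup targets observed there

    def update(self, sa, target):
        """Record an observed optimistic backup, e.g. r + max_a' Q_{h+1}(s', a')."""
        self.points.append(sa)
        self.targets.append(target)

    def query(self, sa):
        """Upper-confidence estimate: the tightest Lipschitz upper envelope
        through the recorded targets, capped at v_max so that pairs far
        from all data remain fully optimistic."""
        if not self.points:
            return self.v_max
        envelope = min(t + self.L * self.dist(sa, p)
                       for p, t in zip(self.points, self.targets))
        return min(envelope, self.v_max)


# Toy usage with an L1 metric on 1-D state-action pairs (hypothetical numbers).
dist = lambda x, y: abs(x[0] - y[0]) + abs(x[1] - y[1])
oracle = OptimisticQOracle(dist, L=1.0, v_max=5.0)
oracle.update((0.5, 0.5), target=3.0)
print(oracle.query((0.75, 0.5)))  # 3.25: close to data, small exploration bonus
print(oracle.query((3.0, 0.5)))   # 5.0: far from all data, stays at the optimistic cap
```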
