
On the Convergence of Reinforcement Learning

by Suman Chakravorty et al.

We consider the problem of reinforcement learning for nonlinear stochastic dynamical systems. We show that, in the RL setting, there is an inherent "Curse of Variance" in addition to Bellman's infamous "Curse of Dimensionality": the variance in the solution grows factorial-exponentially with the order of the approximation. A fundamental consequence is that, to control this explosive variance growth and ensure accuracy, the search must be restricted to "local" feedback solutions. We further show that the deterministic optimal control has a perturbation structure, in that the higher-order terms do not affect the calculation of the lower-order terms, a property that can be exploited in RL to obtain accurate local solutions.
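A toy calculation (not from the paper, and using assumed Gaussian noise rather than the paper's dynamical-system setting) illustrates why estimating higher-order terms from noisy samples is so costly: the variance of a Monte Carlo estimate of a k-th order moment E[x^k], with x standard normal, involves the (2k)-th moment (2k-1)!!, and therefore grows factorially with k.

```python
import math
import random
import statistics

def empirical_variance_of_power(k, n_samples=100_000, seed=0):
    """Sample variance of x**k for x ~ N(0, 1), estimated by Monte Carlo."""
    rng = random.Random(seed)
    samples = [rng.gauss(0.0, 1.0) ** k for _ in range(n_samples)]
    return statistics.variance(samples)

def theoretical_variance_of_power(k):
    """Var(x^k) = E[x^(2k)] - E[x^k]^2 for x ~ N(0, 1).

    E[x^m] = (m-1)!! for even m, and 0 for odd m.
    """
    def double_factorial(m):
        return math.prod(range(m, 0, -2)) if m > 0 else 1

    second_moment = double_factorial(2 * k - 1)
    mean = double_factorial(k - 1) if k % 2 == 0 else 0
    return second_moment - mean ** 2

# Variance of the k-th order term's estimator blows up factorially:
# k = 1, 2, 3, 4, 5 gives Var = 1, 2, 15, 96, 945, ...
for k in range(1, 7):
    print(k, theoretical_variance_of_power(k),
          round(empirical_variance_of_power(k), 1))
```

Since the sample complexity needed to hold the estimation error fixed scales with this variance, accurate estimates of high-order terms quickly become infeasible, which is consistent with the paper's argument for restricting attention to local (low-order) feedback solutions.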

