The Curse of Memory in Stochastic Approximation: Extended Version

09/06/2023
by Caio Kalil Lauand, et al.

The theory and application of stochastic approximation (SA) have grown within the control systems community since the earliest days of adaptive control. This paper takes a new look at the topic, motivated by recent results establishing remarkable performance of SA with a (sufficiently small) constant step-size α > 0. If averaging is implemented to obtain the final parameter estimate, then the estimates are asymptotically unbiased with nearly optimal asymptotic covariance. These results have been obtained for random linear SA recursions with i.i.d. coefficients. This paper obtains very different conclusions in the more common case of a geometrically ergodic Markovian disturbance: (i) the target bias is identified, even in the case of non-linear SA, and is in general non-zero. The remaining results are established for linear SA recursions: (ii) the bivariate parameter-disturbance process is geometrically ergodic in a topological sense; (iii) the representation for bias has a simpler form in this case, and cannot be expected to be zero if there is multiplicative noise; (iv) the asymptotic covariance of the averaged parameters is within O(α) of optimal. The error term is identified, and may be massive if the mean dynamics are not well conditioned. The theory is illustrated with an application to TD-learning.
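To make the setting concrete, here is a minimal numerical sketch (not taken from the paper, and not its experiment) of a scalar linear SA recursion θ_{n+1} = θ_n + α(a(Φ_{n+1})θ_n + b(Φ_{n+1})), driven by a geometrically ergodic two-state Markov chain, with a constant step-size α and Polyak-Ruppert averaging of the iterates. The transition matrix, the state-dependent coefficients a and b, and the step sizes are hypothetical choices made only for illustration; the target θ* is the root of the mean dynamics, ābar·θ + b̄ = 0.

```python
# Minimal sketch (not from the paper): scalar linear SA with a constant step size,
# a two-state Markov chain disturbance, multiplicative noise, and Polyak-Ruppert
# averaging of the iterates. All numerical values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Two-state Markov chain Phi on {0, 1}; this transition matrix is geometrically ergodic.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi = np.array([2 / 3, 1 / 3])  # stationary distribution of P (solves pi P = pi)

# State-dependent coefficients: theta_{n+1} = theta_n + alpha * (a[phi] * theta_n + b[phi]).
a = np.array([-1.5, -0.5])  # coefficient on theta depends on the state: multiplicative noise
b = np.array([2.0, 1.0])

# Mean dynamics: abar * theta + bbar = 0 defines the target theta_star.
abar, bbar = pi @ a, pi @ b
theta_star = -bbar / abar

def averaged_estimate(alpha, n_iters=200_000, burn_in=50_000):
    """Run constant step-size linear SA; return the average of the post-burn-in iterates."""
    phi, theta = 0, 0.0
    total, count = 0.0, 0
    for n in range(n_iters):
        phi = rng.choice(2, p=P[phi])                # Markov chain transition
        theta += alpha * (a[phi] * theta + b[phi])   # linear SA update, constant step size
        if n >= burn_in:                             # Polyak-Ruppert averaging
            total += theta
            count += 1
    return total / count

print(f"target theta* = {theta_star:.4f}")
for alpha in (0.1, 0.05, 0.01):
    est = averaged_estimate(alpha)
    print(f"alpha = {alpha:<4}  averaged estimate = {est:.4f}  gap = {est - theta_star:+.4f}")
```

Because the coefficient a(Φ) is state-dependent (multiplicative noise) and the disturbance is Markovian, the averaged estimate need not coincide with θ*; in line with result (iii) above, one would expect a nonzero gap that shrinks as α is reduced.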


Related research

On Linear Stochastic Approximation: Fine-grained Polyak-Ruppert and Non-Asymptotic Concentration (04/09/2020)
We undertake a precise study of the asymptotic and non-asymptotic proper...

Accelerating Optimization and Reinforcement Learning with Quasi-Stochastic Approximation (09/30/2020)
The ODE method has been a workhorse for algorithm design and analysis si...

Constant Step Size Least-Mean-Square: Bias-Variance Trade-offs and Optimal Sampling Distributions (11/29/2014)
We consider the least-squares regression problem and provide a detailed ...

The ODE Method for Asymptotic Statistics in Stochastic Approximation and Reinforcement Learning (10/27/2021)
The paper concerns convergence and asymptotic statistics for stochastic ...

Adaptive step-size control for global approximation of SDEs driven by countably dimensional Wiener process (03/23/2023)
In this paper we deal with global approximation of solutions of stochast...

Bias and Extrapolation in Markovian Linear Stochastic Approximation with Constant Stepsizes (10/03/2022)
We consider Linear Stochastic Approximation (LSA) with a constant stepsi...

An efficient Averaged Stochastic Gauss-Newton algorithm for estimating parameters of non linear regressions models (06/23/2020)
Non linear regression models are a standard tool for modeling real pheno...
