How linear reinforcement affects Donsker's Theorem for empirical processes

05/25/2020
by Jean Bertoin, et al.

A reinforcement algorithm introduced by H.A. Simon produces a sequence of uniform random variables with memory as follows. At each step, with a fixed probability p∈(0,1), Û_{n+1} is sampled uniformly from Û_1, ..., Û_n, and with complementary probability 1-p, Û_{n+1} is a new independent uniform variable. The Glivenko-Cantelli theorem remains valid for the reinforced empirical measure, but the Donsker theorem does not. Specifically, we show that the sequence of empirical processes converges in law to a Brownian bridge only up to a constant factor when p<1/2, and that a further rescaling is needed when p>1/2, in which case the limit is a bridge with exchangeable increments and discontinuous paths. This is related to earlier limit theorems for correlated Bernoulli processes, the so-called elephant random walk, and more generally step-reinforced random walks.
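The two-branch sampling rule above is easy to simulate. The following sketch (function names are my own, not from the paper) generates a reinforced sequence and empirically checks that its empirical distribution function stays close to the uniform CDF, as the Glivenko-Cantelli statement promises:

```python
import random

def reinforced_uniform_sequence(n, p, rng=random):
    """Simon's reinforcement scheme: with probability p, copy a
    uniformly chosen past value; with probability 1-p, draw a
    fresh Uniform(0,1) variable."""
    seq = [rng.random()]                 # the first variable is always fresh
    for _ in range(n - 1):
        if rng.random() < p:
            seq.append(rng.choice(seq))  # reinforce: repeat a past value
        else:
            seq.append(rng.random())     # innovate: new independent uniform
    return seq

def empirical_cdf(sample, x):
    """Fraction of the sample that is <= x."""
    return sum(v <= x for v in sample) / len(sample)

if __name__ == "__main__":
    random.seed(0)
    s = reinforced_uniform_sequence(1000, p=0.3)
    # Kolmogorov-type distance to the Uniform(0,1) CDF on a grid
    dist = max(abs(empirical_cdf(s, i / 100) - i / 100) for i in range(101))
    print(f"sup-distance to uniform CDF: {dist:.3f}")
```

Because of reinforcement the sequence contains many exact repeats, yet the empirical measure still concentrates on the uniform law; it is the fluctuations around that law, at scale √n, whose limit changes with p.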
