Perturbational Complexity by Distribution Mismatch: A Systematic Analysis of Reinforcement Learning in Reproducing Kernel Hilbert Space

11/05/2021
by Jihao Long, et al.

Most existing theoretical analyses of reinforcement learning (RL) are limited to the tabular setting or to linear models, owing to the difficulty of handling function approximation in high-dimensional spaces under an uncertain environment. This work offers a fresh perspective on this challenge by analyzing RL in a general reproducing kernel Hilbert space (RKHS). We consider a family of Markov decision processes ℳ whose reward functions lie in the unit ball of an RKHS and whose transition probabilities lie in a given arbitrary set. We define a quantity called the perturbational complexity by distribution mismatch, Δ_ℳ(ϵ), to characterize the complexity of the admissible state-action distribution space in response to a perturbation in the RKHS of scale ϵ. We show that Δ_ℳ(ϵ) gives both a lower bound on the error of all possible algorithms and an upper bound for two specific algorithms (fitted reward and fitted Q-iteration) for the RL problem. Hence, the rate of decay of Δ_ℳ(ϵ) with respect to ϵ measures the difficulty of the RL problem on ℳ. We further provide concrete examples and discuss whether Δ_ℳ(ϵ) decays quickly in each. As a byproduct, we show that when the reward functions lie in a high-dimensional RKHS, even if the transition probability is known and the action space is finite, RL problems can still suffer from the curse of dimensionality.
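One of the two algorithms whose error the abstract bounds, fitted Q-iteration, repeatedly regresses Bellman targets onto an RKHS function class. Below is a minimal, self-contained sketch of fitted Q-iteration using RBF-kernel ridge regression on a toy one-dimensional MDP; the dynamics, reward, kernel bandwidth, and regularization are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

# Hedged sketch: fitted Q-iteration with RBF-kernel ridge regression
# on a toy MDP. All modeling choices below are illustrative.
rng = np.random.default_rng(0)
gamma = 0.9  # discount factor

def step(s, a):
    # Toy dynamics on [0, 1] with two actions and a smooth reward
    # (a Gaussian bump, so the reward lies in an RBF RKHS).
    s_next = np.clip(s + (0.1 if a == 1 else -0.1)
                     + 0.01 * rng.standard_normal(), 0.0, 1.0)
    reward = np.exp(-((s - 0.5) ** 2) / 0.05)
    return s_next, reward

# Collect an offline dataset of transitions.
n = 200
S = rng.uniform(0, 1, n)
A = rng.integers(0, 2, n)
data = [step(s, a) for s, a in zip(S, A)]
S_next = np.array([d[0] for d in data])
R = np.array([d[1] for d in data])

def features(s, a):
    return np.array([s, float(a)])

X = np.stack([features(s, a) for s, a in zip(S, A)])

def rbf(X1, X2, bw=0.3):
    # RBF kernel matrix between two sets of state-action features.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bw ** 2))

K = rbf(X, X)
lam = 1e-3  # ridge regularization strength

def fit(targets):
    # Kernel ridge regression: solve (K + lam * n * I) alpha = y.
    return np.linalg.solve(K + lam * n * np.eye(n), targets)

def predict(alpha, Xq):
    return rbf(Xq, X) @ alpha

# Fitted Q-iteration: regress Bellman targets, then recompute them.
alpha = np.zeros(n)
for _ in range(50):
    q_next = np.stack([
        predict(alpha, np.stack([features(s, a) for s in S_next]))
        for a in (0, 1)
    ])  # shape (2, n): next-state Q-values for each action
    targets = R + gamma * q_next.max(axis=0)
    alpha = fit(targets)

# Estimated greedy Q-value at the reward peak s = 0.5.
q_mid = [predict(alpha, features(0.5, a)[None, :])[0] for a in (0, 1)]
print(max(q_mid))
```

Each iteration solves a single kernel ridge regression, so the fitted Q-function stays inside a ball of the RKHS, which is the function class the paper's upper-bound analysis works with.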



Related research

11/04/2019
On Online Learning in Kernelized Markov Decision Processes
We develop algorithms with low regret for learning episodic Markov decis...

11/16/2020
No-Regret Reinforcement Learning with Value Function Approximation: a Kernel Embedding Approach
We consider the regret minimisation problem in reinforcement learning (R...

02/20/2023
Reinforcement Learning with Function Approximation: From Linear to Nonlinear
Function approximation has been an indispensable component in modern rei...

10/20/2022
Dynamic selection of p-norm in linear adaptive filtering via online kernel-based reinforcement learning
This study addresses the problem of selecting dynamically, at each time ...

04/15/2021
An L^2 Analysis of Reinforcement Learning in High Dimensions with Kernel and Neural Network Approximation
Reinforcement learning (RL) algorithms based on high-dimensional functio...

10/21/2022
Online and Lightweight Kernel-Based Approximated Policy Iteration for Dynamic p-Norm Linear Adaptive Filtering
This paper introduces a solution to the problem of selecting dynamically...

06/22/2023
Achieving Sample and Computational Efficient Reinforcement Learning by Action Space Reduction via Grouping
Reinforcement learning often needs to deal with the exponential growth o...
