Importance of Empirical Sample Complexity Analysis for Offline Reinforcement Learning

by Samin Yeasar Arnob et al.
McGill University

We hypothesize that empirically studying the sample complexity of offline reinforcement learning (RL) is crucial for its practical application in the real world. Several recent works have demonstrated the ability to learn policies directly from offline data. In this work, we ask how learning from offline data depends on the number of samples available. Our objective is to emphasize that studying sample complexity for offline RL is important, and serves as an indicator of the usefulness of existing offline algorithms. We propose an evaluation approach for the sample complexity analysis of offline RL.
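The abstract does not spell out the proposed evaluation approach, but a common way to study sample complexity empirically is to train an offline agent on datasets of increasing size and plot the resulting policy's performance against the sample count. The sketch below illustrates this idea on a toy chain MDP with tabular batch Q-learning as a stand-in offline learner; the environment, behavior policy, and all hyperparameters are assumptions for illustration, not the authors' method.

```python
import random

# Toy chain MDP: states 0..4, actions 0 (left) / 1 (right).
# Reward 1 on reaching the rightmost state; episodes last at most HORIZON steps.
N_STATES, N_ACTIONS, HORIZON, GAMMA = 5, 2, 10, 0.9

def step(s, a):
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

def collect_offline_data(n, rng):
    """Offline dataset of n transitions from a uniform-random behavior policy."""
    data, s = [], 0
    for _ in range(n):
        a = rng.randrange(N_ACTIONS)
        s2, r = step(s, a)
        data.append((s, a, r, s2))
        s = 0 if s2 == N_STATES - 1 else s2  # reset at the goal
    return data

def batch_q_learning(data, sweeps=50, lr=0.5):
    """Repeated Q-learning sweeps over the fixed dataset; no new environment data."""
    q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    for _ in range(sweeps):
        for s, a, r, s2 in data:
            target = r + GAMMA * max(q[s2])
            q[s][a] += lr * (target - q[s][a])
    return q

def evaluate_greedy(q):
    """Deterministic MDP + greedy policy, so one rollout suffices."""
    s, ret = 0, 0.0
    for _ in range(HORIZON):
        a = max(range(N_ACTIONS), key=lambda x: q[s][x])
        s, r = step(s, a)
        ret += r
        if s == N_STATES - 1:
            break
    return ret

def sample_complexity_curve(sizes=(50, 200, 1000), seed=0):
    """Map each dataset size to the return of the policy learned from it."""
    rng = random.Random(seed)
    return {n: evaluate_greedy(batch_q_learning(collect_offline_data(n, rng)))
            for n in sizes}
```

Plotting the resulting curve (return versus dataset size) gives an empirical picture of how many offline samples a given algorithm needs on a given task, which is the kind of dependency the paper argues should be reported.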




