Policy Finetuning: Bridging Sample-Efficient Offline and Online Reinforcement Learning

by Tengyang Xie et al.

Recent theoretical work has studied sample-efficient reinforcement learning (RL) extensively in two settings: learning interactively in the environment (online RL), and learning from an offline dataset (offline RL). However, the existing algorithms and theory for learning near-optimal policies in these two settings are rather different and disconnected. Towards bridging this gap, this paper initiates the theoretical study of policy finetuning, that is, online RL where the learner has additional access to a "reference policy" μ that is close to the optimal policy π_⋆ in a suitable sense. We consider the policy finetuning problem in episodic Markov Decision Processes (MDPs) with S states, A actions, and horizon length H. We first design a sharp offline-reduction algorithm, which simply executes μ and runs offline policy optimization on the collected dataset, and show that it finds an ε-near-optimal policy within O(H^3 S C^⋆ / ε^2) episodes, where C^⋆ is the single-policy concentrability coefficient between μ and π_⋆. This offline result is the first to match the sample-complexity lower bound in this setting, and it resolves a recent open question in offline RL. We then establish an Ω(H^3 S min{C^⋆, A} / ε^2) sample-complexity lower bound for any policy finetuning algorithm, including those that can adaptively explore the environment. This implies that, perhaps surprisingly, the optimal policy finetuning algorithm is either the offline reduction or a purely online RL algorithm that does not use μ at all. Finally, we design a new hybrid offline/online algorithm for policy finetuning that achieves better sample complexity than both the vanilla offline reduction and purely online RL algorithms, in a relaxed setting where μ satisfies concentrability only partially, up to a certain time step.
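For concreteness, the single-policy concentrability coefficient appearing in the bound above is the worst-case ratio between the state-action occupancy measures of the optimal policy and the reference policy,

C^⋆ = max_{h,s,a} d^{π_⋆}_h(s,a) / d^μ_h(s,a),

where d^π_h(s,a) denotes the probability that executing policy π visits the state-action pair (s,a) at step h. In particular, C^⋆ = 1 when μ = π_⋆, and C^⋆ grows as μ's coverage of the states that π_⋆ visits degrades.

The offline reduction itself is simple enough to sketch. The Python below is a minimal illustration, not the paper's exact procedure: it rolls out μ for N episodes, builds an empirical tabular model, and runs pessimistic value iteration with a count-based penalty (one standard offline RL routine; the paper's optimization step and bonus scale may differ). The environment interface (reset/step), the callable policy mu, and the bonus constant are all assumptions made for this sketch.

```python
import numpy as np

def offline_reduction(env, mu, N, S, A, H, delta=0.05):
    """Sketch: execute mu for N episodes, then run pessimistic value iteration."""
    # Phase 1: collect a dataset by executing the reference policy mu.
    counts = np.zeros((H, S, A))       # visit counts per (h, s, a)
    trans = np.zeros((H, S, A, S))     # empirical transition counts
    rew = np.zeros((H, S, A))          # accumulated rewards
    for _ in range(N):
        s = env.reset()                # assumed: returns an integer state
        for h in range(H):
            a = mu(h, s)               # assumed: mu maps (step, state) -> action
            s_next, r = env.step(a)    # assumed: returns (next state, reward)
            counts[h, s, a] += 1
            trans[h, s, a, s_next] += 1
            rew[h, s, a] += r
            s = s_next

    # Phase 2: pessimistic value iteration on the empirical model.
    n = np.maximum(counts, 1)          # avoid division by zero
    P_hat = trans / n[..., None]       # empirical transition probabilities
    r_hat = rew / n                    # empirical mean rewards
    # Count-based pessimism bonus; the exact scale here is illustrative.
    bonus = H * np.sqrt(np.log(S * A * H / delta) / n)

    V = np.zeros((H + 1, S))
    pi = np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        Q = r_hat[h] + P_hat[h] @ V[h + 1] - bonus[h]  # penalized Q-values
        Q = np.clip(Q, 0.0, H)         # keep values in the valid range [0, H]
        pi[h] = Q.argmax(axis=1)       # greedy policy w.r.t. pessimistic Q
        V[h] = Q.max(axis=1)
    return pi
```

The count-based penalty is what lets the reduction inherit single-policy concentrability: (s, a) pairs that μ rarely visits are pessimistically devalued rather than trusted, so the learned policy only needs to compete with π_⋆ where μ provides coverage.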




