IOB: Integrating Optimization Transfer and Behavior Transfer for Multi-Policy Reuse

by Siyuan Li et al.
Tsinghua University · Harbin Institute of Technology · Washington University in St. Louis

Humans can reuse previously learned policies to solve new tasks quickly, and reinforcement learning (RL) agents can do the same by transferring knowledge from source policies to a related target task. Transfer RL methods can reshape the policy optimization objective (optimization transfer) or influence the behavior policy (behavior transfer) using source policies. However, selecting the appropriate source policy with limited samples to guide target-policy learning remains challenging. Previous methods introduce additional components, such as hierarchical policies or estimates of source policies' value functions, which can lead to non-stationary policy optimization or heavy sampling costs, diminishing transfer effectiveness. To address this challenge, we propose a novel transfer RL method that selects the source policy without training extra components. Our method uses the Q function in the actor-critic framework to guide policy selection, choosing the source policy whose action yields the largest one-step improvement over the current target policy. We integrate optimization transfer and behavior transfer (IOB) by regularizing the learned policy to mimic the guidance policy and combining them as the behavior policy. This integration significantly enhances transfer effectiveness, surpasses state-of-the-art transfer RL baselines on benchmark tasks, and improves final performance and knowledge transferability in continual learning scenarios. Additionally, we show that our optimization transfer technique is guaranteed to improve target-policy learning.
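The core selection rule in the abstract, picking whichever candidate policy's action has the largest Q-value at the current state, can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: the critic `q_value`, the deterministic `target_policy`, and the `source_policies` below are all hypothetical stand-ins for the learned actor-critic components.

```python
import numpy as np

# Hypothetical toy critic standing in for the learned Q function:
# it prefers actions close to sin(state).
def q_value(state, action):
    return -(action - np.sin(state)) ** 2

def select_guidance_action(state, target_policy, source_policies):
    """Among the target policy's action and each source policy's action,
    return the one with the highest Q-value, i.e. the largest
    one-step improvement over the current target policy."""
    candidates = [target_policy(state)] + [pi(state) for pi in source_policies]
    q_vals = [q_value(state, a) for a in candidates]
    return candidates[int(np.argmax(q_vals))]

# Hypothetical deterministic policies: each maps a state to an action.
target_policy = lambda s: 0.0
source_policies = [
    lambda s: np.sin(s),  # a source policy well suited to this critic
    lambda s: np.cos(s),  # a less suitable source policy
]

state = 1.2
guidance_action = select_guidance_action(state, target_policy, source_policies)
```

In the full method, this guidance action would both regularize the actor update (optimization transfer) and be mixed into the behavior policy during data collection (behavior transfer); here it only shows that no extra trained component beyond the existing critic is needed for selection.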



