Model Generation with Provable Coverability for Offline Reinforcement Learning

06/01/2022
by Chengxing Jia, et al.

Model-based offline optimization with a dynamics-aware policy offers a new perspective on policy learning and out-of-distribution generalization: the learned policy can adapt to the different dynamics enumerated at the training stage. However, under the constraints of the offline setting, the learned model cannot mimic the real dynamics well enough to support reliable out-of-distribution exploration, which still hinders the policy from generalizing well. To narrow this gap, previous works crudely ensemble randomly initialized models to better approximate the real dynamics. Such a practice is costly and inefficient, and it offers no guarantee on how well the learned models can approximate the real dynamics, a property we name coverability in this paper. We actively address this issue by generating models that provably cover the real dynamics in an efficient and controllable way. To that end, we design a distance metric for dynamics models based on the occupancy of policies under those dynamics, and propose an algorithm that generates models optimizing their coverage of the real dynamics. We give a theoretical analysis of the model generation process and prove that our algorithm provides enhanced coverability. As a downstream task, we train a dynamics-aware policy with little or no conservative penalty, and experiments demonstrate that our algorithm outperforms prior offline methods on existing offline RL benchmarks. We also find that policies learned by our method achieve better zero-shot transfer performance, implying better generalization.
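The abstract gives no implementation details, but the two ideas it names can be illustrated concretely. Below is a minimal, self-contained Python sketch, not the authors' method: an occupancy-based distance between dynamics models, estimated by rolling out the same fixed probe policies under each model and comparing the induced state occupancies, plus a greedy generation loop that keeps candidate models far apart under that distance so the set spreads out and is more likely to cover the unknown real dynamics. All names here (ToyModel, occupancy, generate_model_set, the 1-D toy dynamics, the Gaussian perturbation scheme) are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

class ToyModel:
    """Stand-in for a learned dynamics model: 1-D state in [0, 1],
    transition s' = a*s + b*action + noise."""
    def __init__(self, a, b):
        self.a, self.b = a, b

    def reset(self):
        return 0.5

    def step(self, s, action):
        s_next = self.a * s + self.b * action + 0.01 * rng.standard_normal()
        return float(np.clip(s_next, 0.0, 1.0))

def occupancy(model, policy, horizon=50, episodes=20, bins=20):
    """Discretized state occupancy of `policy` rolled out under `model`."""
    counts = np.zeros(bins)
    for _ in range(episodes):
        s = model.reset()
        for _ in range(horizon):
            s = model.step(s, policy(s))
            counts[int(s * (bins - 1))] += 1
    return counts / counts.sum()

def occupancy_distance(m1, m2, probe_policies):
    """Average total-variation distance between the occupancies that the
    same probe policies induce under the two models."""
    return float(np.mean([
        0.5 * np.abs(occupancy(m1, pi) - occupancy(m2, pi)).sum()
        for pi in probe_policies
    ]))

def generate_model_set(base_model, probe_policies, n_models=4, n_candidates=10):
    """Greedy generation: start from the model fit to the offline data, then
    repeatedly add the perturbed candidate farthest (in occupancy distance)
    from its nearest already-accepted model."""
    models = [base_model]
    while len(models) < n_models:
        candidates = [
            ToyModel(base_model.a + 0.3 * rng.standard_normal(),
                     base_model.b + 0.3 * rng.standard_normal())
            for _ in range(n_candidates)
        ]
        best = max(candidates, key=lambda c: min(
            occupancy_distance(c, m, probe_policies) for m in models))
        models.append(best)
    return models

# Usage: three fixed probe policies and a base model "fit" to offline data.
probe_policies = [lambda s: -1.0, lambda s: 0.0, lambda s: 1.0]
model_set = generate_model_set(ToyModel(a=0.9, b=0.1), probe_policies)
print([occupancy_distance(model_set[0], m, probe_policies) for m in model_set[1:]])
```

The farthest-point heuristic above is only one way to turn an occupancy distance into a generation objective; the paper's actual algorithm and its coverability guarantees are given in the full text.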


