COptiDICE: Offline Constrained Reinforcement Learning via Stationary Distribution Correction Estimation

by Jongmin Lee, et al.

We consider the offline constrained reinforcement learning (RL) problem, in which the agent aims to compute a policy that maximizes expected return while satisfying given cost constraints, learning only from a pre-collected dataset. This problem setting is appealing in many real-world scenarios where direct interaction with the environment is costly or risky, and where the resulting policy must comply with safety constraints. However, it is challenging to compute a policy that is guaranteed to satisfy the cost constraints in the offline RL setting, since off-policy evaluation inherently involves estimation error. In this paper, we present an offline constrained RL algorithm that optimizes the policy in the space of stationary distributions. Our algorithm, COptiDICE, directly estimates the stationary distribution corrections of the return-optimal policy while constraining an upper bound on the cost, with the goal of yielding a cost-conservative policy that actually satisfies the constraints. Experimental results show that COptiDICE attains better policies in terms of both constraint satisfaction and return maximization, outperforming baseline algorithms.
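The abstract describes optimizing in the space of stationary distributions: learn corrections w(s,a) = d^π(s,a) / d^D(s,a) that maximize the weighted return subject to an upper bound on the weighted cost. The following toy sketch (not the paper's implementation; all names, hyperparameters, and the simulated data are illustrative) shows the general shape of such a primal-dual scheme: projected gradient ascent on the corrections with a divergence-style regularizer toward the dataset distribution, and dual ascent on a Lagrange multiplier for the cost constraint.

```python
import numpy as np

# Toy sketch of a constrained stationary-distribution-correction objective:
#   maximize  E_dD[w * r] - alpha * E_dD[w * log w]
#   subject to E_dD[w * c] <= threshold,  w >= 0,  E_dD[w] = 1.
# w(s,a) plays the role of a stationary distribution correction; the KL-style
# regularizer keeps w close to the dataset distribution. Purely illustrative.
rng = np.random.default_rng(0)
n = 1000
r = rng.normal(1.0, 0.5, n)    # rewards observed in the offline dataset
c = rng.uniform(0.0, 1.0, n)   # costs observed in the offline dataset
threshold = 0.4                # cost limit the learned policy must satisfy

w = np.ones(n)                 # corrections, initialized at the data distribution
lam = 0.0                      # Lagrange multiplier for the cost constraint
lr, lam_lr, alpha = 0.1, 0.1, 1.0

for _ in range(500):
    # Gradient of the Lagrangian w.r.t. w (the -log w term comes from the
    # KL regularizer toward the dataset distribution).
    grad = r - lam * c - alpha * np.log(np.maximum(w, 1e-8))
    w = np.maximum(w + lr * grad, 0.0)   # project onto w >= 0
    w = w / w.mean()                     # project onto E_dD[w] = 1
    # Dual ascent: raise lam while the estimated cost exceeds the limit.
    lam = max(0.0, lam + lam_lr * ((w * c).mean() - threshold))

print(f"estimated cost   {(w * c).mean():.3f} (limit {threshold})")
print(f"estimated return {(w * r).mean():.3f}")
```

The dual variable settles so that the reweighted cost hovers near the threshold when the constraint binds, while the primal step tilts the weights toward high-reward, low-cost samples; a cost-conservative variant would additionally inflate the cost estimate to hedge against estimation error.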



Related papers:

- Constraints Penalized Q-Learning for Safe Offline Reinforcement Learning
- SaFormer: A Conditional Sequence Modeling Approach to Offline Safe Reinforcement Learning
- Balancing Constraints and Rewards with Meta-Gradient D4PG
- Solving Constrained Reinforcement Learning through Augmented State and Reward Penalties
- Neural-Progressive Hedging: Enforcing Constraints in Reinforcement Learning with Stochastic Programming
- OptiDICE: Offline Policy Optimization via Stationary Distribution Correction Estimation
- Structural Return Maximization for Reinforcement Learning
