Learning Safety Constraints from Demonstrations with Unknown Rewards

by David Lindner et al.

We propose Convex Constraint Learning for Reinforcement Learning (CoCoRL), a novel approach for inferring shared constraints in a Constrained Markov Decision Process (CMDP) from a set of safe demonstrations with possibly different reward functions. While previous work is limited to demonstrations with known rewards or fully known environment dynamics, CoCoRL can learn constraints from demonstrations with different unknown rewards without knowledge of the environment dynamics. CoCoRL constructs a convex safe set based on demonstrations, which provably guarantees safety even for potentially sub-optimal (but safe) demonstrations. For near-optimal demonstrations, CoCoRL converges to the true safe set with no policy regret. We evaluate CoCoRL in tabular environments and a continuous driving simulation with multiple constraints. CoCoRL learns constraints that lead to safe driving behavior and that can be transferred to different tasks and environments. In contrast, alternative methods based on Inverse Reinforcement Learning (IRL) often exhibit poor performance and learn unsafe policies.
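The convex-safe-set idea at the heart of the abstract can be sketched in a few lines. Assuming each demonstrated policy is summarized by its feature-expectation vector and that constraint costs are linear in those features, any convex combination of safe feature expectations also satisfies the constraints, so the convex hull of the demonstrations is itself safe. The helper `in_convex_hull` below is a hypothetical illustration of that membership test (posed as a feasibility linear program), not the paper's actual implementation:

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(point, demos):
    """Return True if `point` lies in the convex hull of the rows of `demos`.

    Membership is a feasibility LP: find weights w >= 0 with
    sum(w) = 1 and demos.T @ w = point. If such weights exist, `point`
    is a convex combination of safe demonstrations, and linearity of the
    constraint costs certifies it as safe.
    """
    demos = np.asarray(demos, dtype=float)
    n = demos.shape[0]
    # Stack the equality constraints: feature match plus the simplex condition.
    A_eq = np.vstack([demos.T, np.ones(n)])
    b_eq = np.append(np.asarray(point, dtype=float), 1.0)
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    return res.success

# Feature expectations of three safe demonstrated policies (hypothetical values).
demos = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]

print(in_convex_hull([0.3, 0.3], demos))  # inside the hull -> certified safe
print(in_convex_hull([2.0, 2.0], demos))  # outside -> not certified by the demos
```

Note that a point outside the hull is not necessarily unsafe; the construction is conservative, which is what makes the safety guarantee hold even for sub-optimal demonstrations.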



