Braxlines: Fast and Interactive Toolkit for RL-driven Behavior Engineering beyond Reward Maximization

by Shixiang Shane Gu et al.

The goal of continuous control is to synthesize desired behaviors. In reinforcement learning (RL)-driven approaches, this is often accomplished through careful task reward engineering for efficient exploration, followed by running an off-the-shelf RL algorithm. While reward maximization is at the core of RL, reward engineering is not the only – and sometimes not the easiest – way to specify complex behaviors. In this paper, we introduce Braxlines, a toolkit for fast and interactive RL-driven behavior generation beyond simple reward maximization. It includes Composer, a programmatic API for generating continuous control environments, and a set of stable, well-tested baselines for two families of algorithms – mutual information maximization (MiMax) and divergence minimization (DMin) – supporting unsupervised skill learning and distribution sketching as alternative modes of behavior specification. In addition, we discuss how to standardize metrics for evaluating these algorithms, which can no longer rely on simple reward maximization. Our implementations build on the hardware-accelerated Brax simulator in Jax with minimal modifications, enabling behavior synthesis within minutes of training. We hope Braxlines serves as an interactive toolkit for rapid creation and testing of environments and behaviors, empowering an explosion of future benchmark designs and new modes of RL-driven behavior generation, along with their algorithmic research.
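To make the MiMax family concrete, here is a minimal NumPy sketch of a DIAYN-style skill reward, where the intrinsic reward is log q(z|s) − log p(z) for a learned skill discriminator q and a uniform skill prior p. The linear discriminator `W` and the function name `skill_reward` are illustrative stand-ins, not Braxlines' actual model or API.

```python
import numpy as np

rng = np.random.default_rng(0)

num_skills = 4  # number of discrete skills z
obs_dim = 2

# Hypothetical linear skill discriminator q(z | s): logits = s @ W.
# In practice this would be a neural network trained alongside the policy.
W = rng.normal(size=(obs_dim, num_skills))

def skill_reward(obs, skill_id):
    """DIAYN-style MiMax reward: log q(z|s) - log p(z), uniform prior p(z)."""
    logits = obs @ W
    logits = logits - logits.max()                  # numerical stability
    log_q = logits - np.log(np.exp(logits).sum())   # log-softmax over skills
    log_p = -np.log(num_skills)                     # log of uniform prior
    return log_q[skill_id] - log_p

obs = rng.normal(size=obs_dim)
r = skill_reward(obs, skill_id=1)
```

A positive `r` means the discriminator identifies skill 1 from this state better than chance, which is the signal that drives the policy to make its skills distinguishable.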


