SRL: Scaling Distributed Reinforcement Learning to Over Ten Thousand Cores

06/29/2023
by Zhiyu Mei, et al.

The ever-growing complexity of reinforcement learning (RL) tasks demands a distributed RL system to efficiently generate and process a massive amount of data to train intelligent agents. However, existing open-source libraries suffer from various limitations, which impede their practical use in challenging scenarios where large-scale training is necessary. While industrial systems from OpenAI and DeepMind have achieved successful large-scale RL training, their system architecture and implementation details remain undisclosed to the community. In this paper, we present a novel abstraction of the dataflows of RL training, which unifies practical RL training across diverse applications into a general framework and enables fine-grained optimizations. Following this abstraction, we develop a scalable, efficient, and extensible distributed RL system called ReaLly Scalable RL (SRL). The system architecture of SRL separates major RL computation components and allows massively parallelized training. Moreover, SRL offers user-friendly and extensible interfaces for customized algorithms. Our evaluation shows that SRL outperforms existing academic libraries on both a single machine and a medium-sized cluster. In a large-scale cluster, the novel architecture of SRL leads to up to 3.7x speedup compared to the design choices adopted by the existing libraries. We also conduct a direct benchmark comparison to OpenAI's industrial system, Rapid, in the challenging hide-and-seek environment. SRL reproduces the same solution as reported by OpenAI with up to 5x speedup in wall-clock time. Furthermore, we examine the performance of SRL in a much harder variant of the hide-and-seek environment and achieve substantial learning speedup by scaling SRL to over 15k CPU cores and 32 A100 GPUs. Notably, SRL is the first in the academic community to perform RL experiments at such a large scale.
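The abstract's central architectural idea, separating the major RL computation components so that environment simulation, policy inference, and training can be parallelized independently, can be illustrated with a toy sketch. The sketch below is an assumption-laden illustration rather than SRL's actual API: the ActorWorker, PolicyWorker, and TrainerWorker names, the SampleBatch container, and the queue-based handoff are hypothetical stand-ins for the real system's workers and data streams.

    # A minimal sketch (not SRL's real interface) of the worker separation
    # described in the abstract: actor workers step environments, a policy
    # worker serves action inference, and a trainer worker consumes sample
    # batches to perform updates. All names here are illustrative.
    import queue
    import random
    from dataclasses import dataclass, field
    from typing import List


    @dataclass
    class SampleBatch:
        """A chunk of trajectory data shipped from actors to trainers."""
        observations: List[float] = field(default_factory=list)
        actions: List[int] = field(default_factory=list)
        rewards: List[float] = field(default_factory=list)


    class PolicyWorker:
        """Serves (possibly batched) inference requests from many actors."""
        def __init__(self, num_actions: int = 4):
            self.num_actions = num_actions

        def act(self, obs: float) -> int:
            # Placeholder for a neural-network forward pass.
            return int(obs * 1e6) % self.num_actions


    class ActorWorker:
        """Steps environments; delegates action selection to a policy worker."""
        def __init__(self, policy: PolicyWorker, sample_queue: queue.Queue, horizon: int = 8):
            self.policy = policy
            self.sample_queue = sample_queue
            self.horizon = horizon

        def run_episode(self) -> None:
            batch = SampleBatch()
            obs = random.random()                 # stand-in for env.reset()
            for _ in range(self.horizon):
                action = self.policy.act(obs)     # remote inference in a real system
                reward = random.random()          # stand-in for env.step(action)
                batch.observations.append(obs)
                batch.actions.append(action)
                batch.rewards.append(reward)
                obs = random.random()
            self.sample_queue.put(batch)          # hand off to trainer workers


    class TrainerWorker:
        """Consumes sample batches and performs parameter updates."""
        def __init__(self, sample_queue: queue.Queue):
            self.sample_queue = sample_queue

        def train_step(self) -> float:
            batch = self.sample_queue.get()
            # Placeholder metric in place of a gradient update.
            return sum(batch.rewards) / max(len(batch.rewards), 1)


    if __name__ == "__main__":
        samples: queue.Queue = queue.Queue()
        policy = PolicyWorker()
        actors = [ActorWorker(policy, samples) for _ in range(4)]
        trainer = TrainerWorker(samples)
        for actor in actors:
            actor.run_episode()
        for _ in range(len(actors)):
            print("train step metric:", trainer.train_step())

In a real large-scale deployment these roles would run as separate processes or machines connected by sample and inference streams; the in-process queue above only mirrors the dataflow separation, not the distributed transport.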
