Structured Evolution with Compact Architectures for Scalable Policy Optimization

04/06/2018
by Krzysztof Choromanski, et al.

We present a new method for blackbox optimization via gradient approximation using structured random orthogonal matrices, providing estimators that are more accurate than baselines and come with provable theoretical guarantees. We show that this algorithm can be applied to learn better-quality compact policies than standard gradient estimation techniques. The compact policies we learn have several advantages over unstructured ones, including faster training and faster inference; these benefits matter when a policy is deployed on real hardware with limited resources. Further, compact policies provide more scalable architectures for derivative-free optimization (DFO) in high-dimensional spaces. We show that most robotics tasks from the OpenAI Gym can be solved using neural networks with fewer than 300 parameters, with almost linear time complexity in the inference phase and up to 13x fewer parameters than the Evolution Strategies (ES) algorithm of Salimans et al. (2017). We do not need heuristics such as fitness shaping to learn good-quality policies, resulting in a simple and theoretically motivated training mechanism.
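The estimator described above can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's exact construction: the paper obtains structured orthogonal directions efficiently (e.g., via products of random Hadamard-type matrices), while this sketch simply orthogonalizes plain Gaussian samples with a QR decomposition and combines antithetic function evaluations into a gradient estimate of the Gaussian-smoothed objective. All names here (`orthogonal_directions`, `es_gradient`) are illustrative, not from the paper.

```python
import numpy as np

def orthogonal_directions(n, d, rng):
    """Sample n <= d Gaussian directions and orthogonalize them via QR,
    rescaling each row to the norm of the original Gaussian sample so the
    marginal distribution of each direction matches the unstructured case."""
    g = rng.standard_normal((n, d))
    q, _ = np.linalg.qr(g.T)            # columns of q are orthonormal
    rows = q.T[:n]                      # n mutually orthogonal unit rows
    norms = np.linalg.norm(g, axis=1)   # Gaussian row norms
    return rows * norms[:, None]

def es_gradient(f, theta, n_directions, sigma=0.1, rng=None):
    """Antithetic ES-style gradient estimate of the smoothed objective:
    grad ~= (1 / (2 sigma n)) * sum_i [f(theta + sigma e_i) - f(theta - sigma e_i)] e_i
    using mutually orthogonal perturbation directions e_i."""
    if rng is None:
        rng = np.random.default_rng(0)
    d = theta.size
    eps = orthogonal_directions(n_directions, d, rng)
    grad = np.zeros(d)
    for e in eps:
        grad += (f(theta + sigma * e) - f(theta - sigma * e)) * e
    return grad / (2.0 * sigma * n_directions)
```

For a quadratic objective the antithetic finite difference recovers the exact directional derivative, so the estimate is a positive-definite transform of the true gradient and always ascends it; the orthogonality of the directions removes the redundancy that independent Gaussian samples can exhibit in high dimensions.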


Related research

- 07/10/2019 · Reinforcement Learning with Chromatic Networks
- 06/19/2020 · An Ode to an ODE
- 05/29/2019 · Linear interpolation gives better gradients than Gaussian smoothing in derivative-free optimization
- 05/20/2018 · Optimizing Simulations with Noise-Tolerant Structured Exploration
- 03/07/2019 · When random search is not enough: Sample-Efficient and Noise-Robust Blackbox Optimization of RL Policies
- 08/02/2022 · Implicit Two-Tower Policies
- 05/29/2016 · TripleSpin - a generic compact paradigm for fast machine learning computations
