Guiding Evolutionary Strategies by Differentiable Robot Simulators

by Vladislav Kurenkov et al.

In recent years, Evolutionary Strategies have been actively explored for policy search in robotic tasks, as they provide a simpler alternative to reinforcement learning algorithms. However, this class of algorithms is often claimed to be extremely sample-inefficient. On the other hand, there is growing interest in Differentiable Robot Simulators (DRS), as they can potentially find successful policies with only a handful of trajectories. But the resulting gradient is not always useful for first-order optimization. In this work, we demonstrate how the DRS gradient can be used in conjunction with Evolutionary Strategies. Preliminary results suggest that this combination can reduce the sample complexity of Evolutionary Strategies by a factor of 3x-5x in both simulation and the real world.
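One way to use a simulator gradient inside an ES loop, in the spirit of guided evolutionary strategies, is to bias the ES perturbation distribution toward the direction of that (possibly inexact) gradient. The sketch below is a minimal illustration, not the paper's implementation; the function names, the 1-D guiding subspace, and all hyperparameter values are assumptions for the example.

```python
import numpy as np

def guided_es_step(theta, f, surrogate_grad, n_pairs=8,
                   sigma=0.1, alpha=0.5, lr=0.05):
    """One guided-ES update on parameters `theta` minimizing `f`.

    `surrogate_grad` is a possibly biased gradient estimate, e.g. from
    a differentiable simulator. Perturbations are drawn from a
    covariance mixing the full parameter space (weight `alpha`) with
    the 1-D subspace spanned by the surrogate gradient (weight
    `1 - alpha`), so the search is guided but not fully trusted.
    """
    d = theta.size
    u = surrogate_grad / (np.linalg.norm(surrogate_grad) + 1e-8)
    grad_est = np.zeros(d)
    for _ in range(n_pairs):
        # Sample from covariance (alpha/d) I + (1 - alpha) u u^T.
        eps = (np.sqrt(alpha / d) * np.random.randn(d)
               + np.sqrt(1 - alpha) * np.random.randn() * u)
        # Antithetic ES gradient estimate.
        grad_est += eps * (f(theta + sigma * eps) - f(theta - sigma * eps))
    grad_est /= 2 * sigma * n_pairs
    return theta - lr * grad_est
```

With `alpha = 1` this reduces to plain isotropic ES, and with `alpha` near 0 the search collapses onto the surrogate direction; intermediate values hedge against a misleading DRS gradient.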




Feedback is All You Need: Real-World Reinforcement Learning with Approximate Physics-Based Models

We focus on developing efficient and reliable policy optimization strate...

Behavior-based Neuroevolutionary Training in Reinforcement Learning

In addition to their undisputed success in solving classical optimizatio...

CoNES: Convex Natural Evolutionary Strategies

We present a novel algorithm – convex natural evolutionary strategies (C...

Shaped Policy Search for Evolutionary Strategies using Waypoints

In this paper, we try to improve exploration in Blackbox methods, partic...

Guided evolutionary strategies: escaping the curse of dimensionality in random search

Many applications in machine learning require optimizing a function whos...

Competitive Coevolution through Evolutionary Complexification

Two major goals in machine learning are the discovery and improvement of...

Efficacy of Modern Neuro-Evolutionary Strategies for Continuous Control Optimization

We analyze the efficacy of modern neuro-evolutionary strategies for cont...
