Model-based Lookahead Reinforcement Learning

08/15/2019
by Zhang-Wei Hong, et al.

Model-based Reinforcement Learning (MBRL) allows data-efficient learning, which is required in real-world applications such as robotics. However, despite its impressive data-efficiency, MBRL does not reach the final performance of state-of-the-art Model-free Reinforcement Learning (MFRL) methods. We leverage the strengths of both realms and propose an approach that obtains high performance with a small amount of data. In particular, we combine MFRL and Model Predictive Control (MPC). While MFRL's strength in exploration allows us to train a better forward dynamics model for MPC, MPC improves the performance of the MFRL policy by sampling-based planning. The experimental results on standard continuous control benchmarks show that our approach can achieve MFRL's level of performance while being as data-efficient as MBRL.
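The combination described above can be illustrated with a minimal sketch of sampling-based MPC that plans around a model-free policy. This is not the paper's implementation; `policy`, `dynamics_model`, and `reward_fn` are assumed placeholders for a trained MFRL policy, a learned forward model, and a known reward function, and the random-shooting planner is one simple instance of sampling-based planning.

```python
import numpy as np

def mpc_lookahead_action(state, policy, dynamics_model, reward_fn,
                         horizon=5, num_samples=20, noise_std=0.1):
    """Sampling-based MPC: roll out candidate action sequences through a
    learned dynamics model and return the first action of the best one.
    Candidates are drawn by perturbing the model-free policy's actions,
    so the policy acts as a prior that guides the planner."""
    best_return, best_first_action = -np.inf, None
    for _ in range(num_samples):
        s, total_return, first_action = state, 0.0, None
        for t in range(horizon):
            # Explore around the MFRL policy's suggestion with Gaussian noise.
            a = policy(s) + np.random.normal(0.0, noise_std, size=np.shape(policy(s)))
            if t == 0:
                first_action = a
            total_return += reward_fn(s, a)
            s = dynamics_model(s, a)  # predicted next state from the learned model
        if total_return > best_return:
            best_return, best_first_action = total_return, first_action
    return best_first_action
```

At execution time only the first action of the best imagined trajectory is applied, and planning is repeated from the next observed state, which is the standard receding-horizon pattern in MPC.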
