Bayesian Q-learning With Imperfect Expert Demonstrations

by Fengdi Che et al.

Guided exploration with expert demonstrations improves data efficiency for reinforcement learning, but current algorithms often overuse expert information. We propose a novel algorithm that speeds up Q-learning using a limited amount of imperfect expert demonstrations. The algorithm avoids excessive reliance on expert data by relaxing the optimal-expert assumption and gradually reducing the usage of uninformative expert data. Experimentally, we evaluate our approach on a sparse-reward chain environment and six more complex Atari games with delayed rewards. The proposed method outperforms Deep Q-learning from Demonstrations (Hester et al., 2017) in most of these environments.
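The core idea described above, biasing action selection toward an imperfect expert while gradually decaying that influence, can be illustrated with a minimal tabular sketch. The chain environment, the additive expert bonus, and the 1/(1+episode) decay schedule below are all illustrative assumptions for this sketch, not the paper's actual Bayesian formulation.

```python
import random

# Sketch: tabular Q-learning on a sparse-reward chain, with an
# expert-action bonus whose weight decays across episodes.
# The environment, bonus, and decay schedule are assumptions
# for illustration, not the paper's method.

N = 8                              # chain length; reward only at the right end
ACTIONS = [0, 1]                   # 0 = move left, 1 = move right
EXPERT = {s: 1 for s in range(N)}  # imperfect "expert" hint: always move right

def step(s, a):
    s2 = min(s + 1, N - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s2 == N - 1 else 0.0
    return s2, r, s2 == N - 1

def train(episodes=500, alpha=0.5, gamma=0.95, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N)]
    for ep in range(episodes):
        lam = 1.0 / (1.0 + ep)     # decaying weight on the expert's advice
        s = 0
        for _ in range(4 * N):
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                # Bias action scores toward the expert's suggestion;
                # the bias vanishes as lam -> 0, leaving plain Q-learning.
                scores = [Q[s][a] + (lam if a == EXPERT[s] else 0.0)
                          for a in ACTIONS]
                a = max(ACTIONS, key=lambda a: scores[a])
            s2, r, done = step(s, a)
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
            if done:
                break
    return Q

Q = train()
# After training, "right" should dominate the greedy policy along the chain.
```

Because the expert weight decays, early episodes reach the rewarding end of the chain quickly, while later episodes rely only on the learned Q-values, so an imperfect expert cannot permanently distort the final policy.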




Related papers:

Reinforcement Learning from Imperfect Demonstrations under Soft Expert Guidance

In this paper, we study Reinforcement Learning from Demonstrations (RLfD...

Learning to control from expert demonstrations

In this paper, we revisit the problem of learning a stabilizing controll...

Expert Q-learning: Deep Q-learning With State Values From Expert Examples

We propose a novel algorithm named Expert Q-learning. Expert Q-learning ...

Hierarchical Deep Q-Network with Forgetting from Imperfect Demonstrations in Minecraft

We present hierarchical Deep Q-Network with Forgetting (HDQF) that took ...

Receding Horizon Inverse Reinforcement Learning

Inverse reinforcement learning (IRL) seeks to infer a cost function that...

Pretrain Soft Q-Learning with Imperfect Demonstrations

Pretraining reinforcement learning methods with demonstrations has been ...

Bayesian Experience Reuse for Learning from Multiple Demonstrators

Learning from demonstrations (LfD) improves the exploration efficiency o...
