Dynamic Weights in Multi-Objective Deep Reinforcement Learning

by Axel Abels, et al.

Many real-world decision problems are characterized by multiple objectives that must be balanced according to their relative importance. In the dynamic weights setting this relative importance changes over time, as recognized by Natarajan and Tadepalli (2005), who proposed a tabular reinforcement learning algorithm for this problem. However, that earlier work does not scale to reinforcement learning settings with high-dimensional inputs, which require function approximators such as neural networks. We propose two novel methods for multi-objective RL with dynamic weights: a multi-network approach and a single-network approach that conditions on the weights. Because the dynamic weights setting is inherently non-stationary, standard experience replay techniques are insufficient. We therefore propose diverse experience replay, a framework for maintaining a diverse set of experiences in the replay buffer, and show how it makes experience replay effective in multi-objective RL. To evaluate our algorithms we introduce a new benchmark, the Minecart problem. We show empirically that our algorithms outperform more naive approaches. We also show that, while performance differs significantly between a regime of many small weight changes and one of sparse, larger changes, the conditioned network with diverse experience replay consistently outperforms the other algorithms.
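To make the two ideas in the abstract concrete, the following is a minimal toy sketch, not the paper's actual algorithm: a linear scalarization of a multi-objective Q-vector under a weight vector, and a replay buffer that, once full, evicts the stored transition whose reward vector is closest to the incoming one, so that retained experiences stay spread over the objective space. All names (`scalarize`, `DiverseReplayBuffer`) and the distance-based eviction rule are illustrative assumptions.

```python
import random

def scalarize(q_vector, weights):
    """Scalarized utility of a multi-objective Q-vector under weight vector w."""
    return sum(q * w for q, w in zip(q_vector, weights))

class DiverseReplayBuffer:
    """Toy diversity-preserving replay buffer (illustrative, not the
    paper's method): instead of evicting the oldest transition, a new
    transition replaces the stored one with the nearest reward vector,
    keeping the buffer's coverage of the objective space broad."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []  # each entry: (state, action, reward_vec, next_state)

    @staticmethod
    def _dist(r1, r2):
        # squared Euclidean distance between two reward vectors
        return sum((a - b) ** 2 for a, b in zip(r1, r2))

    def add(self, transition):
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            return
        # buffer full: evict the stored transition closest to the new one
        reward_vec = transition[2]
        idx = min(range(len(self.buffer)),
                  key=lambda i: self._dist(self.buffer[i][2], reward_vec))
        self.buffer[idx] = transition

    def sample(self, k):
        """Uniform sample of up to k stored transitions."""
        return random.sample(self.buffer, min(k, len(self.buffer)))
```

In the paper's single-network approach, the Q-network itself is conditioned on the current weights; here the weights only enter at scalarization time, which is the simplest way to see how one vector-valued value function can serve many weight settings.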


