A Comparison of Reward Functions in Q-Learning Applied to a Cart Position Problem

by Amartya Mukherjee, et al.

Growing advancements in reinforcement learning have driven progress in control theory. Reinforcement learning has effectively solved the inverted pendulum problem and, more recently, the double inverted pendulum problem. In reinforcement learning, agents learn by interacting with the control system, with the goal of maximizing rewards. In this paper, we explore three such reward functions in the cart position problem. This paper concludes that a discontinuous reward function, which gives non-zero rewards to agents only when they are within a given distance of the desired position, yields the best results.
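The discontinuous reward described above can be sketched as a simple thresholded function of cart position. This is a minimal illustration, not the paper's exact implementation; the target position, tolerance, and reward magnitude below are assumed values.

```python
def discontinuous_reward(x: float, x_target: float = 0.0, tolerance: float = 0.1) -> float:
    """Sparse, discontinuous reward: non-zero only when the cart is
    within `tolerance` of the desired position `x_target`.

    Note: x_target, tolerance, and the reward value of 1.0 are
    illustrative assumptions, not taken from the paper.
    """
    return 1.0 if abs(x - x_target) <= tolerance else 0.0

# Cart near the target earns a reward; a distant cart earns nothing.
print(discontinuous_reward(0.05))  # 1.0
print(discontinuous_reward(0.50))  # 0.0
```

In a Q-learning loop, this reward would be computed from the cart position component of the environment state at each step; its sparsity means the agent receives no gradient of feedback until it first enters the tolerance band.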

