Risk-Sensitive and Robust Model-Based Reinforcement Learning and Planning

by Marc Rigter et al.

Many sequential decision-making problems that are currently automated, such as those in manufacturing or recommender systems, operate in environments with little uncertainty or no risk of catastrophe. As companies and researchers attempt to deploy autonomous systems in less constrained environments, it is increasingly important that we endow sequential decision-making algorithms with the ability to reason about uncertainty and risk. In this thesis, we address both planning and reinforcement learning (RL) approaches to sequential decision-making. In the planning setting, it is assumed that a model of the environment is provided, and a policy is optimised within that model. Reinforcement learning relies upon extensive random exploration, and therefore usually requires a simulator in which to perform training. In many real-world domains, it is impossible to construct a perfectly accurate model or simulator. Therefore, the performance of any policy is inevitably uncertain due to incomplete knowledge about the environment. Furthermore, in stochastic domains, the outcome of any given run is also uncertain due to the inherent randomness of the environment. These two sources of uncertainty are usually classified as epistemic and aleatoric uncertainty, respectively. The overarching goal of this thesis is to contribute to developing algorithms that mitigate both sources of uncertainty in sequential decision-making problems. We make a number of contributions towards this goal, with a focus on model-based algorithms...
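The epistemic/aleatoric distinction described above can be made concrete with a toy sketch (not taken from the thesis). Assume a hypothetical one-dimensional environment whose true dynamics add Gaussian noise, and a simple ensemble of models each fitted to its own batch of experience: disagreement between ensemble members then reflects epistemic uncertainty (incomplete knowledge of the dynamics), while each member's estimated noise scale reflects aleatoric uncertainty (inherent randomness). All names here are illustrative assumptions.

```python
import random
import statistics

random.seed(0)

def true_step(s, a):
    # Hypothetical dynamics: drift by the action plus inherent
    # Gaussian noise with standard deviation 0.5 (aleatoric source).
    return s + a + random.gauss(0.0, 0.5)

def fit_model(data):
    # Fit the mean drift and residual noise scale from (s, a, s') tuples.
    residuals = [sp - s - a for (s, a, sp) in data]
    return statistics.mean(residuals), statistics.stdev(residuals)

# Train each ensemble member on its own small batch of experience;
# limited data is what gives rise to epistemic uncertainty.
ensemble = []
for _ in range(5):
    states = [random.uniform(-1.0, 1.0) for _ in range(20)]
    data = [(s, 1.0, true_step(s, 1.0)) for s in states]
    ensemble.append(fit_model(data))

# Epistemic uncertainty: disagreement across the fitted models.
epistemic = statistics.stdev(m for m, _ in ensemble)
# Aleatoric uncertainty: the noise each model attributes to the environment.
aleatoric = statistics.mean(sd for _, sd in ensemble)

print(f"epistemic ~ {epistemic:.3f}, aleatoric ~ {aleatoric:.3f}")
```

In this sketch, collecting more data shrinks the epistemic term (the models converge), while the aleatoric term stays near the true noise scale of 0.5 no matter how much data is gathered, which is why the two sources call for different mitigation strategies.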



One Risk to Rule Them All: A Risk-Sensitive Perspective on Model-Based Offline Reinforcement Learning

Offline reinforcement learning (RL) is suitable for safety-critical doma...

Don't Treat the Symptom, Find the Cause! Efficient Artificial-Intelligence Methods for (Interactive) Debugging

In the modern world, we are permanently using, leveraging, interacting w...

Non-Deterministic Policies in Markovian Decision Processes

Markovian processes have long been used to model stochastic environments...

Better Safe than Sorry: Evidence Accumulation Allows for Safe Reinforcement Learning

In the real world, agents often have to operate in situations with incom...

Optimal sequential decision making with probabilistic digital twins

Digital twins are emerging in many industries, typically consisting of s...

Single-Trajectory Distributionally Robust Reinforcement Learning

As a framework for sequential decision-making, Reinforcement Learning (R...

Risk-aware Meta-level Decision Making for Exploration Under Uncertainty

Robotic exploration of unknown environments is fundamentally a problem o...
