Finding Counterfactually Optimal Action Sequences in Continuous State Spaces

by Stratis Tsirtsis et al.

Humans performing tasks that involve a series of dependent actions over time often learn from experience by reflecting on specific cases and points in time where different actions could have led to significantly better outcomes. While recent machine learning methods for retrospectively analyzing sequential decision making processes promise to aid decision makers in identifying such cases, they have focused on environments with finitely many discrete states. However, in many practical applications, the state of the environment is inherently continuous. In this paper, we aim to fill this gap. We start by formally characterizing a sequence of discrete actions and continuous states using finite horizon Markov decision processes and a broad class of bijective structural causal models. Building upon this characterization, we formalize the problem of finding counterfactually optimal action sequences and show that, in general, we cannot expect to solve it in polynomial time. Then, we develop a search method based on the A* algorithm that, under a natural form of Lipschitz continuity of the environment's dynamics, is guaranteed to return the optimal solution to the problem. Experiments on real clinical data show that our method is very efficient in practice and has the potential to offer interesting insights for sequential decision making tasks.
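To make the search idea concrete, here is a minimal sketch of A* over fixed-horizon action sequences in a toy continuous-state environment. This is not the paper's method or its causal setup; the dynamics (`step`), reward, target, and Lipschitz constant `L` are all hypothetical choices for illustration. The heuristic uses the Lipschitz bound on how fast the state can approach the target to lower-bound the remaining cost, which keeps it admissible.

```python
import heapq

# Toy setup (all values hypothetical): 1-D continuous state,
# three discrete actions, finite horizon, reward = closeness to a target.
ACTIONS = [-1.0, 0.0, 1.0]
HORIZON = 4
TARGET = 3.0
L = 1.0  # each action changes the state by at most L (Lipschitz-style bound)

def step(state, action):
    # Deterministic toy dynamics: the action shifts the state.
    return state + action

def reward(state):
    # Higher reward the closer the state is to TARGET.
    return -abs(state - TARGET)

def heuristic(state, steps_left):
    # Admissible lower bound on remaining cost: the distance to TARGET
    # can shrink by at most L per step, so no action sequence can incur
    # less future cost than this.
    h, d = 0.0, abs(state - TARGET)
    for _ in range(steps_left):
        d = max(0.0, d - L)
        h += d
    return h

def astar(start_state=0.0):
    # Frontier entries: (f = g + h, g = cost so far, state, action sequence).
    frontier = [(heuristic(start_state, HORIZON), 0.0, start_state, ())]
    while frontier:
        f, g, state, actions = heapq.heappop(frontier)
        if len(actions) == HORIZON:
            return actions, -g  # best sequence and its total reward
        steps_left = HORIZON - len(actions) - 1
        for a in ACTIONS:
            s2 = step(state, a)
            g2 = g - reward(s2)  # cost = negative accumulated reward
            heapq.heappush(
                frontier,
                (g2 + heuristic(s2, steps_left), g2, s2, actions + (a,)),
            )

best_actions, total_reward = astar()
print(best_actions, total_reward)
```

Because the heuristic never overestimates the remaining cost, the first completed sequence popped from the frontier is guaranteed optimal, mirroring the role the Lipschitz assumption plays in the paper's guarantee.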


