Flow-based Recurrent Belief State Learning for POMDPs

05/23/2022
by Xiaoyu Chen et al.

The Partially Observable Markov Decision Process (POMDP) provides a principled and generic framework for modeling real-world sequential decision-making, yet it remains difficult to solve, especially in high-dimensional continuous spaces with unknown models. The main challenge lies in accurately obtaining the belief state, i.e., the probability distribution over the unobservable environment states given the historical information. Accurately computing this belief state is a precondition for obtaining an optimal POMDP policy. Recent advances in deep learning show great potential for learning good belief states; however, existing methods can only learn approximate distributions with limited flexibility. In this paper, we introduce the FlOw-based Recurrent BElief State model (FORBES), which incorporates normalizing flows into variational inference to learn general continuous belief states for POMDPs. Furthermore, we show that the learned belief states can be plugged into downstream RL algorithms to improve performance. In experiments, our method successfully captures complex belief states that enable multi-modal predictions as well as high-quality reconstructions, and results on challenging visual-motor control tasks show that it achieves superior performance and sample efficiency.
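To make the core idea concrete, here is a minimal sketch of how a normalizing flow can turn a simple Gaussian belief into a more flexible, non-Gaussian one. This is not the FORBES model itself (the paper's architecture is recurrent and trained by variational inference); it only illustrates the change-of-variables mechanics with a single planar flow step, using hypothetical parameter values.

```python
import numpy as np

rng = np.random.default_rng(0)

def planar_flow(z, u, w, b):
    """Apply one planar flow f(z) = z + u * tanh(w.z + b).
    Returns (f(z), log|det df/dz|), the log-determinant needed for the
    change-of-variables density correction."""
    a = np.tanh(z @ w + b)            # (N,) pre-activation per sample
    f = z + np.outer(a, u)            # (N, D) transformed samples
    psi = np.outer(1.0 - a**2, w)     # (N, D) gradient of tanh term
    log_det = np.log(np.abs(1.0 + psi @ u))
    return f, log_det

# Base belief: samples from a standard Gaussian with their log-densities.
D = 2
z = rng.standard_normal((1000, D))
log_q = -0.5 * (z**2).sum(axis=1) - 0.5 * D * np.log(2 * np.pi)

# One flow step warps the Gaussian into a non-Gaussian belief; the
# density of each transformed sample is log_q minus the log-determinant.
# (u, w, b are illustrative; w.u = 2 > -1 keeps the flow invertible.)
u = np.array([2.0, 0.0])
w = np.array([1.0, 1.0])
zk, log_det = planar_flow(z, u, w, b=0.0)
log_qk = log_q - log_det
```

Stacking several such steps (and conditioning their parameters on a recurrent history encoding) is what lets a flow-based belief model represent multi-modal posteriors that a single Gaussian cannot.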


Related research

- 11/15/2018 — Neural Predictive Belief Representations
- 02/17/2023 — Utilization of domain knowledge to improve POMDP belief estimation
- 10/01/2018 — Bayesian Policy Optimization for Model Uncertainty
- 06/30/2011 — Finding Approximate POMDP Solutions Through Belief Compression
- 01/16/2013 — Value-Directed Belief State Approximation for POMDPs
- 05/17/2019 — Optimizing Sequential Medical Treatments with Auto-Encoding Heuristic Search in POMDPs
- 09/30/2011 — Anytime Point-Based Approximations for Large POMDPs
