Generalized Optimality Guarantees for Solving Continuous Observation POMDPs through Particle Belief MDP Approximation

10/10/2022
by Michael H. Lim, et al.

Partially observable Markov decision processes (POMDPs) provide a flexible representation for real-world decision and control problems. However, POMDPs are notoriously difficult to solve, especially when the state and observation spaces are continuous or hybrid, as is often the case for physical systems. While recent online sampling-based POMDP algorithms that plan with observation likelihood weighting have shown practical effectiveness, a general theory bounding the approximation error of the particle filtering techniques that these algorithms use has not previously been proposed. Our main contribution is to formally justify that optimality guarantees in a finite-sample particle belief MDP (PB-MDP) approximation of a POMDP/belief MDP yield optimality guarantees in the original POMDP as well. This fundamental bridge between PB-MDPs and POMDPs allows us to adapt any sampling-based MDP algorithm of choice to a POMDP by solving the corresponding particle belief MDP approximation while preserving the convergence guarantees in the POMDP. Practically, this requires only assuming access to the observation density model and swapping the state transition generative model out for a particle-filtering-based model, which increases the computational complexity by only a factor of 𝒪(C), with C the number of particles in a particle belief state. In addition to our theoretical contribution, we perform five numerical experiments on benchmark POMDPs to demonstrate that a simple MDP algorithm adapted through PB-MDP approximation, Sparse-PFT, achieves performance competitive with other leading continuous observation POMDP solvers.
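To make the construction concrete, the sketch below illustrates one generative step of a particle belief MDP in Python. This is a minimal illustration under stated assumptions, not the paper's implementation: the function and argument names (pbmdp_generative_model, gen_model, obs_density) are hypothetical, and the exact sampling and reward conventions may differ from those in the paper. The idea is that a planner treats the weighted particle set as an ordinary MDP state; each step propagates every particle through the POMDP's generative model and reweights it by the observation density, which is where the 𝒪(C) overhead comes from.

```python
import random

def pbmdp_generative_model(particles, weights, action, gen_model, obs_density):
    """One generative step of a particle belief MDP (hedged sketch).

    particles, weights : current particle belief state (C particles)
    gen_model          : (s, a) -> (s', o, r), the POMDP generative model
    obs_density        : (o, s', a) -> Z(o | a, s'), the observation density

    Returns the next particle belief state and a scalar belief reward.
    """
    # Sample one particle (weight-proportional) to generate a shared observation.
    i = random.choices(range(len(particles)), weights=weights, k=1)[0]
    _, obs, _ = gen_model(particles[i], action)

    # Propagate every particle and reweight it by the likelihood of that
    # observation; this O(C) loop is the extra cost over a plain MDP step.
    next_particles, next_weights, rewards = [], [], []
    for s, w in zip(particles, weights):
        s_next, _, r = gen_model(s, action)
        next_particles.append(s_next)
        next_weights.append(w * obs_density(obs, s_next, action))
        rewards.append(r)

    # Belief reward: weighted average of the per-particle rewards.
    total_w = sum(weights) or 1.0
    belief_reward = sum(w * r for w, r in zip(weights, rewards)) / total_w

    # Normalize the new weights, guarding against total degeneracy.
    total = sum(next_weights) or 1.0
    next_weights = [w / total for w in next_weights]
    return next_particles, next_weights, belief_reward
```

Any sampling-based MDP planner, for example Monte Carlo tree search or sparse sampling, can then be run unchanged, treating (particles, weights) pairs as states and querying a function like this as its generative transition model.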


Related research

10/10/2019
Sparse tree search optimality guarantees in POMDPs with continuous observation spaces
Partially observable Markov decision processes (POMDPs) with continuous ...

04/07/2018
Hindsight is Only 50/50: Unsuitability of MDP based Approximate POMDP Solvers for Multi-resolution Information Gathering
Partially Observable Markov Decision Processes (POMDPs) offer an elegant...

03/15/2012
A Scalable Method for Solving High-Dimensional Continuous POMDPs Using Local Approximation
Partially-Observable Markov Decision Processes (POMDPs) are typically so...

12/18/2020
Voronoi Progressive Widening: Efficient Online Solvers for Continuous Space MDPs and POMDPs with Provably Optimal Components
Markov decision processes (MDPs) and partially observable MDPs (POMDPs) ...

09/18/2017
POMCPOW: An online algorithm for POMDPs with continuous state, action, and observation spaces
Online solvers for partially observable Markov decision processes have b...

11/12/2021
Q-Learning for MDPs with General Spaces: Convergence and Near Optimality via Quantization under Weak Continuity
Reinforcement learning algorithms often require finiteness of state and ...

03/02/2022
Learning Efficiently Function Approximation for Contextual MDP
We study learning contextual MDPs using a function approximation for bot...
