Approximation Methods for Partially Observed Markov Decision Processes (POMDPs)

08/31/2021
by Caleb M. Bowyer, et al.

POMDPs are useful models for systems whose true underlying state is not fully known to an outside observer; instead, the observer receives only a noisy version of the true system state. When a POMDP has a large number of states, approximation methods are often necessary to obtain near-optimal solutions for control. This survey is centered on the origins, theory, and approximations of finite-state POMDPs. Understanding POMDPs requires background on finite-state Markov Decision Processes (MDPs), covered in <ref>, and Hidden Markov Models (HMMs), covered in <ref>. For this background theory, I provide only essential details on MDPs and HMMs, leaving longer expositions to textbook treatments, before diving into the main topics of POMDPs. Once the required background is covered, the POMDP is introduced in <ref>, and its origins are explained in the classical papers section <ref>. Once the high computational requirements of exact methods are understood, the main approximation methods are surveyed in <ref>. The survey ends with some new research directions in <ref>.
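To make the "noisy observation of a hidden state" idea concrete, here is a minimal sketch of the standard Bayes-filter belief update that underlies POMDP solution methods. The two-state transition matrix, observation matrix, and the fixed-action setup are illustrative assumptions, not taken from the survey itself.

```python
import numpy as np

# Hypothetical 2-state POMDP under a single fixed action. The observer never
# sees the state directly; it maintains a belief (probability vector) over states.
T = np.array([[0.9, 0.1],   # T[s, s'] = P(next state s' | current state s)
              [0.2, 0.8]])
O = np.array([[0.8, 0.2],   # O[s', o] = P(observation o | next state s')
              [0.3, 0.7]])

def belief_update(b, obs):
    """One Bayes-filter step: predict through T, then correct by the
    likelihood of the received observation, and renormalize."""
    predicted = b @ T                    # sum_s b(s) * T(s, s')
    unnormalized = predicted * O[:, obs]  # weight by P(obs | s')
    return unnormalized / unnormalized.sum()

b = np.array([0.5, 0.5])   # start fully uncertain
b = belief_update(b, obs=0)
print(b)                   # posterior belief after seeing observation 0
```

The belief vector `b` is itself a sufficient statistic for the history of actions and observations, which is why exact POMDP methods can be viewed as solving a (continuous-state) MDP over beliefs, and why that reformulation is computationally expensive for large state spaces.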
