Decision-Theoretic Planning: Structural Assumptions and Computational Leverage

05/27/2011
by C. Boutilier et al.

Planning under uncertainty is a central problem in the study of automated sequential decision making, and has been addressed by researchers in many different fields, including AI planning, decision analysis, operations research, control theory and economics. While the assumptions and perspectives adopted in these areas often differ in substantial ways, many planning problems of interest to researchers in these fields can be modeled as Markov decision processes (MDPs) and analyzed using the techniques of decision theory. This paper presents an overview and synthesis of MDP-related methods, showing how they provide a unifying framework for modeling many classes of planning problems studied in AI. It also describes structural properties of MDPs that, when exhibited by particular classes of problems, can be exploited in the construction of optimal or approximately optimal policies or plans. Planning problems commonly possess structure in the reward and value functions used to describe performance criteria, in the functions used to describe state transitions and observations, and in the relationships among features used to describe states, actions, rewards, and observations. Specialized representations, and algorithms employing these representations, can achieve computational leverage by exploiting these various forms of structure. Certain AI techniques -- in particular those based on the use of structured, intensional representations -- can be viewed in this way. This paper surveys several types of representations for both classical and decision-theoretic planning problems, and planning algorithms that exploit these representations in a number of different ways to ease the computational burden of constructing policies or plans. It focuses primarily on abstraction, aggregation and decomposition techniques based on AI-style representations.
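The abstract treats the MDP as the unifying model for decision-theoretic planning: a policy maps states to actions, and an optimal policy maximizes expected discounted reward. As a minimal illustration of how such a policy can be computed, the sketch below runs value iteration on a tiny hypothetical MDP. The states, actions, transition probabilities, and rewards are invented for illustration and do not come from the paper; the flat-table representation shown here is exactly the unstructured baseline that the structured representations surveyed in the paper aim to improve on.

```python
# Value-iteration sketch for a finite MDP (illustrative toy example;
# the MDP below is hypothetical, not taken from the paper).

GAMMA = 1 - 0.1  # discount factor, written to emphasize it is < 1

# Hypothetical 2-state, 2-action MDP.
# P[s][a] is a list of (probability, next_state, reward) outcomes.
P = {
    0: {0: [(1.0, 0, 0.0)],
        1: [(0.8, 1, 5.0), (0.2, 0, 0.0)]},
    1: {0: [(1.0, 0, 1.0)],
        1: [(1.0, 1, 2.0)]},
}

def value_iteration(P, gamma=0.9, eps=1e-6):
    """Compute the optimal value function and a greedy policy."""
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            # Bellman backup: best expected one-step return from s.
            q = [sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                 for outcomes in P[s].values()]
            best = max(q)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            break
    # Extract a greedy policy from the converged values.
    policy = {s: max(P[s],
                     key=lambda a: sum(p * (r + gamma * V[s2])
                                       for p, s2, r in P[s][a]))
              for s in P}
    return V, policy

V, policy = value_iteration(P)
```

In this toy problem the greedy policy gambles on the high-reward transition from state 0 and then returns from state 1 to try again; the point is only that the table-based backup touches every state on every sweep, which is the computational burden that the abstraction, aggregation, and decomposition techniques surveyed in the paper try to avoid.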

Related research:

- Efficient Planning in Large MDPs with Weak Linear Function Approximation (07/13/2020)
- Structured Reachability Analysis for Markov Decision Processes (01/30/2013)
- Symbolic Dynamic Programming for Discrete and Continuous State MDPs (02/14/2012)
- Planning under Continuous Time and Resource Uncertainty: A Challenge for AI (12/12/2012)
- Correlated Action Effects in Decision Theoretic Regression (02/06/2013)
- A Theory of Goal-Oriented MDPs with Dead Ends (10/16/2012)
- Planning under periodic observations: bounds and bounding-based solutions (08/05/2022)
