Exploiting Submodular Value Functions For Scaling Up Active Perception

09/21/2020
by Yash Satsangi, et al.

In active perception tasks, an agent aims to select sensory actions that reduce its uncertainty about one or more hidden variables. While partially observable Markov decision processes (POMDPs) provide a natural model for such problems, reward functions that directly penalize uncertainty in the agent's belief can violate the piecewise-linear and convex (PWLC) property of the value function required by most POMDP planners. Furthermore, as the number of sensors available to the agent grows, the computational cost of POMDP planning grows exponentially, making planning with traditional methods infeasible. In this article, we address the twofold challenge of modeling and planning for active perception tasks. We show the mathematical equivalence of ρPOMDP and POMDP-IR, two frameworks for modeling active perception tasks that restore the PWLC property of the value function. To plan efficiently for active perception tasks, we identify and exploit the independence properties of POMDP-IR to reduce the computational cost of solving POMDP-IR (and ρPOMDP). We propose greedy point-based value iteration (PBVI), a new POMDP planning method that uses greedy maximization to greatly improve scalability in the action space of an active perception POMDP. Furthermore, we show that, under certain conditions including submodularity, the value function computed using greedy PBVI is guaranteed to have bounded error with respect to the optimal value function, and we establish the conditions under which the value function of an active perception POMDP is guaranteed to be submodular. Finally, we present a detailed empirical analysis on a dataset collected from a multi-camera tracking system deployed in a shopping mall. Our method achieves performance similar to existing methods at a fraction of the computational cost, leading to better scalability for solving active perception tasks.
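
The scalability result rests on greedy maximization of a submodular set function: rather than searching over all subsets of sensors, the agent repeatedly adds the sensor with the largest marginal gain, and submodularity guarantees that this greedy set is within a constant factor (1 - 1/e) of the optimal one (Nemhauser et al., 1978). The sketch below is an illustrative Python implementation of that generic greedy step under a cardinality budget, not the paper's greedy PBVI; the names greedy_maximize, f, ground_set, and budget, as well as the toy coverage objective, are assumptions made for illustration.

```python
# Illustrative sketch only (not the paper's greedy PBVI): generic greedy
# maximization of a monotone submodular set function f under a cardinality
# budget. For such f, the greedy set attains at least (1 - 1/e) of the
# optimal value (Nemhauser, Wolsey & Fisher, 1978).

def greedy_maximize(f, ground_set, budget):
    """Greedily pick up to `budget` elements, each step adding the element
    with the largest marginal gain f(S ∪ {e}) - f(S)."""
    selected = set()
    for _ in range(min(budget, len(ground_set))):
        remaining = ground_set - selected
        best = max(remaining, key=lambda e: f(selected | {e}) - f(selected))
        selected.add(best)
    return selected


if __name__ == "__main__":
    # Toy sensor-selection objective: number of distinct cells covered by the
    # chosen sensors (coverage functions are monotone and submodular).
    coverage = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}}
    f = lambda S: len(set().union(*(coverage[s] for s in S))) if S else 0
    print(greedy_maximize(f, set(coverage), budget=2))
```

In greedy PBVI, this kind of marginal-gain selection stands in for exhaustive maximization over sensor subsets during planning, which is what yields the improved scalability in the action space described above.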

