Inverse Rational Control with Partially Observable Continuous Nonlinear Dynamics

08/13/2019
by Saurabh Daptardar, et al.

Continuous control and planning remain a major challenge in robotics and machine learning. Neuroscience offers the possibility of learning from animal brains, which implement highly successful controllers, but it is unclear how to relate an animal's behavior to control principles. Animals may not always act optimally from the perspective of an external observer, yet may still act rationally: we hypothesize that animals choose the actions with the highest expected future subjective value according to their own internal model of the world. Their actions thus result from solving a different optimal control problem than the one on which they are evaluated in neuroscience experiments. Under this assumption, we propose a novel framework of model-based inverse rational control that learns the agent's internal model that best explains its actions in a task described as a partially observable Markov decision process (POMDP). In this approach we first learn optimal policies generalized over the entire model space of dynamics and subjective rewards, using an extended Kalman filter to represent the belief space, a neural network in the actor-critic framework to optimize the policy, and a simplified basis for the parameter space. We then compute the model that maximizes the likelihood of the experimentally observable data, comprising the agent's sensory observations and chosen actions. Our proposed method recovers the true model of simulated agents within the theoretical error bounds set by limited data. We illustrate this method by applying it to a complex naturalistic task currently used in neuroscience experiments. This approach provides a foundation for interpreting the behavioral and neural dynamics of highly adapted controllers in animal brains.
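The abstract compresses the method into two stages; a minimal Python sketch may make the second, inference stage concrete. The interface below is hypothetical: the names model.f, model.h, model.F_jac, model.H_jac, model.Q, model.R, model.initial_belief, policy, and phi are illustrative assumptions, not taken from the paper's code. An extended Kalman filter tracks the agent's belief under a candidate internal model phi, and the log-likelihood of the recorded actions accumulates under a belief- and model-conditioned Gaussian policy; the generalized policy learned in the first stage is treated as given.

import numpy as np

def ekf_update(mu, Sigma, u, y, f, h, F_jac, H_jac, Q, R):
    # Predict: propagate the belief mean through the nonlinear dynamics
    # and the covariance through their Jacobian.
    F = F_jac(mu, u)
    mu_pred = f(mu, u)
    Sigma_pred = F @ Sigma @ F.T + Q
    # Correct: fold in the new observation via the linearized model.
    H = H_jac(mu_pred)
    S = H @ Sigma_pred @ H.T + R
    K = Sigma_pred @ H.T @ np.linalg.inv(S)
    mu_new = mu_pred + K @ (y - h(mu_pred))
    Sigma_new = (np.eye(mu.size) - K @ H) @ Sigma_pred
    return mu_new, Sigma_new

def trajectory_log_likelihood(phi, actions, observations, policy, model):
    # Likelihood of the recorded actions under candidate internal-model
    # parameters phi; the EKF belief stands in for the latent state.
    mu, Sigma = model.initial_belief(phi)
    ll = 0.0
    for u, y in zip(actions, observations):
        # The pretrained policy is conditioned on the belief AND on phi,
        # and returns a Gaussian over continuous actions (actor-critic).
        a_mean, a_cov = policy(np.concatenate([mu, Sigma.ravel()]), phi)
        diff = u - a_mean
        ll -= 0.5 * (diff @ np.linalg.solve(a_cov, diff)
                     + np.log(np.linalg.det(2.0 * np.pi * a_cov)))
        mu, Sigma = ekf_update(
            mu, Sigma, u, y,
            lambda x, a: model.f(x, a, phi), lambda x: model.h(x, phi),
            lambda x, a: model.F_jac(x, a, phi), lambda x: model.H_jac(x, phi),
            model.Q(phi), model.R(phi))
    return ll

# The second stage maximizes this likelihood over candidate models, e.g.:
# phi_hat = max(candidate_phis, key=lambda p: trajectory_log_likelihood(
#     p, actions, observations, policy, model))

Any optimizer over phi, such as grid search or gradient ascent, can then pick out the internal model that best explains the observed behavior.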

Related research

- Inverse POMDP: Inferring What You Think from What You Do (05/24/2018)
- Probabilistic inverse optimal control with local linearization for non-linear partially observable systems (03/29/2023)
- Belief dynamics extraction (02/02/2019)
- Computing the Value of Computation for Planning (11/07/2018)
- The Actor Search Tree Critic (ASTC) for Off-Policy POMDP Learning in Medical Decision Making (05/29/2018)
- SIPOMDPLite-Net: Lightweight, Self-Interested Learning and Planning in POSGs with Sparse Interactions (02/22/2022)
- Can an AI agent hit a moving target? (10/06/2021)
