Goal-Directed Planning for Habituated Agents by Active Inference Using a Variational Recurrent Neural Network

by Takazumi Matsumoto et al.

It is crucial to ask how agents can achieve goals by generating action plans using only partial models of the world acquired through habituated sensory-motor experiences. Although many existing robotics studies use a forward model framework, such frameworks face generalization issues when the system has many degrees of freedom. The current study shows that the predictive coding (PC) and active inference (AIF) frameworks, which employ a generative model, can achieve better generalization by learning a prior distribution in a low-dimensional latent state space representing probabilistic structures extracted from well-habituated sensory-motor trajectories. In our proposed model, learning is carried out by inferring optimal latent variables as well as synaptic weights for maximizing the evidence lower bound, while goal-directed planning is accomplished by inferring latent variables for maximizing the estimated lower bound. Our proposed model was evaluated with both simple and complex robotic tasks in simulation, which demonstrated sufficient generalization in learning with limited training data when an intermediate value was set for the regularization coefficient. Furthermore, comparative simulation results show that the proposed model outperforms a conventional forward model in goal-directed planning, because the learned prior confines the search for motor plans within the range of habituated trajectories.
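The abstract describes maximizing an evidence lower bound (ELBO) in which a regularization coefficient weights the divergence between the approximate posterior and the learned prior over latent variables. The following is a minimal sketch of that objective under simplifying assumptions (diagonal Gaussians, a squared-error reconstruction term); the function names and the coefficient `w` are illustrative, not the paper's actual implementation.

```python
import numpy as np

def gaussian_kl(mu_q, sigma_q, mu_p, sigma_p):
    """KL divergence KL(q || p) between two diagonal Gaussians, summed over dims."""
    return np.sum(
        np.log(sigma_p / sigma_q)
        + (sigma_q ** 2 + (mu_q - mu_p) ** 2) / (2.0 * sigma_p ** 2)
        - 0.5
    )

def evidence_lower_bound(x, x_pred, mu_q, sigma_q, mu_p, sigma_p, w):
    """ELBO = reconstruction accuracy - w * KL(posterior || learned prior).

    w plays the role of the regularization coefficient: small w lets the
    posterior deviate freely from the prior (weak habituation constraint),
    large w pulls plans tightly toward habituated trajectories.
    """
    # Gaussian log-likelihood of the observation, up to an additive constant
    reconstruction = -0.5 * np.sum((x - x_pred) ** 2)
    kl = gaussian_kl(mu_q, sigma_q, mu_p, sigma_p)
    return reconstruction - w * kl
```

In this reading, goal-directed planning corresponds to fixing the learned weights and searching over the latent variables (`mu_q`, `sigma_q`) to maximize this bound with the goal state substituted as the target observation, so that an intermediate `w` trades off goal accuracy against staying within habituated behavior.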


Goal-directed Planning and Goal Understanding by Active Inference: Evaluation Through Simulated and Physical Robot Experiments

We show that goal-directed action planning and generation in a teleologi...

Bidirectional Interaction between Visual and Motor Generative Models using Predictive Coding and Active Inference

In this work, we build upon the Active Inference (AIF) and Predictive Co...

Initialization of Latent Space Coordinates via Random Linear Projections for Learning Robotic Sensory-Motor Sequences

Robot kinematics data, despite being a high dimensional process, is high...

Learning to Embed Probabilistic Structures Between Deterministic Chaos and Random Process in a Variational Bayes Predictive-Coding RNN

This study introduces a stochastic predictive-coding RNN model that can ...

Generating goal-directed visuomotor plans based on learning using a predictive coding type deep visuomotor recurrent neural network model

The current paper presents how a predictive coding type deep recurrent n...

Goal-Directed Behavior under Variational Predictive Coding: Dynamic Organization of Visual Attention and Working Memory

Mental simulation is a critical cognitive function for goal-directed beh...

Learning, Planning, and Control in a Monolithic Neural Event Inference Architecture

We introduce a dynamic artificial neural network-based (ANN) adaptive in...
