Changing the Environment Based on Empowerment as Intrinsic Motivation

06/03/2014
by Christoph Salge, et al.

One aspect of intelligence is the ability to restructure your own environment so that the world you live in becomes more beneficial to you. In this paper we investigate how the information-theoretic measure of agent empowerment can provide a task-independent, intrinsic motivation to restructure the world. We show how changes in embodiment and in the environment change the resulting behaviour of the agent and the artefacts left in the world. For this purpose, we introduce an approximation of the established empowerment formalism based on sparse sampling, which is simpler and significantly faster to compute for deterministic dynamics. Sparse sampling also introduces a degree of randomness into the decision-making process, which turns out to be beneficial in some cases. We then utilize the measure to generate agent behaviour for different agent embodiments in a Minecraft-inspired three-dimensional block world. The paradigmatic results demonstrate that empowerment can be used as a suitable generic intrinsic motivation not only to generate actions in given static environments, as shown in the past, but also to modify existing environmental conditions. In doing so, the emerging strategies for modifying the agent's environment turn out to be meaningful with respect to the agent's specific capabilities, i.e., de facto to its embodiment.

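To illustrate how a sparse-sampling approximation of empowerment might look in code, here is a minimal sketch in Python. It is not the authors' implementation: the transition function step(state, action), the default parameters, and the greedy tie-breaking policy are illustrative assumptions, and states are assumed to be hashable. The sketch relies on the fact that, for deterministic dynamics, empowerment reduces to the logarithm of the number of distinct reachable states, which sparse sampling estimates by counting the end states of randomly drawn action sequences.

import math
import random

def sparse_empowerment(state, actions, step, horizon=3, n_samples=200):
    """Approximate the n-step empowerment of `state` under deterministic dynamics.

    For a deterministic world the channel from action sequences to end states
    is noiseless, so empowerment reduces to the log of the number of distinct
    states reachable within `horizon` steps. Rather than enumerating all
    len(actions)**horizon sequences, we draw `n_samples` random sequences
    (sparse sampling) and count the distinct end states actually reached.
    """
    reached = set()
    for _ in range(n_samples):
        s = state
        for _ in range(horizon):
            s = step(s, random.choice(actions))
        reached.add(s)
    return math.log2(len(reached))

def empowered_action(state, actions, step, horizon=3, n_samples=200):
    """Greedily pick the action whose successor state has the highest
    approximate empowerment; ties are broken at random, which contributes
    the kind of stochasticity in decision making the abstract mentions."""
    best_actions, best_value = [], float("-inf")
    for a in actions:
        value = sparse_empowerment(step(state, a), actions, step, horizon, n_samples)
        if value > best_value:
            best_actions, best_value = [a], value
        elif value == best_value:
            best_actions.append(a)
    return random.choice(best_actions)

In a block-world setting, step(state, action) would encode both movement and block placement or removal, so maximising the sampled empowerment naturally favours actions that open up more reachable configurations of agent and world.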
