Scale-invariant temporal history (SITH): optimal slicing of the past in an uncertain world

12/19/2017
by Tyler A. Spears, et al.

In both the human brain and any general artificial intelligence (AI), a representation of the past is necessary to predict the future. However, perfect storage of all experiences is not possible. One possibility, utilized in many applications, is to retain information about the past in a buffer. A limitation of this approach is that although events in the buffer are represented with perfect accuracy, the resources necessary to represent information at a particular time scale go up rapidly. Here we present a neurally plausible, compressed, scale-free memory representation we call Scale-Invariant Temporal History (SITH). This representation covers an exponentially large period of time in the past at the cost of sacrificing temporal accuracy for events further in the past. The form of this decay is scale-invariant and can be shown to be optimal in that it is able to respond to worlds with a wide range of time scales. We demonstrate the utility of this representation in learning to play a simple video game. In this environment, SITH exhibits better learning performance than a fixed-size buffer history representation. Whereas the buffer performs well only as long as the temporal dependencies fit within the buffer, SITH performs well over a much larger range of time scales for the same amount of resources. Finally, we discuss how the application of SITH, along with other human-inspired models of cognition, could improve reinforcement and machine learning algorithms in general.
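The abstract stays at a high level, but the core trade-off it describes can be illustrated with a small sketch: instead of a buffer that stores the last N observations exactly, use a fixed bank of leaky integrators whose time constants form a geometric series. A fixed number of units then spans an exponentially long stretch of the past, with fine resolution for recent events and coarser resolution for older ones. This is only an illustrative approximation under assumed parameters, not the paper's actual SITH construction (which additionally inverts a Laplace-like representation to recover a fuzzy timeline); the class and parameter names below are hypothetical.

```python
import numpy as np

class ScaleFreeHistory:
    """Illustrative sketch of a log-compressed temporal history.

    A bank of leaky integrators with geometrically spaced time
    constants: recent input is represented with fine temporal
    resolution, older input with coarser resolution, so n_units
    units cover a past of roughly tau_min * growth**(n_units - 1)
    time steps. Hypothetical toy code, not the paper's SITH.
    """

    def __init__(self, n_units=8, tau_min=1.0, growth=2.0):
        # Geometric series of time constants: tau_min, tau_min*growth, ...
        self.taus = tau_min * growth ** np.arange(n_units)
        self.state = np.zeros(n_units)

    def step(self, x):
        # Each unit is an exponential moving average with its own decay;
        # short-tau units forget quickly, long-tau units forget slowly.
        decay = np.exp(-1.0 / self.taus)
        self.state = decay * self.state + (1.0 - decay) * x
        return self.state
```

After an impulse input followed by silence, the short-time-constant units decay toward zero almost immediately while the long-time-constant units retain a trace, which is the sense in which temporal accuracy is traded away for events further in the past; a fixed-size buffer of the same length would instead drop the event entirely once it scrolls out.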

Related research

- Predicting the future with a scale-invariant temporal memory for the past (01/26/2021): In recent years it has become clear that the brain maintains a temporal ...
- New Millennium AI and the Convergence of History (06/19/2006): Artificial Intelligence (AI) has recently become a real formal science: ...
- Optimally fuzzy temporal memory (11/22/2012): Any learner with the ability to predict the future of a structured time-...
- Contrastive Initial State Buffer for Reinforcement Learning (09/18/2023): In Reinforcement Learning, the trade-off between exploration and exploit...
- Generic construction of scale-invariantly coarse grained memory (06/12/2014): Encoding temporal information from the recent past as spatially distribu...
- Temporal Alignment for History Representation in Reinforcement Learning (04/07/2022): Environments in Reinforcement Learning are usually only partially observ...
- Scale-invariant representation of machine learning (09/07/2021): The success of machine learning stems from its structured data represent...
