Learning abstract perceptual notions: the example of space

by Alexander V. Terekhov et al.

Humans are extremely swift learners. We are able to grasp highly abstract notions, whether they come from art perception or pure mathematics. Current machine learning techniques demonstrate astonishing results in extracting patterns from information. Yet the abstract notions we possess are more than just statistical patterns in the incoming information. Sensorimotor theory suggests that they represent functions, or laws, describing how the information can be transformed; in other words, they represent the statistics of sensorimotor changes rather than of the sensory inputs themselves. The aim of our work is to suggest a way for machine learning and sensorimotor theory to benefit from each other so as to pave the way toward new horizons in learning. We show in this study that a highly abstract notion, that of space, can be seen as a collection of laws of transformation of sensory information, and that these laws could in theory be learned by a naive agent. As an illustration, we present a one-dimensional simulation in which an agent extracts spatial knowledge in the form of internalized ("sensible") rigid displacements. The agent uses them to encode its own displacements in a way that is isometrically related to external space. Though the algorithm allowing acquisition of rigid displacements is designed ad hoc, we believe it can stimulate the development of unsupervised learning techniques leading to similar results.
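The core idea of the abstract can be illustrated with a minimal sketch (this toy setup is not the paper's algorithm, and the names `sense` and `displacement_law` are hypothetical): on a circular one-dimensional world, a rigid displacement of the agent induces a fixed transformation of its sensory input that is independent of the environment's content, and these transformations compose like the displacements themselves.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a circular 1-D environment of random
# intensities; the agent's sensory input is the ring as seen from
# its current position.
env = rng.random(50)

def sense(position):
    """Sensory input of the agent at a given position on the ring."""
    return np.roll(env, -position)

# A rigid displacement by d transforms the sensory input by a fixed
# law (here, a circular shift) that does not depend on the
# environment's content:
def displacement_law(s, d):
    return np.roll(s, -d)

p, d = 7, 5
assert np.allclose(sense(p + d), displacement_law(sense(p), d))

# The laws compose like the displacements themselves: applying the
# law for d1 and then for d2 equals the law for d1 + d2. This group
# structure is what would let internalized "sensible" displacements
# encode the agent's motion isometrically with external space.
d1, d2 = 3, 9
s = sense(p)
assert np.allclose(displacement_law(displacement_law(s, d1), d2),
                   displacement_law(s, d1 + d2))
```

The assertions hold for any environment `env`, which is the point: the transformation law is a property of the agent's displacements, not of the particular sensory content.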


Space as an invention of biological organisms

The question of the nature of space around us has occupied thinkers sinc...

