Learning and Acting in Peripersonal Space: Moving, Reaching, and Grasping

by Jonathan Juett, et al.

The young infant explores its body, its sensorimotor system, and the immediately accessible parts of its environment, over the course of a few months creating a model of peripersonal space useful for reaching and grasping objects around it. Drawing on constraints from the empirical literature on infant behavior, we present a preliminary computational model of this learning process, implemented and evaluated on a physical robot. The learning agent explores the relationship between the configuration space of the arm, sensing joint angles through proprioception, and its visual perceptions of the hand and grippers. The resulting knowledge is represented as the peripersonal space (PPS) graph, where nodes represent states of the arm, edges represent safe movements, and paths represent safe trajectories from one pose to another. In our model, the learning process is driven by intrinsic motivation. When repeatedly performing an action, the agent learns the typical result, but also detects unusual outcomes, and is motivated to learn how to make those unusual results reliable. Arm motions typically leave the static background unchanged, but occasionally bump an object, changing its static position. The reach action is learned as a reliable way to bump and move an object in the environment. Similarly, once a reliable reach action is learned, it typically makes a quasi-static change in the environment, moving an object from one static position to another. The unusual outcome is that the object is accidentally grasped (thanks to the innate Palmar reflex), and thereafter moves dynamically with the hand. Learning to make grasps reliable is more complex than for reaches, but we demonstrate significant progress. Our current results are steps toward autonomous sensorimotor learning of motion, reaching, and grasping in peripersonal space, based on unguided exploration and intrinsic motivation.
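The abstract describes the peripersonal space (PPS) graph as nodes for arm configurations, edges for safe movements, and paths for safe trajectories. A minimal sketch of such a structure is shown below; the class name, the joint-angle tuples, and the breadth-first path search are our own illustrative choices, not details taken from the paper's implementation.

```python
from collections import deque

class PPSGraph:
    """Sketch of a peripersonal space (PPS) graph: nodes are arm
    configurations (here, tuples of joint angles) and edges record
    movements between configurations that were observed to be safe."""

    def __init__(self):
        self.edges = {}  # node -> set of neighboring nodes

    def add_edge(self, a, b):
        # Record that moving directly between poses a and b is safe.
        self.edges.setdefault(a, set()).add(b)
        self.edges.setdefault(b, set()).add(a)

    def safe_path(self, start, goal):
        """Breadth-first search for a safe trajectory: a path whose
        every edge is a known-safe movement."""
        frontier = deque([[start]])
        visited = {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            for nxt in self.edges.get(path[-1], ()):
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append(path + [nxt])
        return None  # no safe trajectory known between the two poses

# Toy example with 2-DOF configurations (joint angles in degrees).
g = PPSGraph()
g.add_edge((0, 0), (10, 0))
g.add_edge((10, 0), (10, 20))
g.add_edge((10, 20), (30, 20))
print(g.safe_path((0, 0), (30, 20)))
# → [(0, 0), (10, 0), (10, 20), (30, 20)]
```

Any graph search over safe edges would serve here; the point is only that trajectories are composed from individually verified movements rather than planned in open space.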




