Online Motion Generation with Sensory Information and Instructions by Hierarchical RNN

12/14/2017
by Kanata Suzuki, et al.

This paper proposes an approach that enables robots to perform co-working tasks alongside humans using neuro-dynamical models. The proposed model comprises two components: an autoencoder and a hierarchical recurrent neural network (RNN). We trained the hierarchical RNN with various sensory-motor sequences and instructions. To give the model the interactive ability to switch and combine appropriate motions according to visual information and external instructions, we embedded cyclic neuronal dynamics in the network. To evaluate the model, we designed a cloth-folding task consisting of four short folding motions and three patterns of instruction that indicate the direction of each short motion. The results showed that the robot can perform the task by switching or combining the short motions according to the instructions and visual information. We also showed that the proposed model acquired relationships between the instructions and the sensory-motor information in its internal neuronal dynamics. Supplementary video: https://www.youtube.com/watch?v=oUBTJNpXW4A
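The abstract only gives the high-level design: an autoencoder that compresses camera images into visual features, and a hierarchical RNN that consumes those features together with motor and instruction signals. The sketch below is a minimal PyTorch illustration of that kind of architecture, not the authors' implementation; the class names, layer sizes, the use of LSTM cells, and the exact wiring between the fast (sensory-motor) and slow (instruction-integrating) levels are all assumptions made here for illustration.

```python
import torch
import torch.nn as nn


class ImageAutoencoder(nn.Module):
    """Compresses a raw image vector into a low-dimensional visual feature.
    Hypothetical sizes; the paper's actual encoder is not given in this text."""
    def __init__(self, image_dim=1024, feature_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(image_dim, 128), nn.Tanh(),
            nn.Linear(128, feature_dim), nn.Tanh())
        self.decoder = nn.Sequential(
            nn.Linear(feature_dim, 128), nn.Tanh(),
            nn.Linear(128, image_dim))

    def forward(self, image):
        feature = self.encoder(image)
        return self.decoder(feature), feature


class HierarchicalRNN(nn.Module):
    """Two-level RNN: a fast level predicts the next motor command from
    sensory-motor input, while a slow level integrates the instruction
    signal and modulates the fast level through its hidden state."""
    def __init__(self, motor_dim=8, feature_dim=16, instr_dim=3,
                 fast_units=64, slow_units=32):
        super().__init__()
        self.fast = nn.LSTMCell(motor_dim + feature_dim + slow_units, fast_units)
        self.slow = nn.LSTMCell(instr_dim + fast_units, slow_units)
        self.motor_out = nn.Linear(fast_units, motor_dim)

    def step(self, motor, feature, instr, state):
        (hf, cf), (hs, cs) = state
        # Fast level sees current motor command, visual feature, and the
        # slow level's hidden state (top-down modulation).
        hf, cf = self.fast(torch.cat([motor, feature, hs], dim=-1), (hf, cf))
        # Slow level sees the instruction and the fast level's activity.
        hs, cs = self.slow(torch.cat([instr, hf], dim=-1), (hs, cs))
        return self.motor_out(hf), ((hf, cf), (hs, cs))


# Usage sketch: one closed-loop step with a one-hot instruction,
# standing in for one of the three instruction patterns in the task.
ae, rnn = ImageAutoencoder(), HierarchicalRNN()
B = 1
state = ((torch.zeros(B, 64), torch.zeros(B, 64)),
         (torch.zeros(B, 32), torch.zeros(B, 32)))
image = torch.rand(B, 1024)
motor = torch.zeros(B, 8)
instr = torch.tensor([[1.0, 0.0, 0.0]])
_, feature = ae(image)
next_motor, state = rnn.step(motor, feature, instr, state)
```

At run time, the predicted motor command would be fed back as the next step's motor input, so the recurrent state can carry the cyclic dynamics that let the model switch or combine the short folding motions when the instruction changes.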
