Learning Human-arm Reaching Motion Using IMU in Human-Robot Collaboration

by Nadav D. Kahanowich, et al.

Many tasks performed by two humans require mutual interaction between arms, such as handing over tools and objects. For a robotic arm to interact with a human in the same way, it must reason about the location of the human arm in real time. Furthermore, to achieve interaction in a timely manner, the robot must be able to predict the final target of the human in order to plan and initiate motion beforehand. In this paper, we explore the use of a low-cost wearable device equipped with two inertial measurement units (IMU) for learning reaching motion in real-time applications of Human-Robot Collaboration (HRC). A wearable device can replace or complement visual perception in cases of poor lighting or occlusions in a cluttered environment. We first train a neural-network model to estimate the current location of the arm. Then, we propose a novel model based on a recurrent neural network to predict the future target of the human arm during motion in real time. Early prediction of the target grants the robot sufficient time to plan and initiate motion while the human is still moving. The accuracies of the models are analyzed with respect to the features included in the motion representation. Through experiments and real demonstrations with a robotic arm, we show that sufficient accuracy is achieved for feasible HRC without any visual perception. Once trained, the system can be deployed in various spaces with no additional effort. The models exhibit high accuracy for various initial poses of the human arm. Moreover, the trained models are shown to provide high success rates with additional human participants not included in the model training.
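The core idea, predicting the final reaching target from a stream of IMU readings before the motion completes, can be sketched with a minimal recurrent model. The sketch below is an illustration only: the layer sizes, feature set (here assumed to be 12 values per step from two 6-axis IMUs), and the pure-Python Elman cell are assumptions, not the paper's trained architecture.

```python
import math
import random

random.seed(0)

def rand_matrix(rows, cols, scale=0.1):
    # Small random weights; a real system would load trained parameters.
    return [[random.uniform(-scale, scale) for _ in range(cols)]
            for _ in range(rows)]

def matvec(M, v):
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

class TargetPredictorRNN:
    """Toy Elman RNN: h_t = tanh(W_x x_t + W_h h_{t-1}); target = W_o h_T.

    Maps a partial sequence of IMU feature vectors to a predicted 3-D
    target position of the hand, so a prediction is available at every
    time step of the ongoing human motion.
    """
    def __init__(self, n_features, n_hidden, n_out=3):
        self.W_x = rand_matrix(n_hidden, n_features)
        self.W_h = rand_matrix(n_hidden, n_hidden)
        self.W_o = rand_matrix(n_out, n_hidden)
        self.n_hidden = n_hidden

    def predict(self, imu_sequence):
        h = [0.0] * self.n_hidden
        for x in imu_sequence:  # one IMU sample per time step
            pre = [a + b for a, b in
                   zip(matvec(self.W_x, x), matvec(self.W_h, h))]
            h = [math.tanh(p) for p in pre]
        return matvec(self.W_o, h)  # predicted (x, y, z) target

# Hypothetical stream: 12 features per step (2 IMUs x 6 axes), 20 steps
# observed so far; the robot could query predict() after every new sample.
model = TargetPredictorRNN(n_features=12, n_hidden=16)
stream = [[random.uniform(-1, 1) for _ in range(12)] for _ in range(20)]
target_xyz = model.predict(stream)
print(len(target_xyz))  # 3
```

Because the hidden state is updated sample by sample, the same model yields an increasingly confident target estimate as more of the reaching motion is observed, which is what gives the robot lead time to plan its own motion.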


