ARTiS: Appearance-based Action Recognition in Task Space for Real-Time Human-Robot Collaboration

10/18/2016
by   Markus Eich, et al.

To have a robot actively support a human during a collaborative task, it is crucial that the robot can identify the current action in order to predict the next one. Common approaches make use of high-level knowledge, such as object affordances, semantics, or an understanding of actions in terms of pre- and post-conditions. These approaches often require hand-coded a priori knowledge, or time- and resource-intensive supervised learning techniques. We propose to reframe this problem as an appearance-based place recognition problem. In our framework, we regard sequences of visual images of human actions as a map, in analogy to the visual place recognition problem. When observing the task a second time, our approach recognizes previously observed actions in a one-shot learning fashion and can thereby localize the current observation within the task space. We propose two new methods for creating and aligning action observations within a task map. We compare and verify our approaches on real data of humans assembling several types of IKEA flat packs.
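The analogy to visual place recognition suggests a concrete pipeline: frames of a previously observed task serve as a "map", and a new observation is localized against it by matching short runs of consecutive frames rather than single images. The sketch below illustrates this idea in the spirit of SeqSLAM-style sequence matching; it is an assumption for illustration only, not the paper's exact method, and all function names and parameters (descriptor size, window length) are hypothetical.

```python
# Minimal sketch of appearance-based sequence matching against a "task map",
# in the spirit of visual place recognition (e.g. SeqSLAM). This is an
# illustrative assumption, not the authors' published pipeline.
import numpy as np

def descriptor(frame, size=(8, 8)):
    """Downsample a grayscale frame into a small, normalized patch vector."""
    h, w = frame.shape
    ys = np.linspace(0, h, size[0] + 1, dtype=int)
    xs = np.linspace(0, w, size[1] + 1, dtype=int)
    d = np.array([[frame[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
                   for j in range(size[1])] for i in range(size[0])]).ravel()
    return (d - d.mean()) / (d.std() + 1e-8)

def match_sequence(task_map, query, window=5):
    """For each query frame, return the index of the best-matching map frame,
    scored over a window of consecutive frames (sequence coherence) instead
    of a single frame in isolation."""
    M = np.stack([descriptor(f) for f in task_map])  # (N, D) map descriptors
    Q = np.stack([descriptor(f) for f in query])     # (T, D) query descriptors
    # Pairwise difference matrix: rows = map frames, cols = query frames.
    D = np.linalg.norm(M[:, None, :] - Q[None, :, :], axis=2)
    N, T = D.shape
    matches = []
    for t in range(T - window):
        # Score each candidate map index by summing differences along a
        # diagonal, which assumes roughly similar execution speed both times.
        scores = [D[n:n + window, t:t + window].diagonal().sum()
                  for n in range(N - window)]
        matches.append(int(np.argmin(scores)))
    return matches

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    task_map = [rng.random((64, 64)) for _ in range(40)]  # first observation
    query = task_map[10:25]                               # replayed segment
    print(match_sequence(task_map, query)[:3])            # indices near 10
```

Scoring a window of consecutive frames rather than individual ones is what makes such matching robust to momentary ambiguity between visually similar actions; the diagonal scoring above assumes the task is performed at roughly the same speed in both observations.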


