Multi-Modal Recognition of Worker Activity for Human-Centered Intelligent Manufacturing

08/20/2019
by Wenjin Tao, et al.

In a human-centered intelligent manufacturing system, sensing and understanding of the worker's activity are the primary tasks. In this paper, we propose a novel multi-modal approach for worker activity recognition that leverages information from different sensors and in different modalities. Specifically, a smart armband and a visual camera are applied to capture Inertial Measurement Unit (IMU) signals and videos, respectively. For the IMU signals, we design two novel feature transform mechanisms, in the frequency and spatial domains respectively, that assemble the captured IMU signals as images, allowing convolutional neural networks to learn the most discriminative features. Along with these two modalities, we propose two further modalities for the video data, at the video-frame and video-clip levels, respectively. Each of the four modalities returns a probability distribution over the activity classes, and these distributions are fused to produce the final worker activity classification. A worker activity dataset is established, which at present contains six common activities in assembly tasks, i.e., grab a tool/part, hammer a nail, use a power screwdriver, rest arms, turn a screwdriver, and use a wrench. The developed multi-modal approach is evaluated on this dataset and achieves recognition accuracies as high as 97%.
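To make the pipeline concrete, below is a minimal Python sketch of the two ideas the abstract describes: assembling multi-channel IMU signals into an image-like array that a CNN could consume, and late fusion of the per-modality probability distributions. The abstract does not specify the exact transforms or the fusion rule, so the FFT-magnitude row stacking, the simple averaging fusion, the function names (`imu_to_frequency_image`, `fuse_modalities`), and the channel/width settings are all illustrative assumptions, not the authors' method.

```python
import numpy as np

def imu_to_frequency_image(imu_signals: np.ndarray, width: int = 32) -> np.ndarray:
    """Assemble multi-channel IMU signals into a 2-D 'image' in the
    frequency domain so a CNN can learn features from it.

    imu_signals: array of shape (n_channels, n_samples), e.g. the
    accelerometer and gyroscope axes from the armband.

    NOTE: the paper's exact transform is not given in the abstract; this
    sketch stacks each channel's FFT magnitude spectrum as one image row,
    truncating or zero-padding every row to a fixed width.
    """
    rows = []
    for channel in imu_signals:
        spectrum = np.abs(np.fft.rfft(channel))
        row = np.zeros(width)
        row[: min(width, spectrum.size)] = spectrum[:width]
        rows.append(row)
    image = np.stack(rows)                   # shape: (n_channels, width)
    return image / (image.max() + 1e-8)      # normalize to [0, 1]


def fuse_modalities(distributions: list[np.ndarray]) -> int:
    """Late fusion of per-modality class-probability distributions.

    Each of the four modalities outputs a distribution over the activity
    classes; the fusion rule is not stated in the abstract, so this sketch
    simply averages the distributions and takes the argmax.
    """
    fused = np.mean(np.stack(distributions), axis=0)
    return int(np.argmax(fused))             # predicted activity index


# Usage example with synthetic data (8 IMU channels is an assumption).
rng = np.random.default_rng(0)
imu = rng.standard_normal((8, 256))                    # 8 channels, 256 samples
image = imu_to_frequency_image(imu)                    # (8, 32) pseudo-image
dists = [rng.dirichlet(np.ones(6)) for _ in range(4)]  # 4 modalities, 6 classes
print(image.shape, fuse_modalities(dists))
```

In practice each of the four images/streams would be fed to its own CNN whose softmax output plays the role of the synthetic `dists` above; averaging is the simplest fusion baseline, and a weighted or learned fusion would slot into `fuse_modalities` without changing the surrounding structure.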
