Making Third Person Techniques Recognize First-Person Actions in Egocentric Videos

by Sagar Verma, et al.

We focus on first-person action recognition from egocentric videos. Unlike in the third-person domain, researchers have divided first-person actions into two categories, those involving hand-object interactions and those without, and have developed separate techniques for the two. Further, it has been argued that the traditional cues used for third-person action recognition do not suffice, so egocentric-specific features, such as head motion and handled objects, have been used instead. In contrast to these state-of-the-art approaches, we show that a regular two-stream Convolutional Neural Network (CNN) with a Long Short-Term Memory (LSTM) architecture, having separate streams for objects and motion, can generalize to all categories of first-person actions. The proposed approach unifies the features learned across all action categories, making the architecture much more practical. In an important observation, we note that the objects visible in egocentric videos are much smaller than those in typical ImageNet images. We show that the performance of the proposed model improves after cropping and resizing frames so that object sizes become comparable to those of ImageNet objects. Our experiments on the standard datasets GTEA, EGTEA Gaze+, HUJI, ADL, UTE, and Kitchen show that our model significantly outperforms various state-of-the-art techniques.
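The crop-and-resize idea above can be sketched as a small geometry helper. This is a minimal illustration, not the paper's exact procedure: it assumes an object bounding box is available for the frame, and the function name and the target object fraction are hypothetical choices made for the example.

```python
def object_centred_crop(frame_w, frame_h, box, target_frac=0.5):
    """Compute a square crop window, centred on an object bounding box,
    sized so that after resizing the crop to a fixed input resolution
    (e.g. 224x224) the object spans roughly `target_frac` of the output,
    i.e. a scale comparable to typical ImageNet objects.

    box is (x0, y0, x1, y1) in pixels. Returns (left, top, side).
    """
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    obj_side = max(x1 - x0, y1 - y0)

    # Crop side chosen so the object fills `target_frac` of the crop,
    # clamped so the crop never exceeds the frame.
    side = min(obj_side / target_frac, frame_w, frame_h)

    # Clamp the top-left corner so the crop stays inside the frame.
    left = min(max(cx - side / 2.0, 0.0), frame_w - side)
    top = min(max(cy - side / 2.0, 0.0), frame_h - side)
    return left, top, side
```

For example, a 40-pixel object centred at (320, 220) in a 640x480 frame yields an 80-pixel crop at (280, 180); resizing that crop to 224x224 makes the object roughly 112 pixels wide, in line with typical ImageNet object sizes.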




