What have we learned from deep representations for action recognition?

01/04/2018
by Christoph Feichtenhofer, et al.

As the success of deep models has led to their deployment in all areas of computer vision, it is increasingly important to understand how these representations work and what they are capturing. In this paper, we shed light on deep spatiotemporal representations by visualizing what two-stream models have learned in order to recognize actions in video. We show that local detectors for appearance and motion objects arise to form distributed representations for recognizing human actions. Key observations include the following. First, cross-stream fusion enables the learning of true spatiotemporal features rather than simply separate appearance and motion features. Second, the networks can learn local representations that are highly class-specific, but also generic representations that serve a range of classes. Third, throughout the hierarchy of the network, features become more abstract and show increasing invariance to aspects of the data that are unimportant to the desired distinctions (e.g., motion patterns across various speeds). Fourth, visualizations can be used not only to shed light on learned representations, but also to reveal idiosyncrasies of training data and to explain failure cases of the system.
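To make the first observation concrete, a common way to implement cross-stream fusion is to stack the appearance and motion feature maps channel-wise and mix them with a learned 1x1 convolution, so later layers see both cues at every spatial location. The sketch below is illustrative only; the function names, shapes, and use of plain NumPy are assumptions, not the paper's exact architecture.

```python
import numpy as np

def fuse_streams(appearance, motion, weights):
    """Concatenate two (C, H, W) feature maps and apply a 1x1 convolution.

    `weights` has shape (C_out, 2*C): it mixes the stacked channels at each
    spatial position, the simplest learnable form of cross-stream fusion.
    """
    stacked = np.concatenate([appearance, motion], axis=0)  # (2C, H, W)
    c2, h, w = stacked.shape
    # A 1x1 conv is a matrix multiply over the channel axis at every pixel.
    fused = weights @ stacked.reshape(c2, h * w)
    return fused.reshape(weights.shape[0], h, w)

rng = np.random.default_rng(0)
appearance = rng.standard_normal((64, 7, 7))  # RGB-stream features
motion = rng.standard_normal((64, 7, 7))      # optical-flow-stream features
weights = rng.standard_normal((64, 128))      # hypothetical 1x1 conv weights
fused = fuse_streams(appearance, motion, weights)
print(fused.shape)  # (64, 7, 7)
```

Because the fusion weights span both halves of the stacked tensor, gradients couple the two streams during training, which is what allows jointly spatiotemporal features (rather than two independent feature sets) to emerge.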


