Deep Image-to-Video Adaptation and Fusion Networks for Action Recognition

by Yang Liu, et al.

Existing deep learning methods for action recognition in videos require a large number of labeled videos for training, which is labor-intensive and time-consuming. For the same action, the knowledge learned from different media types, e.g., videos and images, may be related and complementary. However, due to domain shifts and heterogeneous feature representations between videos and images, the performance of classifiers trained on images may degrade dramatically when they are directly deployed to videos. In this paper, we propose a novel method, named Deep Image-to-Video Adaptation and Fusion Networks (DIVAFN), to enhance action recognition in videos by transferring knowledge from images, using video keyframes as a bridge. DIVAFN is a unified deep learning model that integrates domain-invariant representation learning and cross-modal feature fusion into a single optimization framework. Specifically, we design an efficient cross-modal similarity metric to reduce the modality shift among images, keyframes, and videos. We then adopt an autoencoder architecture whose hidden layer is constrained to match the semantic representations of the action class names. In this way, when the autoencoder projects the learned features from different domains into the same space, more compact, informative, and discriminative representations can be obtained. Finally, the concatenation of the learned semantic feature representations from these three autoencoders is used to train the classifier for action recognition in videos. Comprehensive experiments on four real-world datasets show that our method outperforms several state-of-the-art domain adaptation and action recognition methods.
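The pipeline described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: it assumes linear (single-layer) encoders and decoders, 300-dimensional word-vector embeddings of the action class names, and arbitrary toy feature dimensions for the image, keyframe, and video modalities. It shows only the structural idea: each modality gets its own autoencoder whose hidden layer is pulled toward the class-name semantics, and the three hidden representations are concatenated to form the classifier input.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_autoencoder(d_in, d_sem):
    """Hypothetical linear autoencoder: one encoder and one decoder matrix."""
    return {"enc": rng.normal(0, 0.1, (d_in, d_sem)),
            "dec": rng.normal(0, 0.1, (d_sem, d_in))}

def forward(ae, x):
    h = x @ ae["enc"]        # hidden layer = learned semantic representation
    x_rec = h @ ae["dec"]    # reconstruction of the input features
    return h, x_rec

def ae_loss(ae, x, s):
    """Reconstruction loss plus a term tying the hidden layer to the
    semantic embeddings s of the action class names (assumed 300-d)."""
    h, x_rec = forward(ae, x)
    rec = np.mean((x_rec - x) ** 2)
    sem = np.mean((h - s) ** 2)
    return rec + sem

# Toy feature dimensions for the three modalities (assumptions, not from the paper).
dims = {"image": 512, "keyframe": 512, "video": 1024}
d_sem = 300

autoencoders = {m: init_autoencoder(d, d_sem) for m, d in dims.items()}

# 4 toy samples per modality, plus random stand-ins for class-name embeddings.
x = {m: rng.normal(size=(4, d)) for m, d in dims.items()}
s = rng.normal(size=(4, d_sem))

# Concatenate the semantic representations from the three autoencoders;
# this fused feature would be fed to the action classifier.
fused = np.concatenate([forward(autoencoders[m], x[m])[0] for m in dims], axis=1)
print(fused.shape)  # (4, 900): three 300-d semantic codes per sample
```

In the full model the encoders would be trained jointly with the cross-modal similarity metric rather than left at their random initialization as here.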




