Selective Structured State-Spaces for Long-Form Video Understanding

03/25/2023
by Jue Wang, et al.

Effective modeling of complex spatiotemporal dependencies in long-form videos remains an open problem. The recently proposed Structured State-Space Sequence (S4) model, with its linear complexity, offers a promising direction in this space. However, we demonstrate that treating all image tokens equally, as done by the S4 model, can adversely affect its efficiency and accuracy. To address this limitation, we present a novel Selective S4 (i.e., S5) model that employs a lightweight mask generator to adaptively select informative image tokens, resulting in more efficient and accurate modeling of long-term spatiotemporal dependencies in videos. Unlike previous mask-based token reduction methods used in transformers, our S5 model avoids the dense self-attention calculation by making use of the guidance of the momentum-updated S4 model. This enables our model to efficiently discard less informative tokens and adapt to various long-form video understanding tasks more effectively. However, as with most token reduction methods, informative image tokens could be dropped incorrectly. To improve the robustness and the temporal horizon of our model, we propose a novel long-short masked contrastive learning (LSMCL) approach that enables our model to predict longer temporal context using shorter input videos. We present extensive comparative results using three challenging long-form video understanding datasets (LVU, COIN, and Breakfast), demonstrating that our approach consistently outperforms the previous state-of-the-art S4 model by up to 9.6% accuracy while reducing its memory footprint by 23%.
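The abstract gives no implementation details, so the following is a minimal PyTorch sketch of the token-selection idea it describes: a lightweight scorer keeps only the top-scoring image tokens, while a momentum-updated (EMA) copy of the S4 backbone provides the guidance signal instead of a dense self-attention map. The names MaskGenerator, keep_ratio, momentum_update, and the momentum value 0.999 are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of momentum-guided selective token masking.
# All module and parameter names are assumptions for illustration.
import torch
import torch.nn as nn


class MaskGenerator(nn.Module):
    """Lightweight scorer: keep the top-k tokens, discard the rest."""

    def __init__(self, dim: int):
        super().__init__()
        self.scorer = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, 1))

    def forward(self, tokens: torch.Tensor, keep_ratio: float = 0.5):
        # tokens: (batch, num_tokens, dim)
        scores = self.scorer(tokens).squeeze(-1)      # (B, N) per-token scores
        k = max(1, int(tokens.shape[1] * keep_ratio))
        idx = scores.topk(k, dim=1).indices           # indices of kept tokens
        kept = torch.gather(
            tokens, 1, idx.unsqueeze(-1).expand(-1, -1, tokens.shape[-1])
        )                                             # (B, k, dim)
        return kept, idx, scores


@torch.no_grad()
def momentum_update(online: nn.Module, ema: nn.Module, m: float = 0.999):
    """EMA update of the guidance model from the online S4 backbone,
    so token selection is guided without any dense self-attention."""
    for p_o, p_e in zip(online.parameters(), ema.parameters()):
        p_e.mul_(m).add_(p_o, alpha=1.0 - m)
```

In this reading, the EMA backbone changes slowly, so its features give a stable target for training the scorer; only the kept tokens are passed on to the S5 sequence model, which is where the efficiency gain would come from.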
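For the LSMCL objective, a plausible reading is a standard InfoNCE-style contrastive loss that pulls the embedding of a short, heavily masked clip toward the embedding of a longer clip from the same video, so the model learns to predict a longer temporal context from shorter inputs. The function name lsmcl_loss and the temperature value are assumptions; the paper's exact formulation may differ.

```python
# Minimal InfoNCE-style sketch of a long-short masked contrastive loss.
# Names and hyperparameters are illustrative, not the authors' code.
import torch
import torch.nn.functional as F


def lsmcl_loss(z_short: torch.Tensor, z_long: torch.Tensor,
               temperature: float = 0.1) -> torch.Tensor:
    # z_short, z_long: (batch, dim) pooled embeddings of the short masked
    # clip and the long clip sampled from the same video
    z_short = F.normalize(z_short, dim=-1)
    z_long = F.normalize(z_long, dim=-1)
    logits = z_short @ z_long.t() / temperature   # (B, B) similarity matrix
    # matching (short, long) pairs are positives; the other clips in the
    # batch act as negatives
    targets = torch.arange(z_short.shape[0], device=z_short.device)
    return F.cross_entropy(logits, targets)
```

Such an objective would also make the model more tolerant of occasionally dropped informative tokens, since the short masked view must still match the fuller long view.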

Related research:

- MovieChat: From Dense Token to Sparse Memory for Long Video Understanding (07/31/2023)
- TokenLearner: What Can 8 Learned Tokens Do for Images and Videos? (06/21/2021)
- Long Movie Clip Classification with State-Space Video Models (04/04/2022)
- Video-based Human-Object Interaction Detection from Tubelet Tokens (06/04/2022)
- EgoViT: Pyramid Video Transformer for Egocentric Action Recognition (03/15/2023)
- Efficient Movie Scene Detection using State-Space Transformers (12/29/2022)
- Cinematic Mindscapes: High-quality Video Reconstruction from Brain Activity (05/19/2023)
