Entity-aware and Motion-aware Transformers for Language-driven Action Localization in Videos

05/12/2022
by Shuo Yang, et al.

Language-driven action localization in videos is a challenging task that involves not only visual-linguistic matching but also action boundary prediction. Recent progress has been achieved by aligning the language query to video segments, but estimating precise boundaries remains under-explored. In this paper, we propose entity-aware and motion-aware Transformers that progressively localize actions in videos: they first coarsely locate relevant clips with entity queries and then precisely predict the action boundaries within a shrunken temporal region using motion queries. The entity-aware Transformer incorporates textual entities into visual representation learning via cross-modal and cross-frame attention, which helps attend to action-related video clips. The motion-aware Transformer captures fine-grained motion changes at multiple temporal scales by integrating long short-term memory into the self-attention module, further improving the precision of action boundary prediction. Extensive experiments on the Charades-STA and TACoS datasets demonstrate that our method outperforms existing methods.
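The following is a minimal sketch (not the authors' released code) of one plausible way to integrate an LSTM into a self-attention block, so that short-term motion dynamics complement long-range self-attention context as the abstract describes for the motion-aware Transformer. The module name, feature dimensions, concatenation-based fusion, and the use of a single temporal scale are all illustrative assumptions; the paper applies this idea at multiple temporal scales.

```python
import torch
import torch.nn as nn


class MotionAwareAttention(nn.Module):
    """Illustrative block: self-attention over clips fused with an LSTM branch."""

    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        # Multi-head self-attention captures long-range clip-to-clip context.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Bidirectional LSTM models short-term motion changes between
        # neighbouring clips (hidden size chosen so outputs match d_model).
        self.lstm = nn.LSTM(d_model, d_model // 2, batch_first=True,
                            bidirectional=True)
        self.fuse = nn.Linear(2 * d_model, d_model)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, num_clips, d_model) visual clip features.
        global_ctx, _ = self.attn(clips, clips, clips)  # long-range context
        local_motion, _ = self.lstm(clips)              # short-term dynamics
        fused = self.fuse(torch.cat([global_ctx, local_motion], dim=-1))
        return self.norm(clips + fused)                 # residual connection


if __name__ == "__main__":
    x = torch.randn(2, 32, 256)       # 2 videos, 32 clips, 256-d features
    block = MotionAwareAttention()
    print(block(x).shape)             # torch.Size([2, 32, 256])
```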
