Sequence Level Semantics Aggregation for Video Object Detection

07/15/2019
by   Haiping Wu, et al.

Video object detection (VID) has been a rising research direction in recent years. A central issue in VID is the appearance degradation of video frames caused by fast motion. This problem is essentially ill-posed for a single frame, so aggregating useful features from other frames becomes a natural choice. Existing methods rely heavily on optical flow or recurrent neural networks for feature aggregation. However, these methods place more emphasis on temporally nearby frames. In this work, we argue that aggregating features at the full-sequence level leads to more discriminative and robust features for video object detection. To achieve this goal, we devise a novel Sequence Level Semantics Aggregation (SELSA) module. We further demonstrate that the proposed method has a close relationship with classical spectral clustering methods, providing a novel perspective on the VID problem. Lastly, we evaluate our method on the large-scale ImageNet VID dataset and the EPIC KITCHENS dataset and achieve new state-of-the-art results compared with previous works. Moreover, to reach this performance we do not need complicated post-processing methods such as Seq-NMS or tubelet rescoring used in previous works, which keeps our pipeline simple and clean.
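The core idea of semantics-level aggregation can be illustrated with a minimal sketch: pool proposal features from all frames of a sequence, weight every pair of proposals by their semantic similarity, and replace each feature with a similarity-weighted combination of the whole set. The function below is a simplified NumPy illustration of this similarity-weighted aggregation, not the paper's actual implementation; the cosine-similarity metric and single-pass aggregation are assumptions for clarity.

```python
import numpy as np

def selsa_aggregate(features):
    """Similarity-weighted aggregation of proposal features across a sequence.

    features: (N, D) array of proposal features pooled from all frames
              of the sequence (not just temporally adjacent ones).
    Returns an (N, D) array in which each feature is a softmax-weighted
    combination of every feature in the sequence.
    """
    # Cosine similarity between every pair of proposals (assumed metric)
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = normed @ normed.T                      # (N, N) similarity matrix

    # Softmax over the sequence dimension turns similarities into weights
    w = np.exp(sim - sim.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)

    # Each output feature aggregates the whole sequence, weighted by semantics
    return w @ features
```

Because the weights depend only on feature similarity and not on frame index, a degraded proposal can borrow information from a semantically similar proposal anywhere in the video, which is the intuition behind sequence-level (rather than temporally local) aggregation.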

