VideoMCC: a New Benchmark for Video Comprehension

06/23/2016
by Du Tran, et al.

While there is overall agreement that future technology for organizing, browsing and searching videos hinges on the development of methods for high-level semantic understanding of video, no consensus has yet been reached on the best way to train and assess models for this task. Casting video understanding as a form of action or event categorization is problematic because it is not clear what the semantic classes or abstractions in this domain should be. Language has been exploited to sidestep the problem of defining video categories by formulating video understanding as the task of captioning or description. However, language is highly complex, redundant and sometimes ambiguous: many different captions may express the same semantic concept. To account for this ambiguity, quantitative evaluation of video description requires sophisticated metrics, whose scores are typically hard for humans to interpret. This paper makes four contributions toward addressing this problem. First, we formulate Video Multiple Choice Caption (VideoMCC) as a new well-defined task with an easy-to-interpret performance measure. Second, we describe a general semi-automatic procedure to create benchmarks for this task. Third, we publicly release a large-scale video benchmark created with an implementation of this procedure, and we include a human study that assesses human performance on our dataset. Finally, we propose and test a varied collection of approaches on this benchmark to gain a better understanding of the new challenges posed by video comprehension.
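For a multiple-choice task, the natural easy-to-interpret performance measure is accuracy: the fraction of questions for which the model picks the correct caption among the candidates. The sketch below illustrates such an evaluation loop; the function names, the (video_id, candidates, answer index) data layout, and the score_caption model interface are illustrative assumptions, not the paper's released evaluation code.

```python
# Minimal sketch of multiple-choice caption evaluation (hypothetical API,
# not the released VideoMCC toolkit): each question pairs a video with K
# candidate captions, exactly one of which is correct, and the score is
# plain accuracy over all questions.

from typing import Callable, List, Tuple


def multiple_choice_accuracy(
    questions: List[Tuple[str, List[str], int]],   # (video_id, candidate captions, correct index)
    score_caption: Callable[[str, str], float],    # model's compatibility score for (video_id, caption)
) -> float:
    """Fraction of questions where the model's top-scoring caption is the correct one."""
    if not questions:
        return 0.0
    correct = 0
    for video_id, candidates, answer_idx in questions:
        scores = [score_caption(video_id, c) for c in candidates]
        # The model's answer is the highest-scoring candidate caption.
        predicted_idx = max(range(len(candidates)), key=lambda i: scores[i])
        if predicted_idx == answer_idx:
            correct += 1
    return correct / len(questions)
```

Unlike language-generation metrics, this score needs no reference-matching machinery: a value of 0.85 simply means the model chose the right caption for 85% of the videos, which is directly comparable to the human performance reported in the paper's human study.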


Related Research

12/22/2016
Understanding Image and Text Simultaneously: a Dual Vision-Language Machine Comprehension Task
We introduce a new multi-modal task for computer systems, posed as a com...

04/02/2021
Visual Semantic Role Labeling for Video Understanding
We propose a new framework for understanding and representing related sa...

01/15/2020
University of Amsterdam and Renmin University at TRECVID 2017: Searching Video, Detecting Events and Describing Video
In this paper, we summarize our TRECVID 2017 video recognition and retri...

02/25/2015
Exploiting Feature and Class Relationships in Video Categorization with Regularized Deep Neural Networks
In this paper, we study the challenging problem of categorizing videos a...

05/01/2020
HLVU: A New Challenge to Test Deep Understanding of Movies the Way Humans do
In this paper we propose a new evaluation challenge and direction in the...

12/21/2016
Temporal Tessellation: A Unified Approach for Video Analysis
We present a general approach to video understanding, inspired by semant...

10/10/2022
Fighting FIRe with FIRE: Assessing the Validity of Text-to-Video Retrieval Benchmarks
Searching vast troves of videos with textual descriptions is a core mult...
