Object Referring in Videos with Language and Human Gaze

01/04/2018
by Arun Balajee Vasudevan, et al.

We investigate the problem of object referring (OR), i.e., localizing a target object in a visual scene given a language description. Humans perceive the world more as continuous video snippets than as static images, and describe objects not only by their appearance but also by their spatio-temporal contexts and motion features. Humans also gaze at the object while issuing a referring expression. Existing work on OR focuses mostly on static images, which fall short of providing many such cues. This paper addresses OR in videos with language and human gaze. To that end, we present a new video dataset for OR, with 30,000 objects over 5,000 stereo video sequences annotated with their descriptions and gaze. We further propose a novel network model for OR in videos that integrates appearance, motion, gaze, and spatio-temporal contextual information into a single network. Experimental results show that our method effectively exploits motion cues, human gaze, and spatio-temporal context, and outperforms previous OR methods. The dataset and code will be made available.
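To make the multi-cue fusion idea concrete, below is a minimal PyTorch sketch of one plausible design: project per-candidate appearance, motion, gaze, and context features alongside a sentence embedding, then score each candidate object from the concatenation. This is not the paper's architecture; the class name `MultiCueScorer`, all layer sizes, and the fusion-by-concatenation choice are illustrative assumptions.

```python
# Illustrative sketch only (NOT the authors' network): score candidate
# objects by fusing four visual cue streams with a language embedding.
import torch
import torch.nn as nn

class MultiCueScorer(nn.Module):
    def __init__(self, visual_dim=512, lang_dim=300, hidden_dim=256):
        super().__init__()
        # One projection per visual cue; each cue is assumed to arrive as a
        # pre-extracted feature vector per candidate object (dims assumed).
        self.appearance = nn.Linear(visual_dim, hidden_dim)
        self.motion = nn.Linear(visual_dim, hidden_dim)
        self.gaze = nn.Linear(visual_dim, hidden_dim)
        self.context = nn.Linear(visual_dim, hidden_dim)
        self.lang = nn.Linear(lang_dim, hidden_dim)
        # Score each candidate from the concatenated cue + language features.
        self.score = nn.Sequential(
            nn.Linear(5 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, app, mot, gaze, ctx, lang):
        # app/mot/gaze/ctx: (num_candidates, visual_dim)
        # lang: (lang_dim,) sentence embedding, broadcast to all candidates
        n = app.size(0)
        fused = torch.cat([
            self.appearance(app),
            self.motion(mot),
            self.gaze(gaze),
            self.context(ctx),
            self.lang(lang).unsqueeze(0).expand(n, -1),
        ], dim=1)
        return self.score(fused).squeeze(1)  # one relevance score per candidate

# Usage: pick the candidate box whose fused score is highest.
scorer = MultiCueScorer()
scores = scorer(torch.randn(8, 512), torch.randn(8, 512),
                torch.randn(8, 512), torch.randn(8, 512), torch.randn(300))
best = scores.argmax().item()
```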
