AVLnet: Learning Audio-Visual Language Representations from Instructional Videos

06/16/2020
by Andrew Rouditchenko, et al.

Current methods for learning visually grounded language from videos often rely on time-consuming and expensive data collection, such as human-annotated textual summaries or machine-generated automatic speech recognition transcripts. In this work, we introduce the Audio-Video Language Network (AVLnet), a self-supervised network that learns a shared audio-visual embedding space directly from raw video inputs. We circumvent the need for annotation and instead learn audio-visual language representations directly from randomly segmented video clips and their raw audio waveforms. We train AVLnet on publicly available instructional videos and evaluate our model on video clip and language retrieval tasks on three video datasets. Our proposed model outperforms several state-of-the-art text-video baselines by up to 11.8% on the video clip retrieval task, despite operating on the raw audio instead of manually annotated text captions. Further, we show AVLnet is capable of integrating textual information, increasing its modularity and improving performance by up to 20.3%. Finally, we perform analysis of AVLnet's learned representations, showing our model has learned to relate visual objects with salient words and natural sounds.
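To make the idea of a shared audio-visual embedding space concrete, here is a minimal PyTorch sketch of one common way to train such a space: two small projection networks map pooled audio and visual features into a joint space, and a symmetric contrastive loss pulls features from the same clip together while pushing apart features from different clips in the batch. The feature dimensions, encoder depths, and InfoNCE-style loss are illustrative assumptions, not AVLnet's published architecture or objective.

```python
# Illustrative sketch of a shared audio-visual embedding space trained with a
# symmetric contrastive loss. All dimensions and the loss choice are assumed
# for illustration and may differ from the AVLnet paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AudioVisualEmbedder(nn.Module):
    def __init__(self, audio_dim=128, video_dim=2048, embed_dim=512):
        super().__init__()
        # Project pooled audio features (e.g. from a spectrogram CNN) into the shared space.
        self.audio_proj = nn.Sequential(
            nn.Linear(audio_dim, embed_dim), nn.ReLU(), nn.Linear(embed_dim, embed_dim)
        )
        # Project pooled visual features (e.g. from a frame-level CNN) into the shared space.
        self.video_proj = nn.Sequential(
            nn.Linear(video_dim, embed_dim), nn.ReLU(), nn.Linear(embed_dim, embed_dim)
        )

    def forward(self, audio_feats, video_feats):
        a = F.normalize(self.audio_proj(audio_feats), dim=-1)
        v = F.normalize(self.video_proj(video_feats), dim=-1)
        return a, v


def contrastive_loss(a, v, temperature=0.07):
    """Symmetric cross-entropy over clip-level similarities: the audio and
    video embeddings from the same clip are positives, all other pairs in
    the batch serve as negatives."""
    logits = a @ v.t() / temperature           # (B, B) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    model = AudioVisualEmbedder()
    audio = torch.randn(8, 128)    # pooled audio features for 8 clips
    video = torch.randn(8, 2048)   # pooled visual features for the same clips
    a, v = model(audio, video)
    loss = contrastive_loss(a, v)
    loss.backward()
    print(f"loss = {loss.item():.4f}")
```

At retrieval time, the same normalized embeddings can be compared with a dot product: ranking all video clips by similarity to a query audio (or text) embedding yields the clip retrieval setup described in the abstract.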
