Audio-visual Generalised Zero-shot Learning with Cross-modal Attention and Language

03/07/2022
by Otniel-Bogdan Mercea, et al.

Learning to classify video data from classes not included in the training data, i.e., video-based zero-shot learning, is challenging. We conjecture that the natural alignment between the audio and visual modalities in video data provides a rich training signal for learning discriminative multi-modal representations. Focusing on the relatively underexplored task of audio-visual zero-shot learning, we propose to learn multi-modal representations from audio-visual data using cross-modal attention, and we exploit textual label embeddings to transfer knowledge from seen to unseen classes. Taking this one step further, our generalised audio-visual zero-shot learning setting includes all training classes in the test-time search space; these act as distractors, increasing the difficulty of the task while making the setting more realistic. Due to the lack of a unified benchmark in this domain, we introduce a (generalised) zero-shot learning benchmark on three audio-visual datasets of varying size and difficulty, VGGSound, UCF, and ActivityNet, ensuring that the unseen test classes do not appear in the dataset used for supervised training of the backbone deep models. Comparing multiple relevant and recent methods, we demonstrate that our proposed AVCA model achieves state-of-the-art performance on all three datasets. Code and data will be available at <https://github.com/ExplainableML/AVCA-GZSL>.
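
The abstract names two mechanisms: cross-modal attention that lets each modality attend to the other, and classification by similarity to textual label embeddings so that unseen classes can be scored at test time. Below is a minimal PyTorch sketch of that idea; the module names, tensor shapes, naive mean-pooled fusion, and cosine-similarity scoring are illustrative assumptions, not the authors' AVCA implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalAttention(nn.Module):
    """Each modality queries the other (hypothetical stand-in for AVCA's blocks)."""

    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        self.audio_attends_visual = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.visual_attends_audio = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, audio: torch.Tensor, visual: torch.Tensor):
        # audio, visual: (batch, tokens, dim) feature sequences from pretrained backbones
        a_enhanced, _ = self.audio_attends_visual(audio, visual, visual)  # audio queries visual
        v_enhanced, _ = self.visual_attends_audio(visual, audio, audio)   # visual queries audio
        return a_enhanced, v_enhanced

def zero_shot_logits(av_embedding: torch.Tensor, label_embeddings: torch.Tensor) -> torch.Tensor:
    """Score fused audio-visual embeddings against textual label embeddings.

    Unseen classes are handled by including their text embeddings in
    `label_embeddings` at test time; in the generalised setting, seen-class
    embeddings are included too and act as distractors.
    """
    av = F.normalize(av_embedding, dim=-1)        # (batch, dim)
    txt = F.normalize(label_embeddings, dim=-1)   # (num_classes, dim)
    return av @ txt.T                             # cosine similarities, (batch, num_classes)

# Toy usage with random features; the 512-dim and 42-class sizes are arbitrary.
audio = torch.randn(2, 10, 512)
visual = torch.randn(2, 10, 512)
a_enh, v_enh = CrossModalAttention()(audio, visual)
fused = 0.5 * (a_enh.mean(dim=1) + v_enh.mean(dim=1))   # naive fusion for the sketch
logits = zero_shot_logits(fused, torch.randn(42, 512))
prediction = logits.argmax(dim=-1)                      # nearest label embedding wins
```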


Related research

07/20/2022 · Temporal and cross-modal attention for audio-visual zero-shot learning
Audio-visual generalised zero-shot learning for video classification req...

05/27/2020 · AVGZSLNet: Audio-Visual Generalized Zero-Shot Learning by Reconstructing Label Features from Multi-Modal Embeddings
In this paper, we solve for the problem of generalized zero-shot learnin...

09/07/2023 · Text-to-feature diffusion for audio-visual few-shot learning
Training deep learning models for video classification from audio-visual...

08/24/2023 · Hyperbolic Audio-visual Zero-shot Learning
Audio-visual zero-shot learning aims to classify samples consisting of a...

03/03/2020 · Rethinking Zero-shot Video Classification: End-to-end Training for Realistic Applications
Trained on large datasets, deep learning (DL) can accurately classify vi...

09/19/2023 · Learning Tri-modal Embeddings for Zero-Shot Soundscape Mapping
We focus on the task of soundscape mapping, which involves predicting th...

10/18/2021 · Who calls the shots? Rethinking Few-Shot Learning for Audio
Few-shot learning aims to train models that can recognize novel classes ...
