CAM++: A Fast and Efficient Network for Speaker Verification Using Context-Aware Masking

by Hui Wang et al.

The time delay neural network (TDNN) has proven efficient for speaker verification. One of its successful variants, ECAPA-TDNN, achieved state-of-the-art performance, but at the cost of much higher computational complexity and slower inference. This makes it unsuitable for scenarios that demand fast inference under limited computational resources. We are thus interested in an architecture that combines the performance of ECAPA-TDNN with the efficiency of the vanilla TDNN. In this paper, we propose an efficient network based on context-aware masking, namely CAM++, which uses a densely connected time delay neural network (D-TDNN) as its backbone and adopts a novel multi-granularity pooling to capture contextual information at different levels. Extensive experiments on two public benchmarks, VoxCeleb and CN-Celeb, demonstrate that the proposed architecture outperforms other mainstream speaker verification systems with lower computational cost and faster inference speed.
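The core idea of context-aware masking can be sketched roughly as follows: a global context vector is pooled over time, squeezed through a small bottleneck, and used to predict a sigmoid mask that re-weights the frame-level features. This is a minimal NumPy illustration; the shapes, the tanh bottleneck, and the channel-wise mask are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def context_aware_mask(feats, W1, b1, W2, b2):
    """Illustrative sketch of context-aware masking (CAM).

    feats: (C, T) frame-level features (C channels, T frames).
    The global context (mean over time) is passed through a
    bottleneck (W1, b1) -> (W2, b2) to predict a sigmoid mask,
    which re-weights the features channel-wise.
    """
    context = feats.mean(axis=1)         # (C,) global context vector
    hidden = np.tanh(W1 @ context + b1)  # bottleneck projection
    mask = sigmoid(W2 @ hidden + b2)     # (C,) mask values in (0, 1)
    return feats * mask[:, None]         # broadcast mask over time

# Toy usage with random weights (hypothetical sizes)
rng = np.random.default_rng(0)
C, T, H = 8, 20, 4
feats = rng.standard_normal((C, T))
W1, b1 = rng.standard_normal((H, C)), np.zeros(H)
W2, b2 = rng.standard_normal((C, H)), np.zeros(C)
out = context_aware_mask(feats, W1, b1, W2, b2)
print(out.shape)  # (8, 20)
```

Multi-granularity pooling would extend this by computing such context vectors at both segment and global levels; that part is omitted here for brevity.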


