An Enhanced Res2Net with Local and Global Feature Fusion for Speaker Verification

by Yafeng Chen et al.

Effective fusion of multi-scale features is crucial for improving speaker verification performance. However, most existing methods aggregate multi-scale features layer-wise via simple operations such as summation or concatenation. This paper proposes a novel architecture, Enhanced Res2Net (ERes2Net), which incorporates both local and global feature fusion to improve performance. Local feature fusion (LFF) fuses the features within a single residual block to extract the local signal, while global feature fusion (GFF) takes acoustic features of different scales as input to aggregate the global signal. To enable effective fusion in both LFF and GFF, ERes2Net employs an attentional feature fusion module in place of summation or concatenation. A range of experiments conducted on the VoxCeleb datasets demonstrates the superiority of ERes2Net in speaker verification.
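The core idea of replacing summation with attentional fusion can be illustrated with a minimal NumPy sketch: attention weights computed from the two inputs act as a soft gate that interpolates between them. The linear scorer (`w`, `b`) below is a toy stand-in for the paper's learned attention module, which the abstract does not specify in detail; shapes and parameter names are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attentional_fusion(x, y, w, b):
    """Fuse two equally shaped feature maps x and y (shape (T, C)).

    Attention weights are computed from the concatenated inputs and
    used as a soft gate: out = a * x + (1 - a) * y, with a in (0, 1).
    The linear scorer (w, b) is a hypothetical stand-in for the
    paper's learned attentional feature fusion module.
    """
    stacked = np.concatenate([x, y], axis=-1)  # (T, 2C)
    a = sigmoid(stacked @ w + b)               # (T, C) gate values
    return a * x + (1.0 - a) * y

rng = np.random.default_rng(0)
T, C = 10, 4                                   # frames, channels (toy sizes)
x = rng.normal(size=(T, C))                    # e.g. local-branch features
y = rng.normal(size=(T, C))                    # e.g. global-branch features
w = rng.normal(size=(2 * C, C)) * 0.1          # toy scorer weights
b = np.zeros(C)
fused = attentional_fusion(x, y, w, b)
print(fused.shape)
```

Because the gate lies in (0, 1), each fused value is a convex combination of the two inputs, which is what distinguishes this kind of fusion from plain summation.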




Related articles:

InterFormer: Interactive Local and Global Features Fusion for Automatic Speech Recognition
Multibiometric: Feature Level Fusion Using FKP Multi-Instance Biometric
Multi-Scale Hourglass Hierarchical Fusion Network for Single Image Deraining
Improving Convolutional Neural Networks for Fault Diagnosis by Assimilating Global Features
GLFF: Global and Local Feature Fusion for Face Forgery Detection
