Spatial LibriSpeech: An Augmented Dataset for Spatial Audio Learning

08/18/2023
by Miguel Sarabia, et al.

We present Spatial LibriSpeech, a spatial audio dataset with over 650 hours of 19-channel audio, first-order ambisonics, and optional distractor noise. Spatial LibriSpeech is designed for machine learning model training, and it includes labels for source position, speaking direction, room acoustics, and geometry. Spatial LibriSpeech is generated by augmenting LibriSpeech samples with 200k+ simulated acoustic conditions across 8k+ synthetic rooms. To demonstrate the utility of our dataset, we train models on four spatial audio tasks, resulting in a median absolute error of 6.60° on 3D source localization, 0.43 m on distance, 90.66 ms on T30, and 2.74 dB on DRR estimation. We show that the same models generalize well to widely-used evaluation datasets, e.g., obtaining a median absolute error of 12.43° on 3D source localization on TUT Sound Events 2018, and 157.32 ms on T30 estimation on ACE Challenge.
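The 3D source localization figures above are median absolute angular errors in degrees. A minimal sketch of how such a metric could be computed from predicted and ground-truth direction vectors is shown below; the function name and array-based interface are illustrative assumptions, not part of the dataset's tooling:

```python
import numpy as np

def median_absolute_angular_error(pred, true):
    """Median absolute angular error in degrees between two sets of
    3D direction vectors (shape: [N, 3]).

    Hypothetical helper: vectors are normalized to unit length, the
    angle between each pair is recovered via the dot product, and the
    median over all N pairs is returned.
    """
    pred = np.asarray(pred, dtype=float)
    true = np.asarray(true, dtype=float)
    pred = pred / np.linalg.norm(pred, axis=1, keepdims=True)
    true = true / np.linalg.norm(true, axis=1, keepdims=True)
    # Clip to [-1, 1] to guard against floating-point drift before arccos.
    cos = np.clip(np.sum(pred * true, axis=1), -1.0, 1.0)
    return float(np.median(np.degrees(np.arccos(cos))))
```

Because the error is an angle between unit vectors, it is insensitive to the magnitude of the predictions and ranges from 0° (perfect agreement) to 180° (opposite directions).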
