Defense against Adversarial Attacks on Hybrid Speech Recognition using Joint Adversarial Fine-tuning with Denoiser

04/08/2022
by   Sonal Joshi, et al.

Adversarial attacks are a threat to automatic speech recognition (ASR) systems, and it is imperative to propose defenses to protect them. In this paper, we show experimentally that the K2 conformer hybrid ASR system is strongly affected by white-box adversarial attacks. We propose three defenses: a denoiser pre-processor, adversarial fine-tuning of the ASR model, and adversarial fine-tuning of a joint model of the ASR and the denoiser. Our evaluation shows that the denoiser pre-processor (trained on offline adversarial examples) fails to defend against adaptive white-box attacks. However, adversarially fine-tuning the denoiser using a tandem model of denoiser and ASR offers more robustness. We evaluate two variants of this defense: one updates the parameters of both models, and the other keeps the ASR frozen. The joint model offers a mean absolute decrease of 19.3% in ground truth (GT) WER with respect to the baseline against fast gradient sign method (FGSM) attacks with different L_∞ norms. The joint model with frozen ASR parameters gives the best defense against projected gradient descent (PGD) with 7 iterations, yielding a mean absolute increase of 22.3% in GT WER with respect to the baseline; against PGD with 500 iterations, it yields a mean absolute decrease of 45.08% in GT WER and an increase of 68.05% in adversarial target WER.
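The attacks and the joint fine-tuning procedure described above can be summarized in a short sketch. The following is a minimal, illustrative PyTorch sketch under stated assumptions, not the paper's implementation: the `denoiser` and `asr` modules, the loss function, the optimizer, and all hyperparameter names are hypothetical placeholders. It shows an untargeted L_∞ FGSM/PGD attack and one adversarial fine-tuning step on the tandem denoiser-then-ASR model, with the option of keeping the ASR parameters frozen.

```python
import torch

def fgsm_attack(model, loss_fn, x, y, eps):
    """Single-step FGSM: move the input audio x along the sign of the loss
    gradient, bounded by an L_inf budget eps (untargeted formulation)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x_adv + eps * grad.sign()).detach()

def pgd_attack(model, loss_fn, x, y, eps, alpha, num_iters):
    """Iterative PGD: repeat small signed-gradient steps of size alpha and
    project back onto the L_inf ball of radius eps around the clean input."""
    x_adv = x.clone().detach()
    for _ in range(num_iters):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)  # L_inf projection
    return x_adv.detach()

def joint_adversarial_finetune_step(denoiser, asr, loss_fn, optimizer,
                                    x_clean, y, eps, alpha, num_iters,
                                    freeze_asr=True):
    """One adversarial fine-tuning step on the tandem denoiser -> ASR model.
    Adaptive PGD examples are crafted against the full tandem model; with
    freeze_asr=True only the denoiser receives gradient updates (the
    frozen-ASR variant), otherwise both models are updated."""
    tandem = lambda x: asr(denoiser(x))
    for p in asr.parameters():
        p.requires_grad_(not freeze_asr)
    x_adv = pgd_attack(tandem, loss_fn, x_clean, y, eps, alpha, num_iters)
    optimizer.zero_grad()
    loss = loss_fn(tandem(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

This sketch uses the untargeted formulation, which ascends the loss on the ground-truth transcript; for a targeted attack (the setting behind the adversarial target WER reported above), the attacker would instead descend the loss on a chosen target transcript, flipping the sign of the gradient step.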


