Towards the Universal Defense for Query-Based Audio Adversarial Attacks

by Feng Guo, et al.

Recent studies show that deep learning-based automatic speech recognition (ASR) systems are vulnerable to adversarial examples (AEs), which add a small amount of noise to the original audio. These AE attacks pose new challenges to deep learning security and have raised significant concerns about deploying ASR systems and devices. Existing defense methods are either limited in application or defend only against the final result rather than the generation process. In this work, we propose a novel method to infer the adversary's intent and discover audio adversarial examples based on the AE generation process. The insight behind this method is the observation that many existing audio AE attacks are query-based: the adversary must send continuous and similar queries to the target ASR model during the audio AE generation process. Inspired by this observation, we propose a memory mechanism that adopts audio fingerprinting to analyze the similarity of the current query against a fixed-length memory of past queries. Thus, we can identify when a sequence of queries appears likely to be generating audio AEs. Through extensive evaluation on four state-of-the-art audio AE attacks, we demonstrate that on average our defense identifies the adversary's intent with over 90% accuracy. With careful regard for robustness evaluations, we also analyze our proposed defense and its strength to withstand two adaptive attacks. Finally, our scheme is available out-of-the-box and directly compatible with any ensemble of ASR defense models to uncover audio AE attacks effectively without model retraining.
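The memory mechanism described above can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the `fingerprint` function here is a hypothetical stand-in (coarse per-frame energy quantization) for a real audio-fingerprinting algorithm, and the `QueryMonitor` class, its parameter names, and all thresholds are assumptions chosen for demonstration. The idea it shows is the one the abstract states: compare each incoming query against a bounded memory of recent query fingerprints and flag the stream once too many near-duplicates accumulate.

```python
from collections import deque


def fingerprint(audio, frame=512, levels=4):
    """Illustrative fingerprint: quantize mean energy of each frame.
    (A real defense would use a robust audio fingerprint, e.g. spectral
    peak hashing; this coarse version is only a stand-in.)"""
    fp = []
    for i in range(0, len(audio) - frame + 1, frame):
        energy = sum(x * x for x in audio[i:i + frame]) / frame
        fp.append(min(levels - 1, int(energy * levels)))
    return tuple(fp)


def similarity(a, b):
    """Fraction of matching fingerprint symbols (0.0 .. 1.0)."""
    n = min(len(a), len(b))
    if n == 0:
        return 0.0
    return sum(x == y for x, y in zip(a, b)) / n


class QueryMonitor:
    """Flags a query stream as suspicious when the current query is highly
    similar to many recent queries, the signature of query-based audio AE
    generation. All thresholds are illustrative assumptions."""

    def __init__(self, memory=50, sim_thresh=0.9, count_thresh=5):
        self.memory = deque(maxlen=memory)   # bounded memory of fingerprints
        self.sim_thresh = sim_thresh         # "near-duplicate" cutoff
        self.count_thresh = count_thresh     # duplicates tolerated before flagging

    def check(self, audio):
        """Record this query; return True if the stream now looks adversarial."""
        fp = fingerprint(audio)
        hits = sum(similarity(fp, old) >= self.sim_thresh for old in self.memory)
        self.memory.append(fp)
        return hits >= self.count_thresh
```

In this sketch, benign users issuing diverse queries rarely accumulate enough near-duplicates to trip the counter, while an attacker iterating on one audio sample (tiny perturbations of the same waveform) is flagged within a handful of queries.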


WaveGuard: Understanding and Mitigating Audio Adversarial Examples


Defending Adversarial Attacks on Cloud-aided Automatic Speech Recognition Systems


Detecting Audio Attacks on ASR Systems with Dropout Uncertainty


Characterizing Audio Adversarial Examples Using Temporal Dependency


aaeCAPTCHA: The Design and Implementation of Audio Adversarial CAPTCHA


Defense against Adversarial Attacks on Hybrid Speech Recognition using Joint Adversarial Fine-tuning with Denoiser


Leveraging characteristics of the output probability distribution for identifying adversarial audio examples

