Benchmarking Adversarially Robust Quantum Machine Learning at Scale

11/23/2022
by   Maxwell T. West, et al.

Machine learning (ML) methods such as artificial neural networks are rapidly becoming ubiquitous in modern science, technology and industry. Despite their accuracy and sophistication, neural networks can be easily fooled by carefully designed malicious inputs known as adversarial attacks. While such vulnerabilities remain a serious challenge for classical neural networks, the extent to which they exist in the quantum ML setting is not fully understood. In this work, we benchmark the robustness of quantum ML networks, such as quantum variational classifiers (QVCs), at scale by performing rigorous training on both simple and complex image datasets and subjecting the models to a variety of state-of-the-art adversarial attacks. Our results show that QVCs offer notably enhanced robustness against classical adversarial attacks by learning features which are not detected by classical neural networks, indicating a possible quantum advantage for ML tasks. Remarkably, the converse is not true: attacks generated on quantum networks are also capable of deceiving classical neural networks. By combining quantum and classical network outcomes, we propose a novel adversarial attack detection technique. Quantum advantage in ML systems has traditionally been sought through increased accuracy or algorithmic speed-up, but our work reveals the potential for a new kind of quantum advantage through the superior robustness of ML models. Its practical realisation would address serious security and reliability concerns for ML algorithms employed in a myriad of applications, including autonomous vehicles, cybersecurity, and surveillance robotic systems.
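To make the threat model concrete, the following is a minimal sketch of the fast gradient sign method (FGSM), a standard classical adversarial attack of the kind benchmarked here. The linear logistic model, weights, and data are illustrative assumptions for exposition, not the paper's actual networks or datasets.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(x, y, w, b):
    """Binary cross-entropy loss of a linear logistic classifier on input x."""
    p = sigmoid(x @ w + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm_attack(x, y, w, b, eps):
    """Perturb x in the direction that increases the loss (FGSM).

    For binary cross-entropy with p = sigmoid(w.x + b), the input
    gradient is dL/dx = (p - y) * w, so the attack adds
    eps * sign(dL/dx) to the input.
    """
    p = sigmoid(x @ w + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=4)        # toy model weights (assumed, not trained)
b = 0.1
x = rng.normal(size=4)        # toy "clean" input
y = 1.0 if sigmoid(x @ w + b) > 0.5 else 0.0  # attack the model's own label

x_adv = fgsm_attack(x, y, w, b, eps=0.5)
# The adversarial example incurs strictly higher loss than the clean input.
print(bce_loss(x_adv, y, w, b) > bce_loss(x, y, w, b))  # True
```

The paper's finding is that perturbations of this classical kind transfer poorly to QVCs, while attacks crafted against quantum networks do transfer back to classical ones.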


