Universal Perturbation Attack on Differentiable No-Reference Image- and Video-Quality Metrics

11/01/2022
by   Ekaterina Shumitskaya, et al.

Universal adversarial perturbation attacks are widely used to analyze image classifiers based on convolutional neural networks. Some attacks can now also deceive image- and video-quality metrics, so stability analysis of these metrics is important: if an attack can fool a metric, an attacker can artificially inflate quality scores. When developers of image- and video-processing algorithms can boost their scores through such detached processing, algorithm comparisons are no longer fair. Inspired by universal adversarial perturbations for classifiers, we propose a new method for attacking differentiable no-reference quality metrics with a universal perturbation. We applied this method to seven no-reference image- and video-quality metrics (PaQ-2-PiQ, Linearity, VSFA, MDTVSFA, KonCept512, NIMA and SPAQ), training for each a universal perturbation that increases its scores. We also propose a method for assessing metric stability and identify the metrics that are most vulnerable and most resistant to our attack. The existence of successful universal perturbations undermines a metric's ability to provide reliable scores, so we recommend our method as an additional verification of metric reliability, complementing traditional subjective tests and benchmarks.
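The core idea can be sketched as gradient ascent on a single perturbation shared across a training set of images, clipped to a small L-infinity ball so it stays imperceptible. The snippet below is a minimal illustration, not the authors' implementation: it uses a hypothetical toy linear "metric" with an analytic gradient as a stand-in for a differentiable neural no-reference metric such as PaQ-2-PiQ or Linearity, and a sign-gradient update as in FGSM-style attacks.

```python
import numpy as np

# Hypothetical toy "metric": a linear score whose gradient is available in
# closed form. A real attack would backpropagate through a neural NR metric
# (e.g. PaQ-2-PiQ, Linearity); this stand-in is only for illustration.
W = np.random.default_rng(0).normal(size=(8, 8))

def metric(img):
    return float(np.sum(W * img))   # toy quality score

def metric_grad(img):
    return W                        # d(score)/d(img) for the toy metric

def train_universal_perturbation(images, eps=0.1, lr=0.01, steps=200):
    """Gradient ascent on one shared perturbation that raises the metric
    for every image in the set, clipped to an L-inf ball of radius eps."""
    delta = np.zeros_like(images[0])
    for _ in range(steps):
        # Average the metric gradient over the training images.
        g = np.mean([metric_grad(np.clip(img + delta, 0.0, 1.0))
                     for img in images], axis=0)
        # Sign-gradient ascent step, then project back into the eps-ball.
        delta = np.clip(delta + lr * np.sign(g), -eps, eps)
    return delta

rng = np.random.default_rng(1)
imgs = [rng.uniform(size=(8, 8)) for _ in range(4)]
delta = train_universal_perturbation(imgs)
# Average score gain across images when the same perturbation is applied.
gain = np.mean([metric(np.clip(i + delta, 0.0, 1.0)) - metric(i)
                for i in imgs])
```

With a real metric, `metric_grad` would be replaced by automatic differentiation through the network; the projection step and the shared-`delta` training loop stay the same.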


Related research

05/24/2023: Fast Adversarial CNN-based Perturbation Attack on No-Reference Image- and Video-Quality Metrics
Modern neural-network-based no-reference image- and video-quality metric...

07/08/2019: Barriers towards no-reference metrics application to compressed video quality analysis: on the example of no-reference metric NIQE
This paper analyses the application of no-reference metric NIQE to the t...

07/10/2019: Video Distortion Method for VMAF Quality Values Increasing
Video quality measurement takes an important role in many applications. ...

12/11/2018: Adversarial Framing for Image and Video Classification
Neural networks are prone to adversarial attacks. In general, such attac...

03/11/2020: Frequency-Tuned Universal Adversarial Attacks
Researchers have shown that the predictions of a convolutional neural ne...

01/05/2023: Silent Killer: Optimizing Backdoor Trigger Yields a Stealthy and Powerful Data Poisoning Attack
We propose a stealthy and powerful backdoor attack on neural networks ba...

11/14/2018: Verification of Recurrent Neural Networks Through Rule Extraction
The verification problem for neural networks is verifying whether a neur...
