AutoJoin: Efficient Adversarial Training for Robust Maneuvering via Denoising Autoencoder and Joint Learning

05/22/2022
by Michael Villarreal, et al.
As a result of increasingly adopted machine learning algorithms and ubiquitous sensors, many 'perception-to-control' systems have been deployed in various settings. For these systems to be trustworthy, we need to improve their robustness, with adversarial training being one approach. In this work, we propose a gradient-free adversarial training technique, called AutoJoin. AutoJoin is a very simple yet effective and efficient approach to produce robust models for image-based autonomous maneuvering. Compared to other SOTA methods, with testing on over 5M perturbed and clean images, AutoJoin achieves significant performance increases, up to the 40% range, while improving on clean performance for almost every dataset tested. In particular, AutoJoin can triple the clean performance improvement compared to the SOTA work by Shen et al. Regarding efficiency, AutoJoin demonstrates strong advantages over other SOTA techniques, saving up to 83% time per training epoch and 90% training data. The core idea of AutoJoin is to use a decoder attachment to the original regression model, creating a denoising autoencoder within the architecture. This allows the tasks 'steering' and 'denoising sensor input' to be jointly learnt, enabling the two tasks to reinforce each other's performance.
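The following is a minimal, hypothetical sketch of the joint architecture described above, not the authors' implementation: a shared encoder feeds both a steering-regression head and a decoder attachment, so the encoder and decoder together form a denoising autoencoder while the regression head predicts steering. Layer sizes, the loss weighting `alpha`, and all names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class JointSteeringDenoiser(nn.Module):
    """Sketch: regression model with a decoder attachment, so 'steering'
    and 'denoising sensor input' are learnt jointly (assumed design)."""

    def __init__(self):
        super().__init__()
        # Shared encoder over (noisy) camera input.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
        )
        # Regression head: predicts a steering angle from encoded features.
        self.steering_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1)
        )
        # Decoder attachment: reconstructs the clean image, forming a
        # denoising autoencoder together with the encoder.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, noisy_img):
        z = self.encoder(noisy_img)
        return self.steering_head(z), self.decoder(z)


def joint_loss(model, noisy_img, clean_img, target_angle, alpha=1.0):
    """Joint objective: steering regression + reconstruction (denoising).
    The weighting alpha is an assumed hyperparameter, not from the paper."""
    pred_angle, recon = model(noisy_img)
    steer_loss = nn.functional.mse_loss(pred_angle.squeeze(-1), target_angle)
    denoise_loss = nn.functional.mse_loss(recon, clean_img)
    return steer_loss + alpha * denoise_loss
```

In this sketch, backpropagating the combined loss updates the shared encoder from both tasks, which is one way the two objectives could reinforce each other as the abstract describes.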

