Guess First to Enable Better Compression and Adversarial Robustness

01/10/2020
by Sicheng Zhu, et al.

Machine learning models are generally vulnerable to adversarial examples, in contrast to the robustness of human perception. In this paper, we leverage one of the mechanisms in human recognition and propose a bio-inspired classification framework in which model inference is conditioned on a label hypothesis. We provide a class of training objectives for this framework, along with an information bottleneck regularizer that exploits the fact that label information can be discarded during inference. This framework enables better compression of the mutual information between inputs and latent representations without loss of learning capacity, at the cost of additional but tractable inference complexity. Better compression and the elimination of label information in turn yield better adversarial robustness without loss of natural accuracy, as demonstrated in our experiments.
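To make the "guess first" idea concrete, below is a minimal PyTorch sketch of label-hypothesis-conditioned inference: the encoder receives the input together with an embedded candidate label, scores the pair, and the prediction is the best-scoring hypothesis. The class name, layer sizes, and scalar scoring head are illustrative assumptions, not the paper's exact architecture or training objective.

```python
# Minimal sketch of hypothesis-conditioned inference (assumed architecture,
# not the paper's exact model): score each (input, label-hypothesis) pair
# and predict the label whose hypothesis fits best.
import torch
import torch.nn as nn


class HypothesisConditionedClassifier(nn.Module):
    def __init__(self, input_dim: int, num_classes: int, hidden_dim: int = 128):
        super().__init__()
        self.num_classes = num_classes
        # Embed each label hypothesis so it can condition the encoder.
        self.label_embed = nn.Embedding(num_classes, hidden_dim)
        self.encoder = nn.Sequential(
            nn.Linear(input_dim + hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )
        # Scalar compatibility score for an (input, hypothesis) pair.
        self.score_head = nn.Linear(hidden_dim, 1)

    def score(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        """Score how well input x fits the hypothesized label y."""
        z = self.encoder(torch.cat([x, self.label_embed(y)], dim=-1))
        return self.score_head(z).squeeze(-1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Guess first: evaluate every label hypothesis, then keep the best.
        # Inference cost grows linearly in the number of classes, matching
        # the "tractable inference complexity" noted in the abstract.
        scores = torch.stack(
            [self.score(x, torch.full((x.shape[0],), c, dtype=torch.long))
             for c in range(self.num_classes)],
            dim=-1,
        )
        return scores.argmax(dim=-1)


# Usage: a batch of 4 flattened 28x28 inputs, 10 candidate classes.
model = HypothesisConditionedClassifier(input_dim=784, num_classes=10)
preds = model(torch.randn(4, 784))
print(preds.shape)  # torch.Size([4])
```

Because the hypothesized label is supplied at inference time rather than produced by the representation, the latent code need not carry label information, which is the property the information bottleneck regularizer exploits.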
