Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations

by Alexander Immer et al.

Data augmentation is commonly applied to improve the performance of deep learning by enforcing the knowledge that certain transformations on the input preserve the output. Currently, the correct data augmentation is chosen by human effort and costly cross-validation, which makes it cumbersome to apply to new datasets. We develop a convenient gradient-based method for selecting the data augmentation. Our approach relies on phrasing data augmentation as an invariance in the prior distribution and learning it using Bayesian model selection, which has been shown to work in Gaussian processes, but not yet for deep neural networks. We use a differentiable Kronecker-factored Laplace approximation to the marginal likelihood as our objective, which can be optimised without human supervision or validation data. We show that our method can successfully recover invariances present in the data, and that this improves generalisation on image datasets.
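The core idea — scoring a candidate invariance by its marginal likelihood rather than by validation accuracy — can be illustrated on a toy problem. The sketch below is my own illustration, not the authors' code: it uses Bayesian linear regression, where the Laplace approximation to the marginal likelihood is exact (no Kronecker factorisation or neural network involved), and the function name `laplace_log_marglik`, the toy data, and all parameter values are made up for the example. It compares a raw feature map against a sign-invariant one on data whose target truly is invariant to sign flips; the invariant model attains the higher marginal likelihood, which is the model-selection signal the paper exploits.

```python
import numpy as np

def laplace_log_marglik(X, y, prior_prec=1.0, noise_prec=4.0):
    """Laplace approximation to the log marginal likelihood.

    For Bayesian linear regression (Gaussian prior and likelihood) the
    Laplace approximation is exact, so this toy cleanly illustrates the
    marginal-likelihood objective used in the paper.
    """
    n, d = X.shape
    # Hessian of the negative log joint density at the MAP estimate.
    H = noise_prec * X.T @ X + prior_prec * np.eye(d)
    theta_map = noise_prec * np.linalg.solve(H, X.T @ y)
    resid = y - X @ theta_map
    # Log joint density log p(y | theta_map) + log p(theta_map).
    log_joint = (
        0.5 * n * np.log(noise_prec / (2 * np.pi))
        - 0.5 * noise_prec * resid @ resid
        + 0.5 * d * np.log(prior_prec / (2 * np.pi))
        - 0.5 * prior_prec * theta_map @ theta_map
    )
    _, logdet_H = np.linalg.slogdet(H)
    # Laplace: log p(y) ~= log joint at MAP + (d/2) log 2*pi - (1/2) log det H.
    return log_joint + 0.5 * d * np.log(2 * np.pi) - 0.5 * logdet_H

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 * np.abs(x) + 0.5 * rng.normal(size=200)  # target is sign-invariant

# Two candidate models: raw features vs. a sign-invariant feature map.
ml_raw = laplace_log_marglik(x[:, None], y)
ml_inv = laplace_log_marglik(np.abs(x)[:, None], y)
print(ml_inv > ml_raw)  # the invariant model wins without any validation data
```

In the paper, this comparison is made continuous: the invariance is parametrised (e.g. by augmentation-distribution parameters), and the differentiable Laplace approximation lets those parameters be optimised by gradient ascent on the marginal likelihood alongside the network weights.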



Related papers:

- Last Layer Marginal Likelihood for Invariance Learning
- Deep invariant networks with differentiable augmentation layers
- Learning robust visual representations using data augmentation invariance
- Saliency Map Based Data Augmentation
- Faster AutoAugment: Learning Augmentation Strategies using Backpropagation
- CADDA: Class-wise Automatic Differentiable Data Augmentation for EEG Signals
