
Learning Invariant Weights in Neural Networks

Assumptions about invariances or symmetries in data can significantly increase the predictive power of statistical models. Many commonly used models in machine learning are constrained to respect certain symmetries in the data, such as translation equivariance in convolutional neural networks, and the incorporation of new symmetry types is actively being studied. Yet learning such invariances from the data itself remains an open research problem. It has been shown that the marginal likelihood offers a principled way to learn invariances in Gaussian processes. We propose a weight-space equivalent of this approach: maximizing a lower bound on the marginal likelihood to learn invariances in neural networks, which naturally yields higher-performing models.
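The underlying idea, that the marginal likelihood trades off data fit against model complexity and so can select the right invariance, can be illustrated with a hedged toy sketch. The example below is not the paper's method: it uses closed-form Bayesian linear regression (where the marginal likelihood is exact rather than lower-bounded) on data with a sign-flip symmetry, and a hypothetical parameter `lam` that interpolates between ordinary and group-averaged (invariant) features. The evidence favors the invariant feature map.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data with a sign-flip symmetry: y depends only on |x|.
X = rng.uniform(-3, 3, size=40)
y = X ** 2 + 0.1 * rng.standard_normal(40)

def features(x, lam):
    """Polynomial features, averaged over the sign-flip group {x, -x}.

    lam = 0 -> ordinary features (no invariance);
    lam = 1 -> fully invariant features (uniform average over the group).
    `lam` is an illustrative invariance parameter, not from the paper.
    """
    phi = lambda z: np.column_stack([np.ones_like(z), z, z ** 2, z ** 3])
    return (1 - lam / 2) * phi(x) + (lam / 2) * phi(-x)

def log_marginal_likelihood(Phi, y, noise=0.1, prior=1.0):
    """Exact log evidence of Bayesian linear regression y ~ N(Phi w, noise^2 I),
    with prior w ~ N(0, prior^2 I), computed via the n x n kernel matrix."""
    n = len(y)
    K = prior ** 2 * Phi @ Phi.T + noise ** 2 * np.eye(n)
    _, logdet = np.linalg.slogdet(K)
    return -0.5 * (y @ np.linalg.solve(K, y) + logdet + n * np.log(2 * np.pi))

# "Learn" the invariance by picking lam that maximizes the evidence.
lams = np.linspace(0, 1, 11)
scores = [log_marginal_likelihood(features(X, l), y) for l in lams]
best = lams[int(np.argmax(scores))]
print(best)  # the evidence favors strong sign-flip invariance
```

Because the data are exactly sign-symmetric, the extra odd-polynomial flexibility at `lam = 0` only inflates the complexity (log-determinant) term without improving the fit, so the evidence is maximized near `lam = 1`. The paper's contribution is making this selection tractable for neural networks, where the marginal likelihood is intractable and a variational lower bound is optimized instead.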

