Duality for Neural Networks through Reproducing Kernel Banach Spaces

11/09/2022
by   Len Spek, et al.

Reproducing kernel Hilbert spaces (RKHS) have been a very successful tool in various areas of machine learning. Recently, Barron spaces have been used to prove bounds on the generalisation error for neural networks. Unfortunately, Barron spaces cannot be understood in terms of RKHS due to the strong nonlinear coupling of the weights. We show that this can be resolved by using the more general reproducing kernel Banach spaces (RKBS). The resulting class of integral RKBSs can be understood as an infinite union of RKHSs. As an RKBS is not a Hilbert space, it is not its own dual. However, we show that its dual space is again an RKBS in which the roles of the data and the parameters are interchanged, so that the two form an adjoint pair of RKBSs with a reproducing property in the dual space. This allows us to construct a saddle point problem for neural networks, which can be used throughout the field of primal-dual optimisation.
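To fix notation, a minimal sketch of the objects involved, following the two-layer setting common in the Barron-space literature (the symbols $\sigma$, $\mu$, $\pi$, $\Omega$ below are illustrative and not taken from the paper):

```latex
% A two-layer network as an integral over parameters (w,b),
% weighted by a signed measure \mu, with Barron-type norm:
f_\mu(x) = \int_{\Omega} \sigma\bigl(\langle w, x\rangle + b\bigr)\,\mathrm{d}\mu(w,b),
\qquad
\|f\|_{\mathcal{B}} = \inf_{\mu:\, f_\mu = f} \|\mu\|_{\mathrm{TV}}.

% Fixing instead a probability measure \pi on the parameter space
% yields an RKHS with kernel
K_\pi(x, x') = \int_{\Omega}
  \sigma\bigl(\langle w, x\rangle + b\bigr)\,
  \sigma\bigl(\langle w, x'\rangle + b\bigr)\,\mathrm{d}\pi(w,b),

% which is the sense in which the integral RKBS can be viewed
% as a union of RKHSs over the choice of \pi.
```

The nonlinear coupling mentioned in the abstract enters because the relevant measure over the inner weights $(w,b)$ is itself part of the learned model, so no single fixed kernel $K_\pi$ captures the whole space.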


