Applications of Koopman Mode Analysis to Neural Networks

by Iva Manojlović, et al.

We consider the training process of a neural network as a dynamical system acting on the high-dimensional weight space. Each epoch is one application of the map induced by the optimization algorithm and the loss function. Using this induced map, we can define observables on the weight space and measure their evolution, which is governed by the Koopman operator associated with the induced dynamical system. We use the spectrum and modes of the Koopman operator to analyze this system. Our methods can help to, a priori, determine the network depth; detect a bad initialization of the network weights, allowing a restart before training for too long; and speed up the training process. Additionally, our methods help enable noise rejection and improve robustness. We show how the Koopman spectrum can be used to determine the number of layers required for the architecture, and how the convergence or non-convergence of the training process can be elucidated by monitoring the spectrum; in particular, the existence of eigenvalues clustering around 1 determines when to terminate the learning process. We also show how Koopman modes can be used to selectively prune the network and thus speed up the training procedure. Finally, we show that loss functions based on negative Sobolev norms allow for the reconstruction of a multi-scale signal polluted by very large amounts of noise.
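The spectrum described above can be estimated from weight snapshots with standard Koopman/Dynamic Mode Decomposition machinery. The sketch below is not the paper's code: it is a minimal, generic exact-DMD computation on a toy linear "training map", where the function name `dmd_spectrum`, the snapshot layout, and the toy dynamics are all illustrative assumptions. It shows how eigenvalues of the fitted operator can be monitored; eigenvalues approaching 1 in magnitude would indicate nearly stationary directions, i.e., a training process close to a fixed point.

```python
import numpy as np

def dmd_spectrum(snapshots, rank=None):
    """Estimate Koopman eigenvalues/modes via exact DMD.

    snapshots: (d, T) array whose columns are flattened weight
    vectors after successive epochs (illustrative layout).
    """
    # Pair up consecutive snapshots: Y ≈ A X for the epoch map A.
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    if rank is not None:  # optional rank truncation
        U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    # Projected operator: A_tilde = U* Y V S^{-1}
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    # Exact DMD modes lifted back to weight space.
    modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W
    return eigvals, modes

# Toy linear "training map": 50 weights decaying at 5 distinct rates,
# so the trajectory has exact rank 5 and DMD recovers the rates.
rng = np.random.default_rng(0)
lams = np.array([0.95, 0.8, 0.6, 0.4, 0.2])
A = np.diag(np.repeat(lams, 10))
w0 = rng.standard_normal(50)
traj = np.stack([np.linalg.matrix_power(A, k) @ w0 for k in range(20)], axis=1)

eigvals, modes = dmd_spectrum(traj, rank=5)
print(np.sort(np.abs(eigvals))[::-1])  # dominant magnitudes, largest first
```

In a monitoring loop, one would recompute the spectrum over a sliding window of epochs and stop (or restart from a new initialization) based on where the dominant eigenvalues sit relative to 1.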


