Generalisation and the Risk–Entropy Curve

02/15/2022
by Dominic Belcher, et al.

In this paper we show that the expected generalisation performance of a learning machine is determined by the distribution of risks (or, equivalently, by its logarithm, a quantity we term the risk entropy) and by the fluctuations in a quantity we call the training ratio. We show that the risk entropy can be empirically inferred for deep neural network models using Markov chain Monte Carlo techniques. Results are presented for several deep neural networks on a variety of problems. The asymptotic behaviour of the risk entropy plays a role analogous to the capacity of the learning machine, but the generalisation performance observed in practice is determined by the behaviour of the risk entropy before the asymptotic regime is reached. This performance depends strongly on the distribution of the data (features and targets), not just on the capacity of the learning machine.
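The abstract does not spell out how the risk entropy is estimated, but a common reading is the log of the prior probability mass of parameters whose empirical risk falls below a threshold. The sketch below is a hypothetical illustration on a toy linear model: it uses plain Monte Carlo sampling from a Gaussian prior (the paper's MCMC machinery would be needed to probe much smaller thresholds), and every name and dataset here is an assumption, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (hypothetical stand-in for a real dataset).
X = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=100)

def risk(w):
    """Empirical mean-squared-error risk of parameter vector w."""
    return np.mean((X @ w - y) ** 2)

# Draw parameter vectors from a Gaussian prior and record their risks.
samples = rng.normal(size=(20000, 5))
risks = np.array([risk(w) for w in samples])

def risk_entropy(eps):
    """Log of the estimated prior probability mass with risk <= eps.

    Simple Monte Carlo estimate; MCMC (as in the paper) would be
    required once this mass becomes too small to hit by chance."""
    frac = np.mean(risks <= eps)
    return np.log(frac) if frac > 0 else -np.inf

for eps in (5.0, 2.0, 1.0):
    print(f"risk entropy at eps={eps}: {risk_entropy(eps):.3f}")
```

As the threshold shrinks, the estimated risk entropy decreases, mirroring the idea that low-risk regions of parameter space occupy exponentially small prior mass.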
