The Expressivity and Training of Deep Neural Networks: toward the Edge of Chaos?

10/11/2019
by Gege Zhang, et al.

Expressivity is one of the most significant issues in assessing neural networks. In this paper, we provide a quantitative analysis of expressivity based on dynamic models, employing Hilbert space to analyze convergence and criticality. From the feature mappings of several widely used activation functions, expanded in Hermite polynomials, we found sharp declines or even saddle points in the feature space that stagnate information transfer in deep neural networks. We then present an activation-function design based on the Hermite polynomials for better utilization of the spatial representation. Moreover, we analyze information transfer in deep neural networks, emphasizing the convergence problem caused by a mismatch between the input and the topological structure. We also study the effects of input perturbations and regularization operators on critical expressivity. Finally, we verify the proposed method on multivariate time-series prediction. The results show that the optimized DeepESN provides higher predictive performance, especially for long-term prediction. Our theoretical analysis reveals that deep neural networks use spatial domains for information representation and evolve toward the edge of chaos as depth increases. In actual training, whether a particular network can ultimately reach that edge depends on its ability to overcome convergence problems and pass information through to the required network depth.
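As a minimal illustration of the Hermite feature mapping the abstract refers to — a sketch under standard assumptions, not the paper's implementation — one can expand an activation function in the probabilists' Hermite basis under a Gaussian input, X ~ N(0, 1), and inspect how quickly the coefficients decay. Rapidly decaying coefficients mean the activation passes little higher-order feature content, the kind of stagnation the abstract discusses:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Gauss-Hermite quadrature in the probabilists' convention:
# integral of f(x) * exp(-x^2/2) dx  ≈  sum(w_i * f(x_i))
nodes, weights = hermegauss(200)
GAUSS_NORM = math.sqrt(2.0 * math.pi)  # normalizing constant of N(0, 1)

def hermite_coeffs(f, num_terms):
    """Coefficients c_k = E[f(X) He_k(X)] / k! for X ~ N(0, 1),
    where He_k is the k-th probabilists' Hermite polynomial."""
    fx = f(nodes)
    coeffs = np.empty(num_terms)
    for k in range(num_terms):
        basis = np.zeros(k + 1)
        basis[k] = 1.0                       # selects He_k in the series
        hek = hermeval(nodes, basis)
        expectation = np.sum(weights * fx * hek) / GAUSS_NORM
        coeffs[k] = expectation / math.factorial(k)
    return coeffs

relu = lambda x: np.maximum(x, 0.0)
c_relu = hermite_coeffs(relu, 8)
c_tanh = hermite_coeffs(np.tanh, 8)

print("ReLU coefficients:", np.round(c_relu, 4))
print("tanh coefficients:", np.round(c_tanh, 4))
```

Since tanh is odd, its even-order coefficients vanish exactly, while ReLU keeps a nonzero constant and even-order tail; comparing such spectra is one way to reason about which parts of the feature space an activation can populate.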
