Neural networks behave as hash encoders: An empirical study

01/14/2021
by   Fengxiang He, et al.

The input space of a neural network with ReLU-like activations is partitioned into multiple linear regions, each corresponding to a specific activation pattern of its ReLU-like units. We demonstrate that this partition exhibits the following encoding properties across a variety of deep learning models: (1) determinism: almost every linear region contains at most one training example. We can therefore represent almost every training example by a unique activation pattern, which we parameterize as a neural code; and (2) categorization: according to the neural code, simple algorithms, such as K-Means, K-NN, and logistic regression, can achieve fairly good performance on both training and test data. These encoding properties surprisingly suggest that normal neural networks well-trained for classification behave as hash encoders without any extra effort. In addition, the encoding properties vary across scenarios. Further experiments demonstrate that model size, training time, training sample size, regularization, and label noise all contribute to shaping the encoding properties, with the first three having the dominant impact. We then define an activation hash phase chart to represent the space spanned by model size, training time, training sample size, and the encoding properties, which is divided into three canonical regions: the under-expressive regime, the critically-expressive regime, and the sufficiently-expressive regime. The source code package is available at <https://github.com/LeavesLei/activation-code>.
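To make the idea concrete, the sketch below shows one way to extract such a neural code and use it as a hash code for classification. This is not the authors' implementation (see the linked repository for that); the network architecture, data, and hyperparameters are illustrative assumptions. The code concatenates the 0/1 activation pattern of every hidden ReLU unit, which identifies the linear region an input falls into, and then runs a Hamming-distance K-NN on those binary codes.

```python
# Minimal sketch: extract binary ReLU activation patterns ("neural codes")
# and classify with K-NN on the codes. Architecture and data are assumptions.
import torch
import torch.nn as nn
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

class MLP(nn.Module):
    """A small ReLU network whose hidden activations define the neural code."""
    def __init__(self, in_dim=784, hidden=128, n_classes=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, x):
        h1 = torch.relu(self.fc1(x))
        h2 = torch.relu(self.fc2(h1))
        return self.out(h2), (h1, h2)

def neural_code(model, x):
    """Concatenate the 0/1 activation pattern of every ReLU unit for input x."""
    with torch.no_grad():
        _, hidden = model(x)
    bits = [(h > 0).float() for h in hidden]       # 1 where the ReLU is active
    return torch.cat(bits, dim=1).cpu().numpy()    # shape: (batch, total_units)

# Example usage with random stand-in data; replace with a trained model and a
# real dataset to observe the behaviour described in the abstract.
model = MLP()
x_train, y_train = torch.randn(512, 784), np.random.randint(0, 10, 512)
x_test = torch.randn(64, 784)

codes_train = neural_code(model, x_train)
codes_test = neural_code(model, x_test)

# Hamming-distance K-NN on the binary codes: good accuracy here reflects the
# "categorization" property; distinct codes per example reflect "determinism".
knn = KNeighborsClassifier(n_neighbors=5, metric="hamming")
knn.fit(codes_train, y_train)
pred = knn.predict(codes_test)
```

The same codes could equally be fed to K-Means or logistic regression, as the abstract notes; K-NN with a Hamming metric is used here only because it maps most directly onto treating the activation pattern as a hash code.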
