Two Instances of Interpretable Neural Network for Universal Approximations
This paper proposes two bottom-up constructions of interpretable neural networks (NNs) for universal approximation: the Triangularly-constructed NN (TNN) and the Semi-Quantized Activation NN (SQANN). Their notable properties are (1) resistance to catastrophic forgetting, (2) a proof of arbitrarily high accuracy on the training dataset, and (3) for an input x, the ability to identify the specific training samples whose activation "fingerprints" are similar to those of x. Users can also identify samples that are out of distribution.
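The fingerprint-based lookup described in property (3) can be illustrated with a minimal sketch: store the activation vector of each training sample, match a new input's activations against them, and flag the input as out of distribution when even the best match is too dissimilar. The function name, the cosine-similarity metric, and the threshold below are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

def fingerprint_lookup(x_act, train_acts, ood_threshold=0.5):
    """Return (index of most similar training sample, similarity),
    or (None, similarity) when the input looks out of distribution.

    Hypothetical sketch: cosine similarity and the 0.5 threshold are
    assumptions for illustration, not the paper's method."""
    # Normalize the input fingerprint and all stored training fingerprints
    x = x_act / np.linalg.norm(x_act)
    t = train_acts / np.linalg.norm(train_acts, axis=1, keepdims=True)
    # Cosine similarity of x against every stored fingerprint
    sims = t @ x
    best = int(np.argmax(sims))
    if sims[best] < ood_threshold:
        return None, float(sims[best])  # no training sample is close enough
    return best, float(sims[best])

# Usage: three stored training fingerprints (toy 2-D activations)
train_acts = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
idx, sim = fingerprint_lookup(np.array([0.9, 0.1]), train_acts)
```

In this toy example the input's activations are closest to the first stored fingerprint, so the lookup returns its index; an input dissimilar to all stored fingerprints would instead return `None`, signalling out-of-distribution.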