Like What You Like: Knowledge Distill via Neuron Selectivity Transfer

07/05/2017
by   Zehao Huang, et al.

Although deep neural networks have demonstrated extraordinary power in various applications, their superior performance comes at the expense of high storage and computational costs. Consequently, the acceleration and compression of neural networks have attracted much attention recently. Knowledge Transfer (KT), which aims at training a smaller student network by transferring knowledge from a larger teacher model, is one of the popular solutions. In this paper, we propose a novel knowledge transfer method by treating it as a distribution matching problem. In particular, we match the distributions of neuron selectivity patterns between the teacher and student networks. To achieve this goal, we devise a new KT loss function that minimizes the Maximum Mean Discrepancy (MMD) metric between these distributions. Combined with the original loss function, our method can significantly improve the performance of student networks. We validate the effectiveness of our method across several datasets, and further combine it with other KT methods to explore the best possible results.
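The abstract describes matching distributions of per-neuron activation patterns with an MMD loss. Below is a minimal sketch of what such a loss could look like in PyTorch, assuming a linear kernel for readability (the paper also discusses polynomial and Gaussian kernels); the function name `nst_mmd_loss` and the way features are passed in are illustrative, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def nst_mmd_loss(feat_t, feat_s):
    """Squared MMD between teacher and student neuron selectivity patterns.

    feat_t, feat_s: feature maps of shape (N, C, H, W), with matching
    spatial size but possibly different channel counts. Each channel's
    flattened, L2-normalized spatial activation map is treated as one
    sample from the distribution of neuron selectivity patterns.
    """
    def selectivity(feat):
        n, c, h, w = feat.shape
        f = feat.view(n, c, h * w)          # one pattern per neuron (channel)
        return F.normalize(f, p=2, dim=2)   # L2-normalize each spatial map

    ft, fs = selectivity(feat_t), selectivity(feat_s)

    # Linear-kernel MMD^2: mean pairwise inner products within and across sets
    k_tt = torch.bmm(ft, ft.transpose(1, 2)).mean()
    k_ss = torch.bmm(fs, fs.transpose(1, 2)).mean()
    k_ts = torch.bmm(ft, fs.transpose(1, 2)).mean()
    return k_tt + k_ss - 2 * k_ts
```

In training, this term would be added to the student's ordinary task loss (e.g. cross-entropy on the labels), weighted by a hyperparameter, so the student both fits the data and matches the teacher's selectivity distributions.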
