Zebra: Memory Bandwidth Reduction for CNN Accelerators With Zero Block Regularization of Activation Maps

05/02/2022
by Hsu-Tung Shih, et al.

The memory bandwidth between the local buffer and external DRAM has become the speedup bottleneck of CNN hardware accelerators, especially for activation maps. To reduce memory bandwidth, we propose to dynamically learn to prune unimportant blocks with zero block regularization of activation maps (Zebra). This strategy has low computational overhead and can be easily integrated with other pruning methods for better performance. The experimental results show that the proposed method reduces memory bandwidth by 70% for ResNet-18 on Tiny-ImageNet with less than a 1% accuracy drop, and achieves a 2% accuracy gain when combined with Network Slimming.
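The abstract gives no implementation details, so the following is only a rough sketch of the idea it describes: activation maps are partitioned into blocks, and blocks whose magnitude falls below a threshold are zeroed so they need not be transferred to DRAM. The block size, the mean-absolute-value criterion, and the threshold here are hypothetical illustrative choices, not taken from the paper.

```python
import numpy as np

def zero_block_prune(act, block=4, threshold=0.05):
    """Zero out spatial blocks of an activation map (C, H, W) whose
    mean absolute value is below `threshold`. Illustrative only; the
    paper learns which blocks to prune via a regularization term."""
    c, h, w = act.shape
    out = act.copy()
    for i in range(0, h, block):
        for j in range(0, w, block):
            blk = out[:, i:i + block, j:j + block]
            if np.abs(blk).mean() < threshold:
                # Pruned block: all zeros, so it need not be
                # written to (or read back from) external DRAM.
                blk[...] = 0.0
    return out

rng = np.random.default_rng(0)
act = rng.normal(scale=0.05, size=(8, 16, 16)).astype(np.float32)
pruned = zero_block_prune(act, block=4, threshold=0.05)
```

In hardware, a per-block "zero" flag would let the accelerator skip those DRAM transfers entirely, which is where the bandwidth saving comes from.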
