FINN: A Framework for Fast, Scalable Binarized Neural Network Inference

12/01/2016
by Yaman Umuroglu, et al.

Research has shown that convolutional neural networks contain significant redundancy, and that high classification accuracy can be obtained even when weights and activations are reduced from floating point to binary values. In this paper, we present FINN, a framework for building fast and flexible FPGA accelerators using a flexible heterogeneous streaming architecture. By utilizing a novel set of optimizations that enable efficient mapping of binarized neural networks to hardware, we implement fully connected, convolutional and pooling layers, with per-layer compute resources tailored to user-provided throughput requirements. On a ZC706 embedded FPGA platform drawing less than 25 W total system power, we demonstrate up to 12.3 million image classifications per second with 0.31 μs latency on the MNIST dataset at 95.8% accuracy, and classification with 283 μs latency on the CIFAR-10 and SVHN datasets at respectively 80.1% and 94.9% accuracy, the fastest classification rates reported to date on these benchmarks.
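The property that makes binarized networks cheap to map to FPGA logic is that a dot product over {-1, +1} weights and activations reduces to an XNOR followed by a popcount, replacing multiply-accumulate units with bitwise operations. The sketch below illustrates that equivalence in plain NumPy; it is an illustrative model of the arithmetic only, not FINN's hardware implementation, and the function names are hypothetical.

```python
import numpy as np

def binarize(x):
    """Map real values to {-1, +1}, the binarization used in BNN-style networks."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def xnor_popcount_dot(w_bits, a_bits):
    """Dot product of two {-1, +1} vectors encoded as bits (+1 -> 1, -1 -> 0).

    Agreeing bit positions contribute +1 and disagreeing positions -1, so the
    dot product equals 2 * popcount(XNOR(w, a)) - n.
    """
    n = w_bits.size
    agree = (~(w_bits ^ a_bits)) & 1   # XNOR: 1 where the bits match
    return 2 * int(agree.sum()) - n

# Check the equivalence against an ordinary integer dot product.
rng = np.random.default_rng(0)
w = binarize(rng.standard_normal(64))
a = binarize(rng.standard_normal(64))
w_bits = (w > 0).astype(np.uint8)
a_bits = (a > 0).astype(np.uint8)

assert int(w.astype(np.int32) @ a.astype(np.int32)) == xnor_popcount_dot(w_bits, a_bits)
```

On an FPGA, the XNORs and the popcount map to lookup tables rather than DSP multipliers, which is what allows the amount of per-layer compute to be scaled up or down to meet the throughput targets mentioned in the abstract.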

