FPGA Based Accelerator for Neural Networks Computation with Flexible Pipelining

12/28/2021
by Qingyang Yi, et al.

FPGAs are well suited to fixed-point neural network computation due to their high power efficiency and configurability. However, their designs must be carefully refined to achieve high performance with limited hardware resources. We present an FPGA-based neural network accelerator and its optimization framework, which achieves optimal efficiency across various CNN models and FPGA resource budgets. Targeting high throughput, we adopt a layer-wise pipeline architecture for higher DSP utilization. To obtain optimal performance, we also propose a flexible algorithm that allocates balanced hardware resources to each layer, supported by an activation buffer design. Through well-balanced implementations of four CNN models on the ZC706 board, DSP utilization and efficiency both exceed 90%, and the proposed accelerator achieves 2.58x, 1.53x, and 1.35x better performance than the referenced non-pipeline architecture [1] and the pipeline architectures [2] and [3], respectively.
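The layer allocation idea can be illustrated with a short sketch. The snippet below is a hypothetical, simplified take on balancing a layer-wise pipeline, not the authors' actual algorithm: it assigns each layer a DSP share proportional to its MAC count and then spends the remaining budget on the bottleneck stage. The function name, example workloads, and the one-MAC-per-DSP-per-cycle assumption are ours for illustration only.

```python
# Hypothetical sketch of balanced per-layer DSP allocation for a layer-wise
# pipeline (illustrative only; not the paper's algorithm). Throughput of a
# pipeline is set by its slowest stage, so the goal is to equalize per-stage
# cycle counts within a fixed DSP budget.

def allocate_dsps(layer_macs, total_dsps):
    """Return a per-layer DSP count whose stage latencies are roughly equal.

    layer_macs -- multiply-accumulate count per CNN layer
    total_dsps -- DSP budget of the target FPGA (e.g. 900 on a ZC706)
    """
    total_macs = sum(layer_macs)
    # Initial proportional allocation, at least one DSP per layer.
    alloc = [max(1, total_dsps * m // total_macs) for m in layer_macs]

    def stage_cycles(i):
        # Cycles for stage i, assuming one MAC per DSP per cycle.
        return layer_macs[i] / alloc[i]

    # Spend any leftover DSPs on whichever stage is currently the bottleneck.
    while sum(alloc) < total_dsps:
        bottleneck = max(range(len(alloc)), key=stage_cycles)
        alloc[bottleneck] += 1

    return alloc


if __name__ == "__main__":
    macs = [86_704_128, 149_520_384, 112_140_288, 74_760_192]  # made-up layer workloads
    dsps = allocate_dsps(macs, total_dsps=900)
    print(dsps, [round(m / d) for m, d in zip(macs, dsps)])
```

Because every stage ends up with a similar cycle count, no DSP sits idle waiting for a slower neighbor, which is what keeps utilization and efficiency high in a layer-wise pipeline.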

Related research

04/08/2020 - HybridDNN: A Framework for High-Performance Hybrid DNN Accelerator Design and Implementation
To speedup Deep Neural Networks (DNN) accelerator design and enable effe...

02/02/2018 - VIBNN: Hardware Acceleration of Bayesian Neural Networks
Bayesian Neural Networks (BNNs) have been proposed to address the proble...

12/17/2020 - A fully pipelined FPGA accelerator for scale invariant feature transform keypoint descriptor matching
The scale invariant feature transform (SIFT) algorithm is considered a c...

11/04/2022 - An Efficient FPGA-based Accelerator for Deep Forest
Deep Forest is a prominent machine learning algorithm known for its high...

12/15/2021 - N3H-Core: Neuron-designed Neural Network Accelerator via FPGA-based Heterogeneous Computing Cores
Accelerating the neural network inference by FPGA has emerged as a popul...

08/29/2019 - High Performance Scalable FPGA Accelerator for Deep Neural Networks
Low-precision is the first order knob for achieving higher Artificial In...

07/20/2020 - HPIPE: Heterogeneous Layer-Pipelined and Sparse-Aware CNN Inference for FPGAs
We present both a novel Convolutional Neural Network (CNN) accelerator a...
