Hardware Synthesis of State-Space Equations; Application to FPGA Implementation of Shallow and Deep Neural Networks
Shallow and deep neural networks (NNs) have found broad application in biomedical engineering, image processing, computer vision, and speech recognition. Many researchers have developed hardware accelerators, including field-programmable gate arrays (FPGAs), to implement high-performance, energy-efficient NNs. However, the hardware architecture design process must typically be repeated for each NN and is time-consuming; a systematic way to design, implement, and optimize NNs is therefore in high demand. This paper presents a systematic approach to implementing state-space models at the register-transfer level (RTL), with particular focus on NN implementation. The proposed design flow is based on the iterative nature of state-space models and the analogy between state-space formulations and finite-state machines. The method applies to linear or nonlinear and time-varying or time-invariant systems. It can implement either intrinsically iterative systems (widely used in domains such as signal processing, numerical analysis, computer arithmetic, and control engineering) or systems that can be rewritten in equivalent iterative forms. The implementation of recurrent NNs, such as long short-term memory (LSTM) networks, which have intrinsic state-space forms, is another major application of this framework. As a case study, it is shown that state-space systems can be used for the systematic implementation and optimization of NNs, viewed as nonlinear, time-varying dynamic systems. RTL code-generation software is also provided online, simplifying the automatic generation of NNs of arbitrary size.
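To illustrate the state-space view the abstract describes, the sketch below (not the paper's tool, and all names such as `run_state_space`, `lstm_f`, and `lstm_g` are illustrative assumptions) evaluates a discrete-time system x[k+1] = f(x[k], u[k], k), y[k] = g(x[k], u[k], k) as a finite-state-machine-style loop, and shows how an LSTM cell fits that form with state (h, c):

```python
import numpy as np

def run_state_space(f, g, x0, inputs):
    """Iterate a (possibly nonlinear, time-varying) state-space model.

    f: next-state function  x[k+1] = f(x[k], u[k], k)
    g: output function      y[k]   = g(x[k], u[k], k)
    This sequential update mirrors the structure an FSM realizes in RTL:
    registers hold x, combinational logic computes f and g each cycle.
    """
    x, outputs = x0, []
    for k, u in enumerate(inputs):
        outputs.append(g(x, u, k))
        x = f(x, u, k)
    return outputs

# Example: an LSTM cell has an intrinsic state-space form with state
# x = (h, c). The weights below are random placeholders for illustration.
rng = np.random.default_rng(0)
n_in, n_h = 4, 8
W = rng.standard_normal((4 * n_h, n_in + n_h)) * 0.1  # stacked gate weights
b = np.zeros(4 * n_h)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_f(x, u, k):
    h, c = x
    z = W @ np.concatenate([u, h]) + b
    i, f_, o, g_ = np.split(z, 4)          # input, forget, output, cell gates
    c_next = sigmoid(f_) * c + sigmoid(i) * np.tanh(g_)
    h_next = sigmoid(o) * np.tanh(c_next)
    return h_next, c_next

def lstm_g(x, u, k):
    return x[0]  # the output is the hidden state h

x0 = (np.zeros(n_h), np.zeros(n_h))
seq = [rng.standard_normal(n_in) for _ in range(5)]
ys = run_state_space(lstm_f, lstm_g, x0, seq)
print(ys[-1].shape)  # (8,)
```

In an RTL realization, the state tuple would map to registers and the body of `lstm_f`/`lstm_g` to combinational (or pipelined) datapath logic updated once per iteration, which is the analogy between state-space formulations and finite-state machines that the design flow exploits.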