Deep Learning on FPGAs: Past, Present, and Future

02/13/2016
by Griffin Lacey, et al.

The rapid growth of data size and accessibility in recent years has instigated a shift of philosophy in algorithm design for artificial intelligence. Instead of engineering algorithms by hand, the ability to learn composable systems automatically from massive amounts of data has led to ground-breaking performance in important domains such as computer vision, speech recognition, and natural language processing. The most popular class of techniques used in these domains is called deep learning, and it is seeing significant attention from industry. However, these models require incredible amounts of data and compute power to train, and are limited by the need for better hardware acceleration to accommodate scaling beyond current data and model sizes. While the current solution has been to use clusters of graphics processing units (GPUs) as general-purpose processors (GPGPUs), the use of field programmable gate arrays (FPGAs) provides an interesting alternative. Current trends in FPGA design tools have made them more compatible with the high-level software practices common in the deep learning community, making FPGAs more accessible to those who build and deploy models. Because FPGA architectures are flexible, they could also allow researchers to explore model-level optimizations beyond what is possible on fixed architectures such as GPUs. In addition, FPGAs tend to provide high performance per watt of power consumption, which is of particular importance to application scientists interested in large-scale server-based deployment or resource-limited embedded applications. This review examines deep learning and FPGAs from a hardware acceleration perspective, identifying trends and innovations that make these technologies a natural fit, and motivates a discussion on how FPGAs may best serve the needs of the deep learning community moving forward.


