Dataflow Aware Mapping of Convolutional Neural Networks Onto Many-Core Platforms With Network-on-Chip Interconnect

06/18/2020
by   Andreas Bytyn, et al.

Machine intelligence, especially using convolutional neural networks (CNNs), has become a major research area in recent years. Increasingly sophisticated hardware accelerators have been proposed that exploit, e.g., sparsity in computations and reduced-precision arithmetic to scale down energy consumption. However, future platforms require more than just energy efficiency: scalability is becoming an increasingly important factor. The effort required for physical implementation grows with the size of the accelerator, making it more difficult to meet target constraints. Many-core platforms consisting of several homogeneous cores can alleviate these physical-implementation limitations, at the expense of an increased dataflow-mapping effort. While the dataflow in CNNs is deterministic and can therefore be optimized offline, finding a mapping scheme that minimizes both runtime and off-chip memory accesses is challenging, and becomes even more complex when an interconnect system is involved. This work presents an automated mapping strategy, starting at the single-core level, with distinct optimization targets for minimal runtime and minimal off-chip memory accesses. The strategy is then extended to a suitable many-core mapping scheme and evaluated using a scalable system-level simulation with a network-on-chip interconnect. A design space exploration is performed by mapping the well-known CNNs AlexNet and VGG-16 onto platforms of varying core count and computational power per core in order to investigate the trade-offs. Our mapping strategy and system setup are scaled from a single core up to 128 cores, thereby showing the limits of the selected approach.
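The kind of offline design-space exploration the abstract describes can be sketched as follows. This is a minimal illustrative model, not the paper's actual formulation: a single convolutional layer is tiled along its output channels across a set of homogeneous cores, a hypothetical cost model estimates runtime (MAC operations per round of parallel tiles) and off-chip traffic (weights plus re-fetched inputs per tile), and the Pareto-optimal tilings with respect to both objectives are kept. All parameter names and formulas here are assumptions for illustration.

```python
# Illustrative sketch of dataflow-mapping exploration for one CNN layer.
# The cost model below is a simplifying assumption, not the paper's model.

def evaluate_mapping(out_ch, in_ch, out_hw, k, cores, tile_ch):
    """Estimate (runtime proxy, off-chip traffic proxy) for one channel tiling.

    out_ch/in_ch: output/input channels, out_hw: output width=height,
    k: kernel size, cores: homogeneous core count, tile_ch: channels per tile.
    """
    tiles = out_ch // tile_ch                      # number of output-channel tiles
    macs_total = out_ch * in_ch * out_hw * out_hw * k * k
    macs_per_tile = macs_total // tiles
    rounds = -(-tiles // cores)                    # ceil: serialized rounds of tiles
    runtime = rounds * macs_per_tile               # assumes perfect intra-round parallelism
    weights = tile_ch * in_ch * k * k * tiles      # weights loaded once per tile
    inputs = in_ch * out_hw * out_hw * tiles       # inputs re-fetched for every tile
    return runtime, weights + inputs

def best_mappings(out_ch, in_ch, out_hw, k, cores):
    """Exhaustively search divisor tile sizes; keep Pareto-optimal mappings."""
    candidates = []
    for tile_ch in range(1, out_ch + 1):
        if out_ch % tile_ch:
            continue                               # only tilings that divide evenly
        rt, mem = evaluate_mapping(out_ch, in_ch, out_hw, k, cores, tile_ch)
        candidates.append((tile_ch, rt, mem))
    # A mapping is Pareto-optimal if no other mapping is at least as good
    # in both runtime and off-chip traffic.
    return [c for c in candidates
            if not any(o[1] <= c[1] and o[2] <= c[2] and o != c
                       for o in candidates)]
```

For a small layer, `best_mappings(64, 3, 32, 3, 4)` returns the tilings that trade extra runtime for fewer input re-fetches, mirroring the runtime-versus-off-chip-access trade-off the paper's automated strategy optimizes at much larger scale.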

