A Graph Deep Learning Framework for High-Level Synthesis Design Space Exploration

11/29/2021
by Lorenzo Ferretti, et al.

The design of efficient hardware accelerators for high-throughput data-processing applications, e.g., deep neural networks, is a challenging task in computer architecture design. In this regard, High-Level Synthesis (HLS) emerges as a solution for fast prototyping of application-specific hardware starting from a behavioral description of the application's computational flow. HLS tools expose optimization directives whose combinations span a large configuration space; the resulting Design-Space Exploration (DSE) aims at identifying Pareto-optimal synthesis configurations, whose exhaustive search is often infeasible due to the dimensionality of the design space and the prohibitive computational cost of the synthesis process. Within this framework, we effectively and efficiently address the design problem by proposing, for the first time in the literature, graph neural networks that jointly predict acceleration performance and hardware costs of a synthesized behavioral specification given optimization directives. The learned model can be used to rapidly approach the Pareto curve by guiding the DSE with performance and cost estimates. The proposed method outperforms traditional HLS-driven DSE approaches by handling programs of arbitrary length and exploiting the invariant properties of the input. We propose a novel hybrid control and data flow graph representation that enables training the graph neural network on specifications of different hardware accelerators; the methodology naturally transfers to unseen data-processing applications as well. Moreover, we show that our approach achieves prediction accuracy comparable with that of commonly used simulators, without access to analytical models of the HLS compiler and the target FPGA, while being orders of magnitude faster. Finally, the learned representation can be exploited for DSE in unexplored configuration spaces by fine-tuning on a small number of samples from the new target domain.
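To make the surrogate-guided DSE loop concrete, the sketch below shows the two generic ingredients the abstract describes: a cheap predictor that estimates (latency, area) for each configuration of optimization directives, and a dominance check that extracts the Pareto front from the explored points. This is a minimal illustration, not the paper's implementation: the paper's predictor is a trained graph neural network over a hybrid control/data flow graph, whereas here a hypothetical closed-form `surrogate` stands in for both the model and the expensive synthesis runs, and the names (`pareto_front`, `guided_dse`, the `unroll` directive) are invented for this example.

```python
from typing import Callable, Dict, List, Tuple

Point = Tuple[float, float]  # (latency, area), both to be minimized


def pareto_front(points: List[Point]) -> List[Point]:
    """Keep every point that no other point dominates (minimization)."""
    return [
        p for p in points
        if not any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in points)
    ]


def guided_dse(
    configs: List[Dict[str, int]],
    surrogate: Callable[[Dict[str, int]], Point],
    budget: int,
) -> List[Point]:
    """Rank candidate directive configurations by the surrogate's estimates
    and 'synthesize' only the most promising ones. In a real flow the
    synthesis step would invoke the HLS tool; here the surrogate stands in
    for it as well."""
    ranked = sorted(configs, key=lambda c: sum(surrogate(c)))
    explored = [surrogate(c) for c in ranked[:budget]]
    return pareto_front(explored)


# Hypothetical cost model: loop unrolling trades latency for area.
surrogate = lambda c: (64.0 / c["unroll"], 10.0 * c["unroll"])
configs = [{"unroll": u} for u in (1, 2, 4, 8)]
print(guided_dse(configs, surrogate, budget=3))
```

Ranking by the plain sum of the two estimates is a deliberate simplification; a practical explorer would iterate, picking new configurations near the current front and optionally fine-tuning the predictor on freshly synthesized samples, as the abstract suggests for unexplored configuration spaces.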

Related research

- COSMOS: Coordination of High-Level Synthesis and Memory Optimization for Hardware Accelerators (12/18/2019)
  Hardware accelerators are key to the efficiency and performance of syste...
- TRIM: A Design Space Exploration Model for Deep Neural Networks Inference and Training Accelerators (05/18/2021)
  There is increasing demand for specialized hardware for training deep ne...
- Enabling Automated FPGA Accelerator Optimization Using Graph Neural Networks (11/17/2021)
  High-level synthesis (HLS) has freed the computer architects from develo...
- LeFlow: Enabling Flexible FPGA High-Level Synthesis of Tensorflow Deep Neural Networks (07/14/2018)
  Recent work has shown that Field-Programmable Gate Arrays (FPGAs) play a...
- ProgSG: Cross-Modality Representation Learning for Programs in Electronic Design Automation (05/18/2023)
  Recent years have witnessed the growing popularity of domain-specific ac...
- Accelerating non-LTE synthesis and inversions with graph networks (11/20/2021)
  Context: The computational cost of fast non-LTE synthesis is one of the ...
- Toward Accurate Platform-Aware Performance Modeling for Deep Neural Networks (12/01/2020)
  In this paper, we provide a fine-grain machine learning-based method, Pe...
