ProgSG: Cross-Modality Representation Learning for Programs in Electronic Design Automation

05/18/2023
by   Yunsheng Bai, et al.

Recent years have witnessed the growing popularity of domain-specific accelerators (DSAs), such as Google's TPUs, for accelerating various applications such as deep learning, search, and autonomous driving. To facilitate DSA design, high-level synthesis (HLS) is used, which allows a developer to compile a high-level description, in the form of software code in C or C++, into a design in a low-level hardware description language (such as VHDL or Verilog) that is eventually synthesized into a DSA on an ASIC (application-specific integrated circuit) or an FPGA (field-programmable gate array). However, existing HLS tools still require microarchitecture decisions, expressed in terms of pragmas (such as directives for parallelization and pipelining). To enable more people to design DSAs, it is desirable to automate such decisions with the help of deep learning that predicts the quality of HLS designs. This requires a deeper understanding of the program, which is a combination of the original code and the pragmas. Naturally, these programs can be considered as sequence data, for which large language models (LLMs) can help. In addition, these programs can be compiled and converted into a control data flow graph (CDFG), and the compiler also provides fine-grained alignment between the code tokens and the CDFG nodes. However, existing works either fail to leverage both modalities or combine the two in shallow or coarse ways. We propose ProgSG, which allows the source code sequence modality and the graph modality to interact with each other in a deep and fine-grained way. To alleviate the scarcity of labeled designs, we propose a pre-training method based on a suite of compiler data flow analysis tasks. Experimental results on two benchmark datasets show the superiority of ProgSG over baseline methods that either consider only one modality or combine the two without utilizing the alignment information.
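
To make the role of pragmas concrete, the sketch below shows a small, hypothetical HLS kernel in C annotated with pipelining and unrolling directives in the style of common HLS toolchains (e.g., Vitis HLS). The kernel, pragma choices, and factors are illustrative assumptions rather than examples taken from the paper; each combination of such directives defines one design point whose quality (e.g., latency and resource usage) a model like ProgSG aims to predict.

    /* Hypothetical HLS kernel: dot product annotated with microarchitecture
     * pragmas. The pipeline and unroll directives are the kind of
     * design-space decisions whose quality prediction is discussed above. */
    #define N 1024

    float dot_product(const float a[N], const float b[N]) {
        float sum = 0.0f;
        for (int i = 0; i < N; i++) {
    #pragma HLS PIPELINE II=1   /* pipeline the loop, initiation interval 1 */
    #pragma HLS UNROLL factor=4 /* partially unroll to expose parallelism */
            sum += a[i] * b[i];
        }
        return sum;
    }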

