On the Viability of using LLMs for SW/HW Co-Design: An Example in Designing CiM DNN Accelerators

06/12/2023
by Zheyu Yan, et al.

Deep Neural Networks (DNNs) have demonstrated impressive performance across a wide range of tasks. However, deploying DNNs on edge devices is challenging due to stringent power and computational budgets. An effective solution is software-hardware (SW-HW) co-design, which jointly tailors DNN models and hardware architectures so that they make the best use of available resources. Traditional SW-HW co-design, however, suffers from slow optimization because its optimizers start without any heuristic knowledge, a limitation known as the “cold start” problem. In this study, we present a novel approach that leverages Large Language Models (LLMs) to address this issue. By injecting the abundant prior knowledge of pre-trained LLMs into the co-design optimization process, we bypass the cold start problem and substantially accelerate the design process, achieving a 25x speedup. This advancement paves the way for the rapid and efficient deployment of DNNs on edge devices.
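The core idea — warm-starting a co-design search with candidates proposed from prior knowledge rather than random samples — can be sketched as follows. This is a minimal illustrative toy, not the paper's method: the design space, cost model, and `llm_seed_designs` stand-in (hard-coded here in place of actual LLM prompting) are all assumptions for demonstration.

```python
import random

# Hypothetical CiM accelerator design space (parameter names are
# illustrative, not taken from the paper): crossbar array size,
# ADC precision, and a DNN channel-width multiplier.
DESIGN_SPACE = {
    "xbar_size": [64, 128, 256, 512],
    "adc_bits": [4, 6, 8],
    "width_mult": [0.5, 0.75, 1.0],
}

def cost(design):
    """Toy proxy objective (latency * energy / accuracy); lower is better."""
    latency = 1e4 / design["xbar_size"]
    energy = design["xbar_size"] * design["adc_bits"] * 0.01
    accuracy = 0.6 + 0.05 * design["adc_bits"] * design["width_mult"]
    return latency * energy / accuracy

def random_design(rng):
    """Cold start: sample a design uniformly at random."""
    return {k: rng.choice(v) for k, v in DESIGN_SPACE.items()}

def llm_seed_designs():
    """Stand-in for LLM-proposed starting points. In the paper's setting,
    a pre-trained LLM would be prompted with the task description and
    return promising configurations; here they are hard-coded."""
    return [
        {"xbar_size": 256, "adc_bits": 6, "width_mult": 1.0},
        {"xbar_size": 128, "adc_bits": 8, "width_mult": 0.75},
    ]

def search(seeds, budget, rng):
    """Greedy random-mutation search starting from the given seed designs."""
    best = min(seeds, key=cost)
    for _ in range(budget):
        cand = dict(best)
        key = rng.choice(list(DESIGN_SPACE))
        cand[key] = rng.choice(DESIGN_SPACE[key])
        if cost(cand) < cost(best):
            best = cand
    return best

rng = random.Random(0)
cold = search([random_design(rng) for _ in range(2)], budget=20, rng=rng)
warm = search(llm_seed_designs(), budget=20, rng=rng)
print("cold-start best cost:", round(cost(cold), 2))
print("warm-start best cost:", round(cost(warm), 2))
```

The warm start matters because the optimizer begins from configurations that already encode domain heuristics, so fewer evaluation iterations are spent escaping poor regions of the design space.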
