Joint Program and Layout Transformations to enable Convolutional Operators on Specialized Hardware based on Constraint Programming

04/10/2021
by Dennis Rieber, et al.

The success of Deep Artificial Neural Networks (DNNs) in many domains has created a rich body of research on hardware accelerators for compute-intensive DNN operators. However, implementing such operators efficiently with complex hardware intrinsics such as matrix multiply is a task that has not yet been automated gracefully. Solving this task often requires joint program and data layout transformations. First solutions to this problem have been proposed, such as TVM, UNIT, or ISAMIR, which work on a loop-level representation of operators and specify data layout and possible program transformations before the embedding into the operator is performed. This top-down approach creates a tension between exploration range and search space complexity, especially when data layout transformations such as im2col, channel packing, or padding are also explored. In this work, we propose a new approach to this problem. We created a bottom-up method that allows the joint transformation of both computation and data layout based on the found embedding. By formulating the embedding as a constraint satisfaction problem over the scalar dataflow, every possible embedding solution is contained in the search space. Adding further constraints and optimization targets to the solver generates the subset of preferable solutions. An evaluation using the VTA hardware accelerator with the Baidu DeepBench inference benchmark shows that our approach can automatically generate code competitive with reference implementations. Further, we show that dynamically determining the data layout based on intrinsic and workload is beneficial for hardware utilization and performance. In cases where the reference implementation has low hardware utilization due to its fixed deployment strategy, we achieve a geomean speedup of up to 2.813x, while individual operators can improve by as much as 170x.
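
As a rough illustration of the idea only (not the paper's actual formulation, which operates on the full scalar dataflow of the operator), the following Python sketch treats the embedding of a convolution into a fixed GEMM intrinsic as a tiny constraint satisfaction problem: candidate layouts (a channel-only reduction versus an im2col-style fused reduction, with or without channel padding) are enumerated as variable assignments, hard divisibility constraints filter out infeasible embeddings, and a utilization objective selects the preferred solution. The intrinsic shape, the workload, and all identifiers are illustrative assumptions loosely modeled on a VTA-like GEMM core.

```python
from itertools import product
from math import ceil

# Assumed VTA-like GEMM intrinsic: a (1 x BLOCK_IN) x (BLOCK_IN x BLOCK_OUT) product.
INTRINSIC = {"batch": 1, "block_in": 16, "block_out": 16}


def candidate_embeddings(workload):
    """Enumerate CSP variable assignments: which convolution dimensions feed
    the intrinsic's reduction axis, and whether that axis is zero-padded."""
    c_in, k_h, k_w = workload["c_in"], workload["k_h"], workload["k_w"]
    reduction_choices = {
        "channels_only": c_in,            # kernel window handled by outer loops
        "im2col": c_in * k_h * k_w,       # kernel window fused into the reduction axis
    }
    for (layout, red), pad in product(reduction_choices.items(), (False, True)):
        padded = ceil(red / INTRINSIC["block_in"]) * INTRINSIC["block_in"] if pad else red
        yield {"layout": layout, "reduction": padded, "padded": pad,
               "c_out": workload["c_out"]}


def feasible(sol):
    """Hard constraints: the intrinsic axes must tile the workload exactly."""
    return (sol["reduction"] % INTRINSIC["block_in"] == 0
            and sol["c_out"] % INTRINSIC["block_out"] == 0)


def utilization(sol, workload):
    """Optimization target: fraction of issued MACs that do useful work."""
    useful = workload["c_in"] * workload["k_h"] * workload["k_w"]
    if sol["layout"] == "im2col":
        issued = sol["reduction"]
    else:
        issued = sol["reduction"] * workload["k_h"] * workload["k_w"]
    return useful / issued


# Hypothetical early convolution layer: 3 input channels, 64 output channels, 3x3 kernel.
workload = {"c_in": 3, "c_out": 64, "k_h": 3, "k_w": 3}
solutions = [s for s in candidate_embeddings(workload) if feasible(s)]
best = max(solutions, key=lambda s: utilization(s, workload))
print(best)  # the padded im2col layout wins for this low-channel workload
```

For the 3-channel workload above the sketch selects the padded im2col layout over channel padding alone, mirroring the observation that the preferable data layout depends on both the intrinsic and the workload.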

