A mechanism for balancing accuracy and scope in cross-machine black-box GPU performance modeling

04/21/2019
by James D. Stevens, et al.

The ability to model, analyze, and predict the execution time of computations is an important building block for numerous efforts, such as load balancing, performance optimization, and automated performance tuning for high-performance parallel applications. In today's increasingly heterogeneous computing environment, this task must be accomplished efficiently across multiple architectures, including massively parallel coprocessors like GPUs. To address this challenge, we present an approach for constructing customizable, cross-machine performance models for GPU kernels, including a mechanism to automatically and symbolically gather performance-relevant kernel operation counts, a tool for formulating mathematical models using these counts, and a customizable, parameterized collection of benchmark kernels used to fit models to GPUs in a black-box fashion. Our approach empowers a user to manage trade-offs between model accuracy, evaluation speed, and generalizability. A user can define a model and customize the fitting process, making it as simple or complex, and as application-targeted or general-purpose, as desired. As application examples of our approach, we demonstrate both linear and nonlinear models; each example models execution times for multiple variants of a particular computation: two matrix multiplication variants, four Discontinuous Galerkin (DG) differentiation operation variants, and two 2-D first-order finite difference computation variants. For each variant, we present accuracy results on GPUs from multiple vendors and hardware generations. We view this customizable approach as a response to a central question in GPU performance modeling: how can we model GPU performance in a cost-explanatory fashion while maintaining accuracy, evaluation speed, portability, and ease of use? We believe that ease of use, in particular, precludes manual collection of kernel or hardware statistics.
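To illustrate the black-box fitting step described above, the sketch below fits a simple linear execution-time model from per-kernel operation counts and measured benchmark timings, then predicts the time of a new kernel. This is a minimal illustration only, not the paper's actual tooling: the feature choices, the function names, and all numbers are hypothetical, and the paper's approach also supports nonlinear models and symbolic count collection, which are not shown here.

```python
import numpy as np


def fit_linear_model(counts, times):
    """Fit per-operation costs c so that times ~= counts @ c.

    counts : (n_kernels, n_features) array of operation counts per
             benchmark kernel (e.g. fp64 ops, bytes moved) -- hypothetical
             features chosen for illustration.
    times  : (n_kernels,) measured execution times in seconds.
    Returns the least-squares cost vector (seconds per unit of each count).
    """
    c, *_ = np.linalg.lstsq(counts, times, rcond=None)
    return c


def predict_time(costs, kernel_counts):
    """Predict execution time of a new kernel from its operation counts."""
    return float(np.dot(kernel_counts, costs))


# Toy example: three benchmark kernels characterized by
# [fp64 flops, bytes moved to/from global memory] (made-up values).
benchmark_counts = np.array([
    [2.0e9, 1.6e8],   # compute-heavy kernel
    [1.0e8, 8.0e8],   # memory-heavy kernel
    [5.0e8, 4.0e8],   # mixed kernel
])
measured_times = np.array([0.021, 0.0065, 0.0082])  # hypothetical timings

costs = fit_linear_model(benchmark_counts, measured_times)
print("predicted time:", predict_time(costs, np.array([8.0e8, 3.2e8])))
```

In this kind of model, the fitted coefficients play the role of machine-specific per-operation costs, which is what allows the same symbolic operation counts to be reused across GPUs once a new cost vector has been fit on each target machine.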

