Low-Precision Hardware Architectures Meet Recommendation Model Inference at Scale

05/26/2021
by Zhaoxia Deng, et al.

The tremendous success of machine learning (ML) and the unabated growth in ML model complexity have motivated many ML-specific designs in both CPU and accelerator architectures to speed up model inference. While these architectures are diverse, highly optimized low-precision arithmetic is a component shared by most. These architectures indeed often exhibit impressive compute throughput on benchmark ML models. Nevertheless, production models such as the recommendation systems that power Facebook's personalization services are demanding and complex: they must serve billions of users per month responsively, with low latency and high prediction accuracy, despite computations involving many tens of billions of parameters per inference. Do these low-precision architectures work well with our production recommendation systems? They do, but not without significant effort. In this paper we share our search strategies for adapting reference recommendation models to low-precision hardware, our optimizations of low-precision compute kernels, and the design and development of a tool chain that maintains our models' accuracy throughout their lifespan, during which topic trends and users' interests inevitably evolve. Practicing these low-precision technologies helped us save datacenter capacity while deploying models with up to 5X the complexity that could otherwise not be served on traditional general-purpose CPUs. We believe these lessons from the trenches promote better co-design between hardware architecture and software engineering and advance the state of the art of ML in industry.
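The abstract mentions adapting recommendation models to low-precision hardware without showing how. As a minimal, generic illustration of the kind of technique involved, the NumPy sketch below quantizes an embedding table row-wise to 8-bit integers and dequantizes looked-up rows back to fp32. The function names (quantize_rowwise_int8, dequantize_rows) are hypothetical and not taken from the paper; this is a textbook sketch, not Facebook's production kernel.

    import numpy as np

    def quantize_rowwise_int8(emb):
        """Row-wise asymmetric uint8 quantization of an fp32 embedding table.

        Each row gets its own (scale, min) so rows with very different
        magnitudes do not share one coarse quantization grid.
        """
        row_min = emb.min(axis=1, keepdims=True)    # shape (N, 1)
        row_max = emb.max(axis=1, keepdims=True)    # shape (N, 1)
        scale = (row_max - row_min) / 255.0
        scale = np.where(scale == 0.0, 1.0, scale)  # guard all-constant rows
        q = np.clip(np.round((emb - row_min) / scale), 0, 255).astype(np.uint8)
        return q, scale.astype(np.float32), row_min.astype(np.float32)

    def dequantize_rows(q, scale, row_min, indices):
        """Look up rows by index and reconstruct approximate fp32 values."""
        return q[indices].astype(np.float32) * scale[indices] + row_min[indices]

    # Example: quantize a small table, then look up a few rows.
    table = np.random.randn(1000, 64).astype(np.float32)
    q, scale, row_min = quantize_rowwise_int8(table)
    approx = dequantize_rows(q, scale, row_min, np.array([3, 17, 42]))
    print("max abs error:", np.abs(approx - table[[3, 17, 42]]).max())

Storing uint8 values plus small per-row (scale, min) metadata cuts embedding memory roughly 4X relative to fp32, which is one reason row-wise quantization is a standard first step when mapping memory-bound recommendation models onto low-precision hardware.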

Related research

03/01/2018 · WRPN & Apprentice: Methods for Training and Inference using Low-Precision Numerics
Today's high performance deep learning architectures involve large model...

06/21/2023 · Subgraph Stationary Hardware-Software Inference Co-Design
A growing number of applications depend on Machine Learning (ML) functio...

07/08/2021 · First-Generation Inference Accelerator Deployment at Facebook
In this paper, we provide a deep dive into the deployment of inference a...

03/11/2022 · Cross-Layer Approximation For Printed Machine Learning Circuits
Printed electronics (PE) feature low non-recurring engineering costs and...

12/02/2022 · DisaggRec: Architecting Disaggregated Systems for Large-Scale Personalized Recommendation
Deep learning-based personalized recommendation systems are widely used ...

02/27/2021 · ProbLP: A framework for low-precision probabilistic inference
Bayesian reasoning is a powerful mechanism for probabilistic inference i...

10/29/2022 · MinUn: Accurate ML Inference on Microcontrollers
Running machine learning inference on tiny devices, known as TinyML, is ...
