A Comprehensive Study on Post-Training Quantization for Large Language Models

03/15/2023
by Zhewei Yao, et al.

Post-training quantization (PTQ) has recently been shown to be a promising method for reducing the memory consumption and/or compute cost of large language models (LLMs). However, a comprehensive study of the effects of different quantization schemes, different model families, different PTQ methods, different quantization bit precisions, etc., is still missing. In this work, we provide an extensive study of these components across tens of thousands of zero-shot experiments. Our results show that (1) fine-grained quantization and PTQ methods (instead of naive round-to-nearest quantization) are necessary to achieve good accuracy, and (2) higher bits (e.g., 5 bits) with coarse-grained quantization are more powerful than lower bits (e.g., 4 bits) with very fine-grained quantization (whose effective bit precision is similar to 5 bits). We also present recommendations on how to utilize quantization for LLMs of different sizes, and leave suggestions of future opportunities and system work that are not resolved in this work.
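The abstract's distinction between coarse-grained (per-tensor) and fine-grained (group-wise) round-to-nearest quantization can be illustrated with a small sketch. The snippet below is a minimal, hypothetical illustration rather than the paper's code: the function name round_to_nearest_quantize, the chosen group sizes, and the synthetic outlier weights are assumptions for demonstration. It quantizes weights symmetrically to signed integers and compares reconstruction error when one scale covers the whole tensor versus one scale per small group.

```python
import numpy as np

def round_to_nearest_quantize(w, num_bits=4, group_size=None):
    """Symmetric round-to-nearest (RTN) quantization sketch (illustrative only).

    group_size=None -> coarse-grained: a single scale for the whole tensor.
    group_size=k    -> fine-grained: one scale per contiguous group of k weights,
                       adding roughly 16/k extra bits per weight if scales are fp16.
    """
    w = np.asarray(w, dtype=np.float32)
    flat = w.reshape(-1)
    if group_size is None:
        group_size = flat.size                      # one group = per-tensor scale
    pad = (-flat.size) % group_size
    groups = np.pad(flat, (0, pad)).reshape(-1, group_size)

    qmax = 2 ** (num_bits - 1) - 1                  # e.g., 7 for signed 4-bit
    scale = np.abs(groups).max(axis=1, keepdims=True) / qmax
    scale[scale == 0] = 1.0                         # guard against empty groups
    q = np.clip(np.round(groups / scale), -qmax - 1, qmax)
    deq = (q * scale).reshape(-1)[: flat.size].reshape(w.shape)
    return deq

# Example: group-wise scales track outlier weights better than a per-tensor scale.
rng = np.random.default_rng(0)
w = rng.normal(size=4096).astype(np.float32)
w[::512] *= 20                                      # inject a few outlier weights
for gs in (None, 256, 64):
    err = np.mean((w - round_to_nearest_quantize(w, 4, gs)) ** 2)
    print(f"group_size={gs}: 4-bit RTN MSE = {err:.6f}")
```

Under the assumption of an fp16 scale per group of k weights, the effective bit precision is roughly num_bits + 16/k per weight, which is how a very fine-grained 4-bit scheme approaches the storage cost that the abstract compares against 5-bit coarse-grained quantization.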


