Batch Prompting: Efficient Inference with Large Language Model APIs

01/19/2023
by Zhoujun Cheng, et al.

Performing inference on hundreds of thousands of samples with large language models (LLMs) can be computationally and financially costly. We propose batch prompting, a simple alternative prompting approach that enables the LLM to run inference in batches instead of one sample at a time. Our method reduces both token and time costs while retaining downstream performance. We theoretically demonstrate that, under a few-shot in-context learning setting, inference costs decrease almost inversely with the number of samples in each batch. We extensively validate the effectiveness of batch prompting on ten datasets across commonsense QA, arithmetic reasoning, and NLI/NLU: batch prompting significantly reduces the LLM (Codex) inference token and time costs (by up to 5× with six samples per batch) while achieving better or comparable performance. Our analysis shows that both the number of samples in each batch and the complexity of the tasks affect its performance. Furthermore, batch prompting can be applied across different LLMs and reasoning methods.
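
The savings come from amortizing the shared few-shot context across a batch. In rough, illustrative notation (not the paper's exact symbols): if the in-context demonstrations take C tokens and each sample takes about L tokens, answering N samples one at a time costs roughly N(C + L) prompt tokens, whereas grouping b samples per call costs roughly (N/b)(C + bL) = NC/b + NL. The dominant demonstration cost shrinks by a factor of b, which is the near-inverse decrease claimed above.

Below is a minimal Python sketch of the batching mechanics, assuming an indexed Q[i]/A[i] prompt template in the spirit of the paper. The template details, the call_llm placeholder, and all function names here are illustrative assumptions, not the authors' released code.

import re

def build_batch_prompt(exemplar_batches, test_questions):
    # Demonstrations are themselves batched: each block lists b questions
    # Q[1..b] followed by their b answers A[1..b].
    parts = []
    for batch in exemplar_batches:
        for i, (q, _) in enumerate(batch, 1):
            parts.append(f"Q[{i}]: {q}")
        for i, (_, a) in enumerate(batch, 1):
            parts.append(f"A[{i}]: {a}")
        parts.append("")
    # The test batch supplies only questions; the model is expected to
    # emit the matching A[1..b] answers in a single completion.
    for i, q in enumerate(test_questions, 1):
        parts.append(f"Q[{i}]: {q}")
    return "\n".join(parts)

def parse_batch_answers(completion, b):
    # Recover the b answers from one response by their indices.
    found = {int(m.group(1)): m.group(2).strip()
             for m in re.finditer(r"A\[(\d+)\]:\s*(.*)", completion)}
    return [found.get(i) for i in range(1, b + 1)]

def call_llm(prompt):
    # Hypothetical placeholder: substitute any text-completion API call.
    raise NotImplementedError

def batch_infer(exemplar_batches, questions, b=6):
    answers = []
    for start in range(0, len(questions), b):
        chunk = questions[start:start + b]
        completion = call_llm(build_batch_prompt(exemplar_batches, chunk))
        answers.extend(parse_batch_answers(completion, len(chunk)))
    return answers

With b = 6 (the setting behind the 5× figure above), each API call answers six samples, so the per-sample share of the demonstration tokens drops roughly six-fold. In practice the usable batch size is capped by the model's context window and, per the analysis mentioned above, by task complexity.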
