On Optimal Caching and Model Multiplexing for Large Model Inference

by Banghua Zhu et al.

Large Language Models (LLMs) and other large foundation models have achieved noteworthy success, but their size exacerbates existing resource consumption and latency challenges. In particular, the large-scale deployment of these models is hindered by the significant resource requirements during inference. In this paper, we study two approaches for mitigating these challenges: employing a cache to store previous queries and learning a model multiplexer to choose from an ensemble of models for query processing. Theoretically, we provide an optimal algorithm for jointly optimizing both approaches to reduce the inference cost in both offline and online tabular settings. By combining a caching algorithm, namely Greedy Dual Size with Frequency (GDSF) or Least Expected Cost (LEC), with a model multiplexer, we achieve optimal rates in both offline and online settings. Empirically, simulations show that the combination of our caching and model multiplexing algorithms greatly improves over the baselines, with up to 50× improvement over the baseline when the ratio between the maximum cost and minimum cost is 100. Experiments on real datasets show a 4.3× improvement in FLOPs over the baseline when the ratio for FLOPs is 10, and a 1.8× improvement in latency when the ratio for average latency is 1.85.
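The two ingredients the abstract combines can be illustrated with a minimal sketch: a Least Expected Cost (LEC) style cache that evicts the entry whose estimated recomputation saving (frequency estimate times per-query cost) is smallest, paired with a multiplexer that routes a query to a cheap model when a predictor expects it to suffice. Class and function names, and the simple count-based frequency estimate, are our own illustrative assumptions, not the paper's implementation.

```python
class LECCache:
    """Least-Expected-Cost cache: evicts the stored query whose
    expected saving (frequency estimate * recomputation cost) is lowest.
    The count-based frequency estimate is an illustrative assumption."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}  # query -> cached response
        self.freq = {}   # query -> observed hit count (frequency estimate)
        self.cost = {}   # query -> cost of recomputing the response

    def get(self, query):
        if query in self.store:
            self.freq[query] += 1
            return self.store[query]
        return None  # cache miss

    def put(self, query, response, cost):
        if query not in self.store and len(self.store) >= self.capacity:
            # Evict the entry with the smallest expected cost saving.
            victim = min(self.store, key=lambda q: self.freq[q] * self.cost[q])
            for table in (self.store, self.freq, self.cost):
                del table[victim]
        self.store[query] = response
        self.freq[query] = self.freq.get(query, 0) + 1
        self.cost[query] = cost


def multiplex(query, small_model, large_model, small_suffices):
    """Route to the cheap model when the learned predictor
    (here a stand-in callable) expects it to answer well."""
    if small_suffices(query):
        return small_model(query)
    return large_model(query)
```

On a cache miss, the multiplexer chooses which model pays the recomputation cost, and that cost feeds back into the cache's eviction score; this coupling is what the paper optimizes jointly.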


