Caching with Time Domain Buffer Sharing

by Wei Chen, et al.

In this paper, storage-efficient caching based on time domain buffer sharing is considered. The caching policy allows a user to determine whether, and for how long, it should cache a content item according to a prediction of its random request time, also referred to as the request delay information (RDI). In particular, the aim is to maximize the caching gain for communications while limiting the storage cost. To achieve this goal, a queueing theoretic model for caching with infinite buffers is first formulated, in which Little's law is adopted to obtain the tradeoff between the hit ratio and the average buffer consumption. When there exist multiple content classes with different RDIs, the storage efficiency is further optimized by carefully allocating the storage cost. For more practical finite-buffer caching, a G/GI/L/0 queue model is formulated, in which a diffusion approximation and the Erlang-B formula are adopted to determine the buffer overflow probability and the corresponding hit ratio. The optimal hit ratio is shown to be limited by the demand probability and the buffer size for large and small buffers, respectively. In practice, a user may exploit probabilistic caching with a random maximum caching time, or arithmetic caching that needs no content arrival statistics, to efficiently harvest content files over the air.
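The finite-buffer analysis above can be illustrated numerically. The sketch below, with illustrative parameter values (the arrival rate, mean caching time, and buffer size are assumptions, not values from the paper), uses Little's law to obtain the offered load and the standard Erlang-B recursion to compute the buffer overflow probability for an L-slot buffer:

```python
def erlang_b(servers: int, load: float) -> float:
    """Erlang-B blocking probability via the stable recursion
    B(0, a) = 1,  B(k, a) = a*B(k-1, a) / (k + a*B(k-1, a))."""
    b = 1.0
    for k in range(1, servers + 1):
        b = load * b / (k + load * b)
    return b

# Little's law: average buffer occupancy = arrival rate x mean caching time
arrival_rate = 2.0      # content arrivals per unit time (illustrative)
mean_cache_time = 3.0   # average caching duration (illustrative)
offered_load = arrival_rate * mean_cache_time

buffer_size = 8         # number of buffer slots (L in the G/GI/L/0 model)
overflow_prob = erlang_b(buffer_size, offered_load)
admit_ratio = 1.0 - overflow_prob  # fraction of items that fit in the buffer

print(f"offered load: {offered_load:.1f}")
print(f"overflow probability: {overflow_prob:.4f}")
print(f"admission ratio: {admit_ratio:.4f}")
```

A cached item contributes a hit only if it was admitted, so the overflow probability directly caps the achievable hit ratio: enlarging the buffer drives the overflow probability toward zero, after which the demand probability becomes the binding limit, matching the large/small-buffer regimes described above.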


