Enabling the Adoption of Processing-in-Memory: Challenges, Mechanisms, Future Research Directions

by Saugata Ghose et al.

Poor DRAM technology scaling over the course of many years has made DRAM-based main memory an increasingly significant system bottleneck. A major reason for the bottleneck is that data stored within DRAM must be moved across a pin-limited memory channel to the CPU before any computation can take place. This data movement incurs high latency and energy overheads, and the data often cannot benefit from caching in the CPU, making it difficult to amortize those overheads. Modern 3D-stacked DRAM architectures include a logic layer, where compute logic can be integrated underneath multiple layers of DRAM cell arrays within the same chip. Architects can take advantage of the logic layer to perform processing-in-memory (PIM), or near-data processing. In a PIM architecture, the logic layer within DRAM has access to the high internal bandwidth available within 3D-stacked DRAM (which is much greater than the bandwidth available between DRAM and the CPU). Thus, PIM architectures can effectively free up valuable memory channel bandwidth while reducing system energy consumption. A number of important issues arise when we add compute logic to DRAM. In particular, the logic does not have low-latency access to common CPU structures that are essential for modern application execution, such as the virtual memory and cache coherence mechanisms. To ease the widespread adoption of PIM, we ideally would like to maintain traditional virtual memory abstractions and the shared memory programming model. This requires efficient mechanisms that can provide logic in DRAM with access to CPU structures without having to communicate frequently with the CPU. To this end, we propose and evaluate two general-purpose solutions that minimize unnecessary off-chip communication for PIM architectures. We show that both mechanisms improve the performance and energy consumption of many important memory-intensive applications.
