Computing & Signal Processing

Deep neural networks (DNNs) have achieved remarkable results in many application domains, such as image classification and speech recognition. However, processing DNNs demands massive computation and frequent data movement between compute units and memory; this data-movement bottleneck, known as the memory wall problem, dominates both execution time and power consumption.


To overcome the memory wall problem, computation-in-memory (CIM) has attracted considerable attention as a promising solution because it minimizes data movement between the memory system and the computation units. A CIM architecture performs computations inside the memory array itself, so operands do not have to be transferred to separate processing elements. For most NN-based AI applications, this memory-centric architecture reduces energy consumption by eliminating much of the data-movement energy.
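To make the idea concrete, the following is a minimal, purely illustrative sketch (not any specific chip's design): a CIM array performs a matrix-vector multiply where the weights are stored, so only the input activations and the accumulated outputs cross the memory boundary. The function name `cim_matvec` and the bitline-summation model are assumptions for illustration.

```python
# Illustrative sketch of computation-in-memory (assumed model, not a real design):
# weights stay resident in the memory array; each output column accumulates
# input * stored_weight along its "bitline", modeled here as a plain sum.

def cim_matvec(weights, inputs):
    """Matrix-vector multiply performed 'inside' the array: every multiply
    uses a weight at its storage location, so no per-weight transfer to a
    separate ALU is needed (the source of memory-wall traffic)."""
    rows = len(weights)
    cols = len(weights[0])
    # One accumulation per output column, analogous to summing bitline currents.
    return [sum(weights[r][c] * inputs[r] for r in range(rows))
            for c in range(cols)]

# A 3x2 weight array resident in "memory" and a 3-element activation vector.
W = [[1, 2],
     [0, 1],
     [2, 0]]
x = [1, 2, 3]
print(cim_matvec(W, x))  # -> [7, 4]
```

In a real CIM macro the per-column accumulation happens in the analog or digital domain within the memory array, and only the digitized column results are read out; the sketch models that readout as the returned list.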

Computation-in-Memory Architecture and Memory Devices

SRAM-Based Hybrid Computation-in-Memory Macro

Chip Die Micrograph and Measurement Setup