Edge computing devices have recently gained significant popularity due to their low latency and cost-effectiveness. These devices play a crucial role in applications such as real-time monitoring, autonomous systems, and IoT, where processing data locally at the edge reduces the need for constant communication with cloud servers. However, the limited computing power of edge devices constrains their energy efficiency, necessitating more efficient processor designs at the edge. To address this issue, Processing-in-Memory (PIM) architectures have been proposed, which integrate computation logic directly within the memory to reduce data movement between the CPU and memory. Our research interests include the development of various memory circuit designs, such as SRAM, embedded Flash memory, and MRAM, for applications including inference, training, and verification.
Memory Computing Macro ICs
Our research group has focused on low-power computing-in-memory (CIM) designs that integrate both memory and processing within a unified architecture. We have explored a wide range of memory technologies, including SRAM and embedded Flash, to support diverse application requirements. Additionally, we utilize in-memory computing techniques that leverage analog-to-digital converters (ADCs) and time-to-digital converters (TDCs) for efficient data readout and processing. These techniques aim to reduce energy consumption while enabling high-throughput computing at the edge.
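The core idea behind analog CIM readout can be illustrated with a short behavioral model: products are accumulated along a memory column as an analog quantity, and an ADC converts the accumulated sum back to a digital code. The sketch below is purely illustrative; the 4-bit ADC resolution, binary weights, and column length are assumptions for the example, not parameters of our macros.

```python
# Behavioral sketch of one CIM column: multiply-accumulate along a
# bitline, then quantize the analog sum with an idealized ADC.
# All parameters (4-bit ADC, 8-cell column, binary values) are
# illustrative assumptions.

def cim_mac_with_adc(weights, inputs, adc_bits=4):
    """Accumulate input * weight for one column, then quantize the
    analog sum to a digital code with an ideal `adc_bits`-bit ADC."""
    analog_sum = sum(w * x for w, x in zip(weights, inputs))
    levels = 2 ** adc_bits
    full_scale = len(weights)  # largest possible sum for binary w, x
    # Clamp to the ADC input range and map to a discrete output code
    clamped = max(0, min(analog_sum, full_scale))
    return round(clamped / full_scale * (levels - 1))

# Example: 8 binary weights and inputs stored on one column
w = [1, 0, 1, 1, 0, 1, 0, 1]
x = [1, 1, 0, 1, 0, 1, 1, 0]
print(cim_mac_with_adc(w, x))  # analog sum of 3 maps to ADC code 6
```

A TDC-based readout follows the same pattern, except the accumulated quantity is a time delay rather than a voltage or current before conversion to a digital code.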
Domain-Specific Accelerator ICs
Our research group has developed application-specific accelerator ICs tailored for biomedical and AI-driven tasks. These include spike sorting classifiers for neural signal processing, seizure detection and classification systems for healthcare monitoring, and neural network verification engines for evaluating model robustness. By optimizing both algorithm and circuit design, we aim to achieve high accuracy with low power consumption.
Computing Architecture
Implementation of specific circuit techniques can significantly enhance both the energy efficiency and computational performance of edge systems. Beyond low-level hardware optimizations, architectural approaches such as software-hardware co-design play a critical role in overall system improvement. This research explores algorithm-hardware co-design strategies that align the computational model with underlying hardware constraints to improve system efficiency.
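One simple co-design trade-off can be made concrete in a few lines: constraining an algorithm's weights to the precision a hardware macro supports, then measuring the resulting approximation error against the energy benefit of lower precision. The sketch below is a toy example; the candidate bit-widths and weight values are assumptions chosen for illustration.

```python
# Toy illustration of algorithm-hardware co-design: quantize weights
# to the bit-widths a hypothetical low-power macro supports and
# observe the accuracy cost of each precision choice.

def quantize(weights, bits):
    """Uniformly quantize weights in [-1, 1] to a signed `bits`-bit grid."""
    step = 2.0 / (2 ** bits - 1)
    return [round(w / step) * step for w in weights]

def mse(a, b):
    """Mean squared error between two weight lists."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

weights = [0.73, -0.41, 0.05, -0.88]  # example full-precision weights
for bits in (2, 4, 8):  # candidate hardware precisions
    error = mse(weights, quantize(weights, bits))
    print(f"{bits}-bit: MSE = {error:.6f}")
```

In a real co-design loop, this error metric would be replaced by end-to-end task accuracy, and the precision choice would be weighed against the measured energy per operation of the target circuit.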