tinyML Talks: SRAM based In-Memory Computing for Energy-Efficient AI Inference
tinyML Talks recorded May 13, 2021
"SRAM based In-Memory Computing for Energy-Efficient AI Inference"
Jae-sun Seo
Associate Professor
School of ECEE, Arizona State University

Artificial intelligence (AI) and deep learning have been successful across many practical applications, but state-of-the-art algorithms require enormous amounts of computation, memory, and on-/off-chip communication. To bring these expensive algorithms to low-power processors, a number of digital CMOS ASIC solutions have been previously proposed, but limitations remain in memory access and footprint.

To improve upon the conventional row-by-row operation of memories, “in-memory computing” designs have been proposed, which perform analog computation inside memory arrays by asserting multiple or all rows simultaneously. This talk will present recent silicon demonstrations of SRAM-based in-memory computing for AI systems. New memory bitcell circuits, peripheral circuits, architectures, and a modeling framework for design parameter optimization will be discussed.
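
To make the multi-row idea concrete, below is a minimal behavioral sketch (not the speaker's actual design) of one in-memory-computing bitline: binary weights stored in an SRAM column, many wordlines asserted at once so the dot product accumulates as an analog quantity on the bitline, and a low-resolution ADC digitizing the result. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def imc_column_mac(inputs, weights, adc_bits=3):
    """Model one SRAM bitline doing an analog multiply-accumulate.

    inputs  -- binary wordline activations (0/1), one per row
    weights -- binary bitcell contents (0/1), one per row
    """
    # All asserted rows discharge the bitline together, so the
    # analog quantity on the bitline is the dot product.
    analog_sum = np.dot(inputs, weights)
    # A low-resolution ADC quantizes the bitline voltage/charge.
    levels = 2 ** adc_bits - 1
    full_scale = len(inputs)  # worst case: every row conducts
    code = np.round(analog_sum / full_scale * levels)
    return code * full_scale / levels  # dequantized digital output

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=256)
w = rng.integers(0, 2, size=256)
print("exact:", int(np.dot(x, w)), "IMC approx:", imc_column_mac(x, w))
```

A conventional SRAM would need one read cycle per row to compute the same 256-element dot product; asserting all rows in a single access trades accuracy (ADC resolution, analog non-idealities) for energy and latency, which is the central design tension the talk addresses.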
