

TinyML Talks - SRAM Based In-Memory Computing for Energy-Efficient AI Inference

tinyML via YouTube

Overview

Explore SRAM-based in-memory computing for energy-efficient AI inference in this tinyML talk. Delve into recent silicon demonstrations, innovative memory bitcell circuits, peripheral circuits, and architectures designed to improve upon conventional row-by-row memory operations. Learn about a modeling framework for design parameter optimization and discover how these advancements address limitations in memory access and footprint for low-power AI processors. Gain insights into analog computation inside memory arrays, ADC optimization, programmable IMC accelerators, and noise-aware training and inference techniques. The talk also covers topics such as black-box adversarial input attacks and pruning of crossbar-based IMC hardware, providing a comprehensive overview of cutting-edge developments in energy-efficient AI inference.
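The core idea described above — accumulating the contributions of many SRAM rows on a shared bitline in the analog domain, then digitizing the sum with an ADC, rather than reading the array row by row — can be illustrated with a toy numerical model. This sketch is not from the talk itself; the parameters (`adc_bits`, `noise_std`) and the uniform-ADC model are illustrative assumptions.

```python
import numpy as np

def imc_column_mac(weights, inputs, adc_bits=4, noise_std=0.02, rng=None):
    """Toy model of one analog IMC operation: every row of a column
    contributes to the bitline at once (instead of row-by-row digital
    reads), and an ADC digitizes the resulting analog sum.
    adc_bits and noise_std are hypothetical illustration parameters."""
    rng = rng or np.random.default_rng(0)
    # In-array accumulation: one matrix-vector product per "read"
    analog_sum = weights @ inputs
    # Circuit/device variation modeled as additive Gaussian noise
    analog_sum = analog_sum + rng.normal(0.0, noise_std, analog_sum.shape)
    # ADC: clip to full scale and quantize to 2**adc_bits levels
    full_scale = np.sum(np.abs(weights), axis=1).max() * np.abs(inputs).max()
    step = 2 * full_scale / (2 ** adc_bits - 1)
    quantized = np.round(np.clip(analog_sum, -full_scale, full_scale) / step)
    return quantized * step
```

Lowering `adc_bits` or raising `noise_std` in such a model is one simple way to explore the accuracy/energy trade-offs that ADC optimization and noise-aware training address.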

Syllabus

Intro
ML collaboration with
Success of Deep Learning / AI
AI Algorithm & Edge Hardware
Typical DNN Accelerators
Eyeriss (JSSC 2017)
MCM Accelerator (JSSC 2020)
Bottleneck of All-Digital DNN HW Energy/Power
In-Memory Computing for DNNs
Analog IMC for SRAM Column
Analog SRAM IMC - Resistive
Analog SRAM IMC - Capacitive
ADC Optimization for IMC
Proposed IMC SRAM Macro Prototypes
Going Beyond IMC Macro Design
PIMCA: Programmable IMC Accelerator
IMC Modeling Framework
IMC HW Noise-Aware Training & Inference
Black-box Adversarial Input Attack
Pruning of Crossbar-based IMC Hardware
Acknowledgements
Contact Information

Taught by

tinyML

