Overview
Explore the potential and challenges of in-memory computing (IMC) in this 36-minute conference talk from the tinyML Summit 2019. Prof. Naveen Verma discusses the memory wall, amortizing data movement, and IMC as a spatial architecture. He surveys where IMC stands today, covering analog computation, algorithmic co-design, programmability, and efficient application mappings, then charts a path forward with charge-domain analog computing, featuring a 2.4Mb, 64-tile IMC system. The talk also covers programmable IMC, bit-scalable mixed-signal compute, a development board, and design flows, and closes with demonstrations and conclusions on the future of in-memory computing for tiny machine learning applications.
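The core argument behind amortizing data movement can be sketched with a back-of-the-envelope count. This is a hypothetical illustration, not material from the talk: for a matrix-vector multiply, a conventional architecture moves every weight across the memory interface, while an IMC array computes in place so only the input vector enters and one accumulated result per output leaves.

```python
# Hypothetical sketch (not from the talk): counting memory-interface
# transfers for y = W @ x with W of shape (rows, cols).

def conventional_accesses(rows: int, cols: int) -> int:
    """Separated memory and compute: every weight is read out once."""
    return rows * cols  # one transfer per weight

def imc_accesses(rows: int, cols: int) -> int:
    """IMC: weights stay in the array; broadcast x in, read y out."""
    return cols + rows  # input elements in + accumulated outputs out

rows, cols = 512, 512
print(conventional_accesses(rows, cols))  # 262144 weight transfers
print(imc_accesses(rows, cols))           # 1024 interface transfers
```

For a 512x512 layer the interface traffic drops by roughly 256x in this simplified model, which is the amortization the memory-wall discussion motivates; real savings depend on precision, reuse, and array size.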
Syllabus
Intro
The memory wall: separating memory from compute fundamentally raises a communication cost
So, we should amortize data movement
In-memory computing (IMC)
The basic tradeoffs
IMC as a spatial architecture
Where does IMC stand today?
Analog computation
Algorithmic co-design
Programmability
Efficient application mappings
Path forward: charge-domain analog computing
2.4Mb, 64-tile IMC
Programmable IMC
Bit-scalable mixed-signal compute
Development board
Design flow
Demonstrations
Conclusions
Taught by
tinyML