Overview
Dive into an in-depth interview with AI acceleration expert Adi Fuchs covering the landscape of modern AI hardware. Gain insights into the success of GPUs, the concept of "dark silicon," and emerging technologies beyond traditional accelerators. Learn about systolic arrays, VLIW, reconfigurable dataflow hardware, near-memory compute, and optical and neuromorphic computing, and their impact on AI development. Understand how hardware acts as both an enabler and a limiter of AI progress, and discover resources for diving deeper into this rapidly evolving field.
Syllabus
- Intro
- What does it mean to make hardware for AI?
- Why were GPUs so successful?
- What is "dark silicon"?
- Beyond GPUs: How can we get even faster AI compute?
- A look at today's accelerator landscape
- Systolic Arrays and VLIW
- Reconfigurable dataflow hardware
- The failure of Wave Computing
- What is near-memory compute?
- Optical and Neuromorphic Computing
- Hardware as enabler and limiter
- Everything old is new again
- Where to go to dive deeper?
Taught by
Yannic Kilcher