All About AI Accelerators - GPU, TPU, Dataflow, Near-Memory, Optical, Neuromorphic & More

Yannic Kilcher via YouTube

Overview

Dive into an in-depth interview with AI acceleration expert Adi Fuchs, exploring the landscape of modern AI acceleration technology. Gain insights into why GPUs became so successful, the concept of "dark silicon," and emerging technologies beyond traditional accelerators. Explore systolic arrays, VLIW architectures, reconfigurable dataflow hardware, near-memory computing, and optical and neuromorphic computing, along with their impact on AI development. Understand how hardware acts as both an enabler and a limiter of AI progress, and discover resources for diving deeper into this rapidly evolving field.

Syllabus

- Intro
- What does it mean to make hardware for AI?
- Why were GPUs so successful?
- What is "dark silicon"?
- Beyond GPUs: How can we get even faster AI compute?
- A look at today's accelerator landscape
- Systolic Arrays and VLIW
- Reconfigurable dataflow hardware
- The failure of Wave Computing
- What is near-memory compute?
- Optical and Neuromorphic Computing
- Hardware as enabler and limiter
- Everything old is new again
- Where to go to dive deeper?

Taught by

Yannic Kilcher

