Overview
Explore a 19-minute conference talk in which NVIDIA Chief Platform Architect Rob Ober and Google Principal Engineer Madhusudan Iyengar examine the escalating demands AI places on data center infrastructure. Learn about the architectural considerations behind advanced AI systems such as NVIDIA's GB200 Grace Blackwell, and discover how future AI generations will require unprecedented power scaling at the rack level, potentially exceeding 500 kilowatts per rack. Gain insight into the power technologies that machine learning hardware consumers need to prepare for as AI computing demands continue to grow exponentially in density and resource requirements.
Syllabus
The Exponential Demands AI Places on the Rack & Datacenter
Taught by
Open Compute Project