Overview
Explore the development and architecture of Microsoft Azure's AI infrastructure in this technical conference talk, which examines how advanced GPU accelerators and networking technologies power sophisticated AI models. Gain insights into the H100 architecture, Interlaken protocols, and deployment strategies, and learn about system-wide optimizations for virtual machine performance. Delve into Azure's global-scale operations, hardware configurations, and AI portfolio through detailed explanations of infrastructure components, firmware implementations, and networking solutions. Understand the challenges and breakthroughs in building enterprise-level AI systems, with a particular focus on GPU architecture, system optimization, and the evolving AI landscape.
Syllabus
Introduction
Agenda
Azure AI Infrastructure
Our Commitments
Global Scale
Optimized Infrastructure
AI Landscape
AI Portfolio
Hardware Architecture
GPU Architecture
Networking
Firmware
System Architecture
Breakthroughs
Challenges
Taught by
Open Compute Project