Rapid Deployment of AI Solutions at Scale - Power, Cooling, and Networking Integration
Open Compute Project via YouTube
Overview
Watch a 14-minute conference talk exploring Dell Technologies' approach to deploying AI solutions at scale. Learn how dramatic generational improvements in processor technology and rising computational demands necessitate innovative liquid cooling solutions. Discover the AI factory concept that enables rapid deployment of scalable units for demanding training and inferencing environments. Explore how modular deployment methods create comprehensive solutions addressing compute, networking, storage, and cooling requirements. Understand the implementation of turn-key, room-neutral, rack-level solutions that come pre-validated and fully integrated to minimize strain on data center cooling equipment. Follow along as Ihab Tarazi details the architecture of scalable units and examines the critical integration of power, cooling, and networking components in building robust AI infrastructure, with specific focus on the XE9712 system and liquid cooling technologies.
Syllabus
Introduction
Challenges
AI HPC Architecture
XE9712
Liquid Cooling
Scalable Units
Compute Only
Conclusion
Taught by
Open Compute Project