Overview
Explore the challenges of running multi-node AI workloads in multi-tenant cloud environments, and the solutions to them, in this 39-minute conference talk by Girish Moodalbail and Leonid Grossman of NVIDIA. Learn why AI workloads demand efficient bandwidth utilization, low latency, and minimal jitter to avoid leaving expensive GPUs underutilized, and why network isolation and resource management are essential when an AI cloud must serve many users and concurrent workloads. Examine approaches that achieve tenant isolation through an overlay virtual network topology and fair bandwidth allocation through end-to-end QoS, and see how open source SDN components, including Open vSwitch (OVS), Open Virtual Network (OVN), and the OVN-Kubernetes CNI, can be combined to address these challenges. Gain insight into the performance improvements achievable with OVS hardware offload in scalable multi-node AI workload scenarios.
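To illustrate the kind of configuration the talk discusses, the sketch below provisions a per-tenant OVN logical switch (the overlay network) and attaches a rate-limiting QoS rule using standard ovn-nbctl commands driven from Python. This is a minimal sketch under assumptions, not material from the talk itself: the tenant name, port name, addresses, QoS priority, match expression, and rate are all hypothetical values chosen for the example.

```python
#!/usr/bin/env python3
"""Illustrative sketch: per-tenant isolation via an OVN logical switch plus a
bandwidth-limiting OVN QoS rule. All names and values are hypothetical."""
import subprocess


def nbctl(*args: str) -> None:
    # Run an ovn-nbctl command against the OVN northbound database.
    subprocess.run(["ovn-nbctl", *args], check=True)


def provision_tenant(tenant: str, port: str, mac: str, ip: str,
                     rate_kbps: int) -> None:
    switch = f"ls-{tenant}"                       # one logical switch per tenant
    nbctl("--may-exist", "ls-add", switch)        # tenant's overlay network
    nbctl("--may-exist", "lsp-add", switch, port) # logical port for a workload
    nbctl("lsp-set-addresses", port, f"{mac} {ip}")
    # Rate-limit traffic entering the overlay from this port; priority and
    # match are illustrative, and rate is in kbit/s (here 10 Gbit/s).
    nbctl("qos-add", switch, "from-lport", "100",
          f'inport == "{port}"', f"rate={rate_kbps}")


if __name__ == "__main__":
    provision_tenant("tenant-a", "vm1-port", "00:00:00:00:00:01",
                     "10.0.0.11", 10_000_000)
```

The one-switch-per-tenant layout gives each tenant its own L2 segment carried over the overlay, while the QoS rule caps the bandwidth a single port can consume; in an OVN-Kubernetes deployment, equivalent northbound records are managed by the CNI rather than by hand.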
Syllabus
Scalable Multi-Node AI Workloads in Multi-Tenant AI Clouds U... - Girish Moodalbail & Leonid Grossman
Taught by
Linux Foundation