Optimize LLM Workflows with Smart Infrastructure Enhanced by Volcano
CNCF [Cloud Native Computing Foundation] via YouTube
Overview
Explore strategies for optimizing Large Language Model (LLM) workflows using smart infrastructure enhanced by Volcano in this conference talk. Discover how to manage large-scale LLM training and inference platforms while addressing challenges such as training efficiency, fault tolerance, resource fragmentation, and operational cost. Learn about fault detection, fast job recovery, and self-healing mechanisms that reduce downtime and improve training efficiency, including how to handle long outages when training LLMs on heterogeneous GPUs. Gain insights into intelligent GPU workload scheduling that reduces resource fragmentation and cost, and into topology-aware scheduling on rack and supernode systems that accelerates LLM training. Benefit from the speakers' real-world experience operating thousands of GPUs and running large monthly volumes of LLM training and inference jobs on a cloud-native AI platform.
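As a concrete illustration of the fault-tolerance and self-healing themes above, Volcano's public Job API lets a distributed training job declare restart policies so the scheduler automatically recovers the whole gang when a pod fails or is evicted. The manifest below is a minimal sketch based on Volcano's documented Job spec, not the speakers' actual configuration; the job name, queue, image, and replica counts are placeholders.

```yaml
# Sketch: a Volcano Job that restarts the whole gang when any worker
# pod fails or is evicted, capped at 3 automatic retries.
# Image, queue, and sizes are placeholders, not taken from the talk.
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: llm-train
spec:
  schedulerName: volcano
  queue: default
  minAvailable: 8          # gang scheduling: start all 8 workers or none
  maxRetry: 3              # cap on automatic job restarts
  policies:
    - event: PodFailed     # self-healing: restart the job on pod failure
      action: RestartJob
    - event: PodEvicted
      action: RestartJob
  tasks:
    - name: worker
      replicas: 8
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: trainer
              image: registry.example.com/llm-trainer:latest  # placeholder
              resources:
                limits:
                  nvidia.com/gpu: 8
```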
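For the resource-fragmentation theme, one mechanism Volcano provides is its binpack scheduler plugin, which packs GPU pods onto already-busy nodes instead of spreading them, leaving whole nodes free for large jobs. The snippet below sketches a volcano-scheduler configuration enabling that plugin; the plugin list and weights are illustrative assumptions, not the configuration presented in the talk.

```yaml
# Sketch: volcano-scheduler configuration enabling the binpack plugin
# so GPU pods are packed densely and fragmentation is reduced.
# Weights are illustrative, not from the talk.
actions: "enqueue, allocate, backfill"
tiers:
  - plugins:
      - name: priority
      - name: gang
      - name: conformance
  - plugins:
      - name: drf
      - name: predicates
      - name: proportion
      - name: nodeorder
      - name: binpack
        arguments:
          binpack.weight: 10                   # overall plugin weight
          binpack.resources: nvidia.com/gpu    # extended resource to pack on
          binpack.resources.nvidia.com/gpu: 8  # weight given to GPU packing
```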
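For the topology-aware scheduling theme, recent Volcano releases expose a network-topology constraint on Jobs that keeps a job's pods within a single low-latency tier such as a rack or supernode. The sketch below assumes Volcano's HyperNode-based network topology API (introduced in recent releases, around v1.11) with an already-defined HyperNode hierarchy; field values are illustrative only.

```yaml
# Sketch: constrain a training job to one low-latency topology tier
# (e.g., a single rack/supernode) via network-topology-aware scheduling.
# Assumes a HyperNode hierarchy exists; values are illustrative.
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: llm-train-topo
spec:
  schedulerName: volcano
  minAvailable: 16
  networkTopology:
    mode: hard             # placement must satisfy the constraint
    highestTierAllowed: 1  # keep all pods within one tier-1 hyperNode
  tasks:
    - name: worker
      replicas: 16
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: trainer
              image: registry.example.com/llm-trainer:latest  # placeholder
              resources:
                limits:
                  nvidia.com/gpu: 8
```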
Syllabus
Optimize LLM Workflows with Smart Infrastructure Enhanced by Volcano - Xin Li & Xuzheng Chang
Taught by
CNCF [Cloud Native Computing Foundation]