
Leverage Topology Modeling and Topology-Aware Scheduling to Accelerate LLM Training

CNCF [Cloud Native Computing Foundation] via YouTube

Overview

Explore how to leverage topology modeling and topology-aware scheduling to accelerate Large Language Model (LLM) training in this 45-minute CNCF conference talk by William Wang of Huawei. Delve into the shift from compute bottlenecks to network bottlenecks in the era of LLM training and inference, examining high-throughput, low-latency interconnect technologies such as NVLink and NVSwitch used in hyper-computers. Analyze how inter-node communication and intra-node resource interconnects affect AI workload performance, particularly for large language model training. Learn how to model the topology of underlying resources such as NUMA domains, racks, super pods, and hyper computers. Discover techniques for making schedulers topology-aware so they can optimize resource allocation and performance. Investigate methods to coordinate topology-aware scheduling with Dynamic Resource Allocation (DRA) on nodes, addressing Kubernetes' current limitations in handling topology awareness efficiently for AI workloads.
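
The sketch below is a minimal, self-contained illustration of the topology-fit idea the talk describes, not the speaker's implementation and not a real Kubernetes or Volcano API: it models the interconnect hierarchy (NUMA, node, rack, super pod, hyper computer) as nested domains and picks the smallest domain that can satisfy a job's GPU request, since keeping all workers inside a tighter domain avoids the slower cross-domain links. All type and function names (Domain, smallestFit) and the tier numbering are hypothetical.

```go
// Hypothetical sketch of topology-aware placement: find the smallest
// topology domain (tightest interconnect) that can hold a GPU request.
package main

import "fmt"

// Domain is one level of the topology hierarchy; a smaller Tier means a
// tighter, faster interconnect (e.g. 1 = node, 2 = rack, 3 = super pod).
type Domain struct {
	Name     string
	Tier     int
	FreeGPUs int
	Children []*Domain
}

// smallestFit walks the hierarchy and returns the lowest-tier domain whose
// free GPU count can satisfy the request, i.e. the placement with the
// least cross-domain communication. Returns nil if nothing fits.
func smallestFit(d *Domain, gpus int) *Domain {
	if d.FreeGPUs < gpus {
		return nil
	}
	best := d
	for _, c := range d.Children {
		if fit := smallestFit(c, gpus); fit != nil && fit.Tier < best.Tier {
			best = fit
		}
	}
	return best
}

func main() {
	// Two racks inside one super pod; each rack holds two 8-GPU nodes.
	node := func(name string) *Domain { return &Domain{Name: name, Tier: 1, FreeGPUs: 8} }
	rack := func(name string, nodes ...*Domain) *Domain {
		r := &Domain{Name: name, Tier: 2, Children: nodes}
		for _, n := range nodes {
			r.FreeGPUs += n.FreeGPUs
		}
		return r
	}
	r1 := rack("rack-1", node("node-1"), node("node-2"))
	r2 := rack("rack-2", node("node-3"), node("node-4"))
	pod := &Domain{Name: "super-pod-1", Tier: 3, FreeGPUs: r1.FreeGPUs + r2.FreeGPUs, Children: []*Domain{r1, r2}}

	// An 8-GPU job fits on a single node; a 16-GPU job needs a whole rack.
	fmt.Println(smallestFit(pod, 8).Name)  // node-1
	fmt.Println(smallestFit(pod, 16).Name) // rack-1
}
```

In a real cluster this fit-finding would be only one scoring signal inside a scheduler, combined with gang scheduling so that all workers of a training job land in the chosen domain together rather than being spread across slower links.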

Syllabus

Leverage Topology Modeling and Topology-Aware Scheduling to Accelerate LLM Training - William Wang

Taught by

CNCF [Cloud Native Computing Foundation]
