Unleashing the Power of Dynamic Resource Allocation for Just-in-Time GPU Slicing
CNCF [Cloud Native Computing Foundation] via YouTube
Overview
Explore Dynamic Resource Allocation (DRA) for just-in-time GPU slicing in this 39-minute conference talk from the Cloud Native Computing Foundation (CNCF). Discover how AI/ML practitioners can allocate GPUs and GPU slices on demand in Kubernetes clusters, matching accelerator capacity to actual workload requirements. Learn about the challenges of adopting DRA, including changes to Kubernetes scheduling mechanisms and the introduction of new resource classes and claims. Examine InstaSlice, a solution that enables just-in-time GPU slicing on large production Kubernetes clusters without requiring changes to queued workloads or to the Kubernetes scheduler. Gain insights into optimizing GPU utilization for training, fine-tuning, and serving large language models (LLMs) in cloud-native environments.
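The resource classes and claims mentioned above are Kubernetes API objects. As a rough illustration of the workflow, the sketch below shows a hypothetical ResourceClaim requesting a GPU device and a Pod that consumes it; the resource.k8s.io/v1beta1 API group, the gpu.example.com device class name, and the placeholder image are assumptions for illustration and will vary with the Kubernetes version and the installed DRA driver.

# Hypothetical ResourceClaim asking a DRA driver for one GPU (or GPU slice).
apiVersion: resource.k8s.io/v1beta1     # assumed API version; DRA has moved through alpha/beta groups
kind: ResourceClaim
metadata:
  name: llm-gpu-claim
spec:
  devices:
    requests:
    - name: gpu                         # request name referenced by the pod below
      deviceClassName: gpu.example.com  # assumed device class published by a GPU DRA driver
---
# Pod that consumes the claim; scheduling waits until the claim can be allocated.
apiVersion: v1
kind: Pod
metadata:
  name: llm-inference
spec:
  resourceClaims:
  - name: gpu
    resourceClaimName: llm-gpu-claim
  containers:
  - name: server
    image: registry.example.com/llm-server:latest   # placeholder image
    resources:
      claims:
      - name: gpu                       # bind the container to the claimed device

As described in the talk, the appeal of this model is that the GPU slice backing such a claim can be created just in time, when the workload is actually placed, rather than partitioning GPUs statically in advance.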
Syllabus
Unleashing the Power of DRA (Dynamic Resource Allocation) for Just-in-Time GPU Slicing
Taught by
CNCF [Cloud Native Computing Foundation]