Unlocking the Full Potential of GPUs for AI Workloads on Kubernetes

CNCF [Cloud Native Computing Foundation] via YouTube

Overview

Explore Dynamic Resource Allocation (DRA), a new Kubernetes feature for optimizing GPU utilization in AI workloads. Delve into how this approach reshapes resource scheduling by putting allocation decisions in the hands of third-party resource drivers, moving beyond the limitations of the traditional "countable" device-plugin interface. Discover the capabilities this unlocks for GPU management, including controlled GPU sharing within and across pods, support for multiple GPU models per node, specification of arbitrary GPU constraints, and dynamic allocation of Multi-Instance GPU (MIG) devices. Learn about NVIDIA's DRA resource driver for GPUs and its key features. Conclude with practical demonstrations showing how to deploy and use this driver in a Kubernetes environment, enabling more efficient and flexible GPU resource management for AI workloads.
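To illustrate the workflow the talk demonstrates: with DRA, a pod requests a GPU through a ResourceClaim handled by a resource driver, rather than a counted resource limit. The sketch below is a minimal, hedged example assuming the early `resource.k8s.io/v1alpha2` API and the example resource class name (`gpu.nvidia.com`) published with NVIDIA's DRA driver; exact field names vary across Kubernetes releases, so consult the driver's documentation for your cluster version.

```yaml
# Sketch: a ResourceClaimTemplate that asks NVIDIA's DRA driver for a GPU,
# and a pod that consumes a claim generated from it.
apiVersion: resource.k8s.io/v1alpha2
kind: ResourceClaimTemplate
metadata:
  name: gpu-claim-template
spec:
  spec:
    resourceClassName: gpu.nvidia.com   # resource class served by the driver
---
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
  - name: ctr
    image: ubuntu:22.04
    command: ["bash", "-c", "nvidia-smi && sleep 9999"]
    resources:
      claims:
      - name: gpu          # references the pod-level resourceClaim below
  resourceClaims:
  - name: gpu
    source:
      resourceClaimTemplateName: gpu-claim-template
```

Unlike `nvidia.com/gpu: 1` under the device-plugin model, the claim is an API object the driver can interpret, which is what enables sharing, constraints, and dynamic MIG configuration.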

Syllabus

Unlocking the Full Potential of GPUs for AI Workloads on Kubernetes - Kevin Klues, NVIDIA

Taught by

CNCF [Cloud Native Computing Foundation]
