Overview
Explore how to scale AI workloads efficiently on Kubernetes by sharing GPU resources across multiple containers in this conference talk. Delve into the challenges of GPU resource management and learn techniques for optimizing GPU utilization. Discover how to set resource limits so that GPU resources are allocated fairly and efficiently among containers. Gain a solid understanding of how Kubernetes and the NVIDIA device plugin together help you maximize GPU investments and achieve faster, more accurate results in AI applications. By the end of the talk, you will have practical insight into overcoming GPU resource bottlenecks and serving AI workloads efficiently in a containerized environment.
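As one concrete illustration of the resource-limit mechanism the talk covers (this sketch is not from the talk itself), a cluster running the NVIDIA device plugin advertises GPUs as the extended resource `nvidia.com/gpu`, which a Pod can then request in its container limits; the pod name and container image below are illustrative:

```yaml
# Minimal sketch: a Pod requesting one GPU through the NVIDIA device plugin.
# Assumes the plugin is installed and advertising nvidia.com/gpu on the node.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-worker        # illustrative name
spec:
  restartPolicy: Never
  containers:
    - name: cuda-job
      image: nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04  # illustrative image
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1  # GPUs are requested via limits; whole units only by default
```

By default each GPU requested this way is dedicated to one container; sharing a physical GPU among containers requires additional mechanisms such as time-slicing or MIG partitioning configured through the device plugin.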
Syllabus
Scaling AI Workloads with Kubernetes: Sharing GPU Resources Across Multiple Containers - Jack Ong
Taught by
Linux Foundation