
Effortless Scalability - Orchestrating Large Language Model Inference with Kubernetes

CNCF [Cloud Native Computing Foundation] via YouTube

Overview

Explore how to deploy and orchestrate large open-source inference models on Kubernetes in this 23-minute conference talk by Joinal Ahmed of Navatech AI. Discover how to automate the deployment of heavyweight models such as Falcon and Llama 2 using Kubernetes Custom Resource Definitions (CRDs), with large model files managed through container images. Learn how an HTTP server streamlines inference calls, how manual tuning of deployment parameters is eliminated, and how GPU nodes are auto-provisioned based on each model's requirements. Gain insight into how users can deploy containerized models by providing pod templates in the workspace custom resource's inference field, dynamically creating deployment workloads that utilize all available GPU nodes.
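The workflow described above can be sketched as a custom resource manifest. The API group, kind, and field names below are illustrative assumptions based on the talk's description, not the exact CRD schema presented:

```yaml
# Hypothetical Workspace custom resource (illustrative field names only).
# It declares GPU node requirements and an inference pod template; a
# controller watching this CRD would auto-provision matching GPU nodes
# and create the deployment workload across them.
apiVersion: example.ai/v1alpha1
kind: Workspace
metadata:
  name: llama-2-7b-workspace
resource:
  instanceType: Standard_NC12s_v3     # GPU node SKU to auto-provision
  count: 2                            # number of GPU nodes
inference:
  preset:
    name: llama-2-7b                  # model packaged as a container image
  template:                           # pod template for the inference workload
    spec:
      containers:
        - name: inference
          image: registry.example.com/llama-2-7b:latest
          ports:
            - containerPort: 80       # HTTP server receiving inference calls
          resources:
            limits:
              nvidia.com/gpu: "1"     # one GPU per replica
```

Applying such a manifest would let the operator handle node provisioning and deployment creation, so users only describe the model and its resource needs.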

Syllabus

Effortless Scalability: Orchestrating Large Language Model Inference with Kubernetes - Joinal Ahmed

Taught by

CNCF [Cloud Native Computing Foundation]

