Deploying LLM Workloads on Kubernetes with WasmEdge and Kuasar

CNCF [Cloud Native Computing Foundation] via YouTube

Overview

Explore the deployment of Large Language Model (LLM) workloads on Kubernetes using WasmEdge and Kuasar in this keynote presentation. Discover how these innovative solutions address challenges in running LLMs, including complex package installations, GPU compatibility issues, scaling limitations, and security vulnerabilities. Learn how WasmEdge enables the development of fast, agile, resource-efficient, and secure LLM applications, while Kuasar facilitates running applications on Kubernetes with faster container startup and reduced management overhead. Witness a demonstration of running Llama3-8B on a Kubernetes cluster using WasmEdge and Kuasar as container runtimes. Gain insights into how Kubernetes enhances efficiency, scalability, and stability in LLM deployment and operations, making this 14-minute presentation essential for those interested in advanced cloud-native AI solutions.
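
To make "using WasmEdge and Kuasar as container runtimes" more concrete, here is a minimal, hypothetical sketch (not taken from the talk). It assumes a cluster where Kuasar's WASM sandboxer is already exposed through a RuntimeClass named "wasmedge", and it uses a placeholder OCI image standing in for a WasmEdge-compatible Llama3-8B application; the pod is created with the official Kubernetes Python client.

```python
# Hypothetical sketch: schedule an LLM workload onto a WASM runtime via RuntimeClass.
# The RuntimeClass name and the container image below are illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig pointing at the target cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="llama3-8b-wasm-demo"),
    spec=client.V1PodSpec(
        # Assumed RuntimeClass wired to Kuasar's WasmEdge-based sandboxer
        runtime_class_name="wasmedge",
        containers=[
            client.V1Container(
                name="llm",
                # Hypothetical image bundling a WasmEdge-compatible Llama3-8B app
                image="example.registry.local/llama3-8b-wasm:latest",
            )
        ],
        restart_policy="Never",
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

The same effect can be achieved with a plain manifest applied via kubectl; the key point is that the workload is routed to the WASM runtime through spec.runtimeClassName rather than the cluster's default container runtime.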

Syllabus

Keynote: Deploying LLM Workloads on Kubernetes by WasmEdge and Kuasar - Tianyang Zhang & Vivian Hu

Taught by

CNCF [Cloud Native Computing Foundation]
