Building a Multi-Cluster Privately Hosted LLM Serving Platform on Kubernetes
CNCF [Cloud Native Computing Foundation] via YouTube

Overview
Explore the challenges and solutions of building a cloud-agnostic, privately hosted Large Language Model (LLM) serving platform on Kubernetes in this 26-minute conference talk. Discover how Predibase tackled the complexities of hosting LLMs, including large model sizes and GPU resource requirements. Learn about the architecture of their control plane and data plane, secured with an Istio service mesh, and their use of KEDA for event-driven autoscaling to support serverless inference of open-source models. Gain practical insights into deploying LLMs and applying these tools and techniques to your own organization's hosting needs.
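The event-driven autoscaling approach described above can be sketched with a KEDA ScaledObject. This is an illustrative assumption of what such a configuration might look like, not Predibase's actual setup; all resource names, the namespace, and the Prometheus metric are hypothetical.

```yaml
# Hypothetical sketch: scale an LLM model server from zero based on a
# pending-request metric, enabling serverless-style inference.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: llm-serving-scaler
  namespace: inference            # hypothetical namespace
spec:
  scaleTargetRef:
    name: llm-deployment          # hypothetical Deployment running the model server
  minReplicaCount: 0              # scale to zero when idle
  maxReplicaCount: 4              # bounded by available GPU capacity
  cooldownPeriod: 300             # wait 5 minutes before scaling back to zero
  triggers:
    - type: prometheus            # scale on a request-queue metric
      metadata:
        serverAddress: http://prometheus.monitoring:9090
        query: sum(pending_inference_requests{app="llm-deployment"})
        threshold: "1"            # one pending request activates a replica
```

Scale-to-zero with a queue- or metric-driven trigger is what distinguishes KEDA here from the standard Horizontal Pod Autoscaler, which cannot scale a workload below one replica.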
Syllabus
Building a Multi-Cluster Privately Hosted LLM Serving Platform on Kubernetes - Julian Bright & Noah Yoshida
Taught by
CNCF [Cloud Native Computing Foundation]