Cloud-Native LLM Deployments Made Easy Using LangChain
CNCF [Cloud Native Computing Foundation] via YouTube
Overview
Explore how to seamlessly deploy large language model (LLM) architectures using LangChain in a cloud-native environment. Learn about the challenges of deploying LLMs with billions of parameters and how LangChain, an open-source framework, simplifies the creation of generative AI interfaces. Discover how to combine LangChain with Kubernetes to manage complex architectures, balance computational requirements, and ensure efficient resource utilization. Follow a step-by-step walkthrough of deploying a containerized, end-to-end LangChain LLM application in a cloud-native setting, showing how trained models can quickly be turned into working applications. Gain insights into streamlining NLP components and leveraging Kubernetes for infrastructure management in this 34-minute conference talk presented by Ezequiel Lanza and Arun Gupta of Intel at a CNCF event.
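The talk's own demo is not reproduced here, but a minimal sketch of the kind of containerized LangChain service it describes could look like the code below. It assumes the FastAPI and langchain-openai packages and an OPENAI_API_KEY supplied to the pod (for example via a Kubernetes Secret); the endpoint path and model name are illustrative, not taken from the talk.

# app.py - minimal sketch of a LangChain service suitable for containerizing
# and deploying on Kubernetes (assumptions noted above).
from fastapi import FastAPI
from pydantic import BaseModel
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

app = FastAPI()

# Compose a prompt template and an LLM into a runnable chain.
prompt = ChatPromptTemplate.from_template("Answer briefly: {question}")
llm = ChatOpenAI(model="gpt-4o-mini")  # model name is an assumption
chain = prompt | llm

class Query(BaseModel):
    question: str

@app.post("/ask")
def ask(query: Query):
    # Invoke the chain; a production deployment would add timeouts,
    # retries, and error handling.
    result = chain.invoke({"question": query.question})
    return {"answer": result.content}

The service would then be packaged with a Dockerfile (for example running uvicorn app:app --host 0.0.0.0 --port 8000) and exposed through a Kubernetes Deployment and Service, with resource requests and limits sized to the model-serving backend.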
Syllabus
Cloud-Native LLM Deployments Made Easy Using LangChain - Ezequiel Lanza & Arun Gupta, Intel
Taught by
CNCF [Cloud Native Computing Foundation]