Overview
Explore strategies for efficiently deploying and benchmarking Large Language Models (LLMs) in Kubernetes environments in this 30-minute conference talk from DevConf.US 2024. Learn how to measure key runtime performance metrics for LLMs and optimize their performance in production settings. Discover techniques for deploying LLMs using the KServe stack, conducting load testing across various configurations, and capturing essential performance indicators such as tokens per second, time per output token, and time to first token. Gain insights into monitoring resource consumption metrics like GPU utilization and memory usage to identify performance bottlenecks. Understand how combining runtime performance metrics with LLM evaluation metrics can significantly enhance the optimization of LLM services in production environments.
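As a rough illustration of how metrics like time to first token and tokens per second can be captured against an LLM served on Kubernetes, the sketch below times a single streaming request. The endpoint URL, payload shape, and the assumption that each streamed chunk corresponds to one generated token are illustrative only and not drawn from the talk itself.

```python
import time
import requests

# Hypothetical streaming completions endpoint exposed by a KServe-served
# LLM runtime; the URL and request body are assumptions for illustration.
ENDPOINT = "http://llm.example.svc.cluster.local/v1/completions"


def benchmark_request(prompt: str, max_tokens: int = 128) -> dict:
    """Issue one streaming request and derive basic runtime metrics."""
    payload = {"prompt": prompt, "max_tokens": max_tokens, "stream": True}
    start = time.perf_counter()
    first_token_time = None
    num_tokens = 0

    with requests.post(ENDPOINT, json=payload, stream=True, timeout=300) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            if not line:
                continue
            if first_token_time is None:
                # Time to first token: delay until the first streamed chunk arrives.
                first_token_time = time.perf_counter()
            # Assumes one streamed chunk per generated token (illustrative).
            num_tokens += 1

    total = time.perf_counter() - start
    ttft = (first_token_time - start) if first_token_time else float("nan")
    return {
        "time_to_first_token_s": ttft,
        "tokens_per_second": num_tokens / total,
        "time_per_output_token_s": (total - ttft) / max(num_tokens - 1, 1),
    }


if __name__ == "__main__":
    print(benchmark_request("Explain Kubernetes in one sentence."))
```

In practice, a load-testing harness would run many such requests concurrently across different configurations (model size, GPU type, batch settings) and correlate the results with GPU utilization and memory metrics to locate bottlenecks.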
Syllabus
Efficiently Deploying and Benchmarking LLMs in Kubernetes - DevConf.US 2024
Taught by
DevConf