Cost-Efficient Large Language Model Serving for Multi-turn Conversations with CachedAttention

USENIX via YouTube

Overview

Explore an approach to optimizing large language model (LLM) serving for multi-turn conversations in this 22-minute conference talk from USENIX ATC '24. Dive into CachedAttention, a new attention mechanism designed to reduce the computational overhead and serving cost of LLM interactions by reusing key-value (KV) caches across the turns of a conversation rather than recomputing them on every turn. Learn how CachedAttention combines a hierarchical caching system with intelligent scheduling, including layer-wise pre-loading, asynchronous saving, and scheduler-aware fetching and eviction schemes for efficient KV cache management. Understand how it keeps saved KV caches valid even when the context window overflows. Finally, examine experimental results showing substantial improvements in time to first token (TTFT), prompt prefilling throughput, and overall inference cost for multi-turn conversations with LLMs.
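
To make the core idea concrete, here is a minimal Python sketch of a two-tier, session-keyed KV cache. It is an illustration, not the authors' implementation: the class name `HierarchicalKVCache`, the `save`/`fetch` methods, the capacity limit, and the LRU policy are all assumptions standing in for CachedAttention's hierarchical caching and scheduler-aware eviction.

```python
from collections import OrderedDict


class HierarchicalKVCache:
    """Toy two-tier KV cache keyed by conversation ID.

    The fast tier (think GPU/host memory) holds a bounded number of
    sessions; overflow is evicted to a slow tier (think disk). All
    names and sizes here are illustrative, not CachedAttention's API.
    """

    def __init__(self, fast_capacity=4):
        self.fast = OrderedDict()   # conversation_id -> KV tensors (LRU order)
        self.slow = {}              # sessions spilled to the slow tier
        self.fast_capacity = fast_capacity

    def save(self, conv_id, kv_cache):
        # The real system saves asynchronously, overlapping the write
        # with decoding; this toy version stores synchronously.
        self.fast[conv_id] = kv_cache
        self.fast.move_to_end(conv_id)
        while len(self.fast) > self.fast_capacity:
            victim, kv = self.fast.popitem(last=False)  # evict LRU session
            self.slow[victim] = kv

    def fetch(self, conv_id):
        # On a new turn, reuse the saved KV cache instead of
        # re-running prefill attention over the whole history.
        if conv_id in self.fast:
            self.fast.move_to_end(conv_id)
            return self.fast[conv_id]
        if conv_id in self.slow:
            kv = self.slow.pop(conv_id)  # promote back to the fast tier
            self.save(conv_id, kv)
            return kv
        return None  # cold start: a full prefill is required
```

A scheduler-aware variant would promote a session's cache before its request is dispatched, pre-loading it layer by layer while earlier layers are already computing, which is the kind of overlap the talk describes.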

Syllabus

USENIX ATC '24 - Cost-Efficient Large Language Model Serving for Multi-turn Conversations with CachedAttention

Taught by

USENIX
