
Context Caching for Faster and Cheaper LLM Inference

Trelis Research via YouTube

Overview

Explore context caching techniques for faster and cheaper inference in large language models (LLMs) in this 35-minute video tutorial. Learn how context caching works, its two main types, and implementation strategies for Claude, Google Gemini, and SGLang. Discover the cost-saving potential of this advanced inference technique and gain practical insights into improving LLM performance. Access comprehensive resources, including code repositories, slides, and timestamps, to enhance your understanding of this cutting-edge topic in AI development.
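To make the idea concrete before watching, here is a minimal sketch of explicit context (prompt) caching with Anthropic's Claude API; the model name, document text, and questions are illustrative placeholders, and the video's linked repository remains the authoritative code.

```python
# Minimal sketch of explicit prompt/context caching with the Anthropic Messages API.
# Assumptions: the `anthropic` Python SDK is installed and ANTHROPIC_API_KEY is set;
# the document and questions below are placeholders.
import anthropic

client = anthropic.Anthropic()

long_document = "<tens of thousands of tokens of reference text>"  # placeholder

def ask(question: str):
    return client.messages.create(
        model="claude-3-5-sonnet-20241022",  # placeholder model name
        max_tokens=512,
        system=[
            {
                "type": "text",
                "text": long_document,
                # Marks this block as cacheable: repeated requests sharing the same
                # prefix can be served from the provider-side cache, cutting latency
                # and the cost of re-processing the cached input tokens.
                # (Older SDK/API versions required a prompt-caching beta header.)
                "cache_control": {"type": "ephemeral"},
            }
        ],
        messages=[{"role": "user", "content": question}],
    )

first = ask("Summarize the document.")     # first call writes the cache
second = ask("List the key assumptions.")  # later calls reuse the cached prefix
```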

Syllabus

Introduction to context caching for LLMs
Video Overview
How does context caching work?
Two types of caching
Context caching with Claude and Google Gemini
Context caching with Claude
Context caching with Gemini Flash or Gemini Pro (a sketch follows this syllabus)
Context caching with SGLang (also works with vLLM)
Cost Comparison
Video Resources
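For the Gemini chapters above, explicit caching goes through the Gemini API's cached-content objects. The sketch below uses the google-generativeai Python SDK; the model version, document text, and TTL are illustrative assumptions and may differ from the approach shown in the video.

```python
# Sketch of explicit context caching with the Gemini API (google-generativeai SDK).
# Assumptions: a valid API key, and a model version that supports caching;
# the document text, model name, and TTL are placeholders. Note that the API
# enforces a minimum number of cached tokens.
import datetime
import os

import google.generativeai as genai
from google.generativeai import caching

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

long_document = "<large, reused context such as a codebase or report>"  # placeholder

# Upload the shared context once and keep it cached server-side for 30 minutes.
cache = caching.CachedContent.create(
    model="models/gemini-1.5-flash-001",  # placeholder; must be a caching-capable version
    system_instruction="Answer questions using only the provided document.",
    contents=[long_document],
    ttl=datetime.timedelta(minutes=30),
)

# Later requests reference the cache instead of resending the full context,
# and the cached tokens are billed at a reduced rate.
model = genai.GenerativeModel.from_cached_content(cached_content=cache)
response = model.generate_content("What are the main findings?")
print(response.text)
```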

Taught by

Trelis Research

