Faster Inference Using Output Predictions with OpenAI and vLLM

Trelis Research via YouTube

Overview

Learn techniques for accelerating language model inference in this 24-minute technical video. Explore three approaches to faster outputs: OpenAI's output predictions, Cursor's fast-apply functionality, and vLLM's speculative decoding. Dive into the mechanics of speculative decoding, see it implemented with vLLM and Llama 8B, and discover practical applications of OpenAI's prediction capabilities. Compare the speed-ups and cost implications of each approach through hands-on code examples, and use the accompanying slides, documentation links, and implementation guides to deepen your understanding of these inference optimization techniques.
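
As a rough illustration of the output predictions idea the video covers: OpenAI's Predicted Outputs feature lets you pass the text you expect the model to produce (for example, a file you are asking it to lightly edit) via the prediction parameter, so matching tokens can be accepted quickly instead of generated one by one. The model name, prompt, and file contents below are illustrative placeholders; this is a minimal sketch using the openai Python client:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Existing file contents. Most of it will be unchanged by the edit,
    # so the same text doubles as the prediction.
    code = """class User:
        first_name: str = ""
        username: str = ""
    """

    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; must be a model that supports predicted outputs
        messages=[
            {"role": "user", "content": "Rename 'username' to 'email'. Return only the full, updated code."},
            {"role": "user", "content": code},
        ],
        # Tokens in the response that match this predicted content are
        # verified rather than generated, which speeds up decoding.
        prediction={"type": "content", "content": code},
    )

    print(completion.choices[0].message.content)

Predictions pay off most when the output largely overlaps the prediction, as in code edits or document rewrites; mismatched predicted tokens are discarded and can add cost rather than save it.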

Syllabus

OpenAI output predictions, Cursor fast-apply, vLLM speculative decoding
Cursor Fast Apply - how it works
Video Overview
How does speculative decoding work?
Using OpenAI Output Predictions
Speculative Decoding with vLLM and Llama 8B
Speed-up and Costs of Output Predictions
Resources
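
For the "Speculative Decoding with vLLM and Llama 8B" topic above: a small draft model proposes several tokens per step, the large target model verifies them in a single forward pass, and rejected tokens are regenerated, so the output distribution is unchanged while latency drops. The sketch below assumes offline inference with vLLM; the draft model pairing is illustrative, and the exact argument names (speculative_model, num_speculative_tokens) vary across vLLM versions:

    from vllm import LLM, SamplingParams

    llm = LLM(
        model="meta-llama/Meta-Llama-3.1-8B-Instruct",          # target model
        speculative_model="meta-llama/Llama-3.2-1B-Instruct",   # smaller draft model (illustrative pairing)
        num_speculative_tokens=5,  # tokens the draft proposes per verification step
    )

    params = SamplingParams(temperature=0.0, max_tokens=128)
    outputs = llm.generate(["Explain speculative decoding in one paragraph."], params)
    print(outputs[0].outputs[0].text)

The speed-up depends on how often the target model accepts the draft's proposals, so a draft model that closely mimics the target gives the best results.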

Taught by

Trelis Research
