Optimizing vLLM Performance Through Quantization - Model Compression Techniques

Anyscale via YouTube

Overview

Explore model quantization for vLLM deployments in this conference talk from Ray Summit 2024, where Neural Magic's Michael Goin and Robert Shaw demonstrate how to optimize large language model performance. Learn about vLLM's support for multiple quantization methods, including FP8, INT8, and INT4, and understand how these techniques reduce memory consumption while increasing generation speed. Discover how vLLM implements quantization under the hood and gain practical knowledge for quantizing custom models with the llm-compressor framework. Master the balance between performance and resource efficiency in large-scale AI applications through detailed explanations and real-world implementation guidance.
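
The talk pairs llm-compressor (for producing a quantized checkpoint) with vLLM (for serving it). As a rough illustration of that workflow, here is a minimal sketch based on llm-compressor's documented FP8 example; the model ID and output directory are placeholders, not values from the talk, and exact import paths may vary between library versions.

```python
# Minimal sketch: quantize a model to FP8 with llm-compressor, then serve it with vLLM.
# MODEL_ID and SAVE_DIR are placeholders, not values from the talk.
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import SparseAutoModelForCausalLM, oneshot
from transformers import AutoTokenizer

MODEL_ID = "meta-llama/Meta-Llama-3.1-8B-Instruct"
SAVE_DIR = "Meta-Llama-3.1-8B-Instruct-FP8-Dynamic"

model = SparseAutoModelForCausalLM.from_pretrained(
    MODEL_ID, device_map="auto", torch_dtype="auto"
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Quantize every Linear layer to FP8 with dynamic (per-token) activation
# scales, keeping the lm_head in its original precision.
recipe = QuantizationModifier(
    targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"]
)
oneshot(model=model, recipe=recipe)

model.save_pretrained(SAVE_DIR)
tokenizer.save_pretrained(SAVE_DIR)

# vLLM detects the quantization config saved with the checkpoint and loads
# the compressed weights directly.
from vllm import LLM, SamplingParams

llm = LLM(model=SAVE_DIR)
outputs = llm.generate(
    ["What does FP8 quantization change?"],
    SamplingParams(max_tokens=64),
)
print(outputs[0].outputs[0].text)
```

Note that FP8 dynamic quantization needs no calibration data, whereas the INT8 (W8A8) and INT4 schemes covered in the talk typically require passing a small calibration dataset to the same oneshot entry point.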

Syllabus

Optimizing vLLM Performance through Quantization | Ray Summit 2024

Taught by

Anyscale
