
Fairness in Serving Large Language Models

USENIX via YouTube

Overview

Explore a 16-minute conference talk from USENIX's OSDI '24 program that delves into the challenges of ensuring fairness in serving Large Language Models (LLMs). Learn about the Virtual Token Counter (VTC), a novel scheduling algorithm designed for the unique characteristics of LLM inference services. Discover how this approach improves on traditional per-request rate limits by accounting for the input and output tokens actually processed, leading to better resource utilization and a fairer client experience. Examine the proof of a tight upper bound, proportional to 2× the maximum per-request cost, on the service difference between any two backlogged clients, and see how VTC outperforms baseline methods under a variety of workload conditions. Gain insights into why fair scheduling is difficult for LLMs, given their unpredictable request lengths and batching characteristics on parallel accelerators. Access the reproducible code and dive deeper into this cutting-edge research on fairness in LLM serving.
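The overview above captures the core idea of VTC scheduling: track how much token-weighted service each client has received and always admit work from the backlogged client that has received the least so far. The sketch below is a minimal illustration of that idea in Python; the class name, the token weights, and the counter-lifting rule for newly active clients are simplified assumptions for illustration, not the authors' reference implementation.

```python
from collections import defaultdict, deque

# Illustrative weights for input vs. output tokens (assumed values,
# not the paper's exact parameters).
W_INPUT = 1.0
W_OUTPUT = 2.0

class VirtualTokenCounterScheduler:
    """Simplified sketch of VTC-style fair scheduling across clients."""

    def __init__(self):
        self.counters = defaultdict(float)  # per-client virtual token counter
        self.queues = defaultdict(deque)    # per-client pending requests

    def add_request(self, client_id, request):
        # A client becoming active again is lifted to the current minimum
        # counter among backlogged clients, so idle time cannot be banked
        # as an unbounded service advantage later.
        if not self.queues[client_id]:
            backlogged = [c for c, q in self.queues.items() if q]
            if backlogged:
                self.counters[client_id] = max(
                    self.counters[client_id],
                    min(self.counters[c] for c in backlogged),
                )
        self.queues[client_id].append(request)

    def pick_next(self):
        # Serve the backlogged client with the smallest virtual counter.
        backlogged = [c for c, q in self.queues.items() if q]
        if not backlogged:
            return None
        client = min(backlogged, key=lambda c: self.counters[c])
        return client, self.queues[client].popleft()

    def account(self, client_id, input_tokens, output_tokens):
        # Charge the client for the tokens actually processed, weighting
        # input and output tokens differently.
        self.counters[client_id] += W_INPUT * input_tokens + W_OUTPUT * output_tokens
```

In a real serving loop, pick_next would feed requests into a batching inference engine and account would be called as prefill and decode tokens complete, so the counters reflect the work each client has actually consumed rather than just its request rate.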

Syllabus

OSDI '24 - Fairness in Serving Large Language Models

Taught by

USENIX

