Orca - A Distributed Serving System for Transformer-Based Generative Models

USENIX via YouTube

Overview

Explore a conference talk on Orca, a distributed serving system designed for Transformer-based generative models. Delve into the challenges of serving large-scale language models like GPT-3 and discover innovative solutions such as iteration-level scheduling and selective batching. Learn how these techniques significantly improve latency and throughput compared to existing systems. Gain insights into the architecture and scheduling mechanisms of Orca, which enable efficient processing of multi-iteration workloads for autoregressive token generation. Understand the importance of system support for serving cutting-edge generative AI models and how Orca addresses the limitations of current inference serving systems.
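The iteration-level scheduling idea described above can be illustrated with a minimal sketch. This is not Orca's implementation; it is a hypothetical Python model in which the scheduler re-forms the batch after every decoding iteration (one token per request), so a finished request frees its slot immediately and a newly arrived request joins without waiting for the current batch to drain. The `decode_step` stand-in and the request dictionaries are assumptions for illustration only.

```python
from collections import deque

EOS = "<eos>"

def decode_step(request):
    # Hypothetical stand-in for one Transformer forward pass that
    # appends a single generated token to the request's output.
    request["generated"].append(request["next_tokens"].pop(0)
                                if request["next_tokens"] else EOS)

def iteration_level_schedule(incoming, max_batch_size=4):
    """Sketch of iteration-level scheduling: batch membership is
    re-decided after every model iteration rather than per request
    batch, so short and long requests can share slots efficiently."""
    waiting = deque(incoming)
    running, finished = [], []
    while waiting or running:
        # Admit newly arrived requests between iterations, up to the limit.
        while waiting and len(running) < max_batch_size:
            running.append(waiting.popleft())
        # One decoding iteration for every active request (in a real
        # system this would be a single batched forward pass).
        for req in running:
            decode_step(req)
        # Retire requests that just emitted their final token.
        still_running = []
        for req in running:
            (finished if req["generated"][-1] == EOS else still_running).append(req)
        running = still_running
    return finished

requests = [
    {"id": i, "next_tokens": ["tok"] * n, "generated": []}
    for i, n in enumerate([2, 5, 1, 3, 4])
]
done = iteration_level_schedule(requests)
```

With request-level scheduling, the fifth request could not start until the entire first batch finished; here it is admitted the moment the shortest request completes.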

Syllabus

Intro
Generative Models
Inference of Generative Language Models
Serving of Generative Language Models
Problem 1: Request-Level Scheduling
Solution 1: Iteration-Level Scheduling
Problem 2: Batching
Solution 2: Selective Batching
Orca System Architecture
Scheduling
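The "Selective Batching" item above can also be sketched in miniature. This is a hypothetical toy, not Orca's code: token-wise operations (such as linear layers) treat every token identically, so tokens from requests of different lengths are flattened into one batch, while attention, which depends on the whole sequence, runs per request. The `linear` and `attention` helpers are illustrative stand-ins.

```python
def linear(token_vec):
    # Hypothetical token-wise op (e.g. an MLP layer): it handles each
    # token independently, so tokens from all requests can be batched
    # together regardless of per-request sequence length.
    return [2 * x for x in token_vec]

def attention(seq):
    # Hypothetical sequence-level op: the result for each position
    # depends on the whole sequence, so under selective batching it
    # runs request by request. Here: every position gets the column sums.
    total = [sum(col) for col in zip(*seq)]
    return [total for _ in seq]

def selective_batch_forward(requests):
    """Sketch of selective batching: flatten variable-length requests
    into one token batch for shape-agnostic ops, then split back and
    handle attention per request."""
    lengths = [len(r) for r in requests]          # split points for later
    flat = [tok for r in requests for tok in r]   # one big token batch
    flat = [linear(tok) for tok in flat]          # batched token-wise op
    # Un-flatten into per-request sequences for the attention op.
    out, i = [], 0
    for n in lengths:
        out.append(attention(flat[i:i + n]))
        i += n
    return out

result = selective_batch_forward([[[1, 1], [2, 2]], [[3, 3]]])
```

The point of the split is that requests with different sequence lengths never need padding for the token-wise ops, which is where most of the compute lives.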

Taught by

USENIX
