
dLoRA - Dynamically Orchestrating Requests and Adapters for LoRA LLM Serving

USENIX via YouTube

Overview

Explore a cutting-edge conference talk on dLoRA, an innovative inference serving system for LoRA (Low-Rank Adaptation) models in large language model (LLM) serving. Delve into the dynamic orchestration of requests and LoRA adapters, focusing on two key capabilities: dynamically merging and unmerging adapters with the base model, and migrating requests and adapters between worker replicas. Discover the insights behind these capabilities, including the impact of request skewness on adapter merging decisions and the load imbalance caused by varying input and output lengths in autoregressive LLM requests. Learn about the credit-based batching algorithm for merge/unmerge decisions and the request-adapter co-migration algorithm. Examine the performance improvements achieved by dLoRA, with throughput increases of up to 57.9× and 26.0× over vLLM and Hugging Face PEFT, respectively, and up to 1.8× lower average latency than the concurrent work S-LoRA.
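For context, merging and unmerging a LoRA adapter amounts to adding or subtracting its low-rank update from the base weights. The sketch below is a minimal PyTorch illustration of that idea only; the function names, the scaling factor, and the tensor shapes are assumptions for illustration, not dLoRA's actual interfaces.

```python
import torch

def merge_lora(base_w: torch.Tensor, lora_A: torch.Tensor,
               lora_B: torch.Tensor, scaling: float) -> torch.Tensor:
    """Fold a LoRA adapter into the base weight: W' = W + scaling * (B @ A).

    Illustrative shapes: base_w is (out, in), lora_B is (out, r),
    lora_A is (r, in), so B @ A matches base_w.
    """
    return base_w + scaling * (lora_B @ lora_A)

def unmerge_lora(merged_w: torch.Tensor, lora_A: torch.Tensor,
                 lora_B: torch.Tensor, scaling: float) -> torch.Tensor:
    """Subtract the same low-rank delta to recover the original base weight."""
    return merged_w - scaling * (lora_B @ lora_A)

# Tiny round-trip example with made-up dimensions.
out_dim, in_dim, rank = 8, 16, 2
W = torch.randn(out_dim, in_dim)
A = torch.randn(rank, in_dim)
B = torch.randn(out_dim, rank)
W_merged = merge_lora(W, A, B, scaling=1.0)
W_back = unmerge_lora(W_merged, A, B, scaling=1.0)
# Unmerging recovers the base weights up to floating-point error.
assert torch.allclose(W, W_back, atol=1e-5)
```

Roughly, a merged replica serves requests for that one adapter with no extra per-token compute, while an unmerged replica applies the low-rank matrices separately so a single batch can mix requests for many adapters; the talk's credit-based batching algorithm decides when each mode pays off under skewed request loads.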

Syllabus

OSDI '24 - dLoRA: Dynamically Orchestrating Requests and Adapters for LoRA LLM Serving

Taught by

USENIX
