USHER: Holistic Interference Avoidance for Resource Optimized ML Inference

USENIX via YouTube

Overview

Explore a 15-minute conference talk from USENIX OSDI '24 that introduces USHER, a novel system for optimizing machine learning inference serving. Learn how USHER maximizes resource utilization while avoiding inter-model interference on GPUs. Discover the three key components of USHER: a fast GPU kernel-based model resource estimator, an interference-aware scheduler for optimizing batch size and model placement, and an operator graph merger to minimize GPU cache interference. Understand how USHER achieves significantly higher goodput and cost-efficiency compared to existing methods, with the ability to scale to thousands of GPUs. Gain insights into techniques for minimizing monetary costs and maximizing performance in deep learning inference systems.
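To make the scheduler component concrete, below is a minimal, illustrative sketch of an interference-aware greedy placement heuristic. It is not USHER's published algorithm: the model names, demand numbers, and the `place` helper are all hypothetical. The sketch assumes each model comes with estimated compute and cache demands (the kind of figures a resource estimator might report) and assigns it to the GPU whose projected peak utilization stays lowest, so co-located models are less likely to interfere.

```python
# Illustrative sketch only (not USHER's actual scheduler): greedy,
# interference-aware placement of models onto GPUs based on estimated
# compute and cache demands. All names and numbers are hypothetical.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ModelDemand:
    name: str
    compute: float  # estimated fraction of GPU compute (0..1)
    cache: float    # estimated fraction of GPU cache (0..1)


@dataclass
class GPU:
    gpu_id: int
    compute_used: float = 0.0
    cache_used: float = 0.0
    models: List[str] = field(default_factory=list)


def place(models: List[ModelDemand], gpus: List[GPU]) -> None:
    """Greedily assign each model (largest demand first) to the GPU with the
    lowest projected peak utilization across compute and cache, skipping
    GPUs it would overload."""
    for m in sorted(models, key=lambda m: max(m.compute, m.cache), reverse=True):
        feasible = [g for g in gpus
                    if g.compute_used + m.compute <= 1.0
                    and g.cache_used + m.cache <= 1.0]
        if not feasible:
            raise RuntimeError(f"no GPU can host {m.name}")
        best = min(feasible, key=lambda g: max(g.compute_used + m.compute,
                                               g.cache_used + m.cache))
        best.compute_used += m.compute
        best.cache_used += m.cache
        best.models.append(m.name)


if __name__ == "__main__":
    gpus = [GPU(0), GPU(1)]
    models = [ModelDemand("resnet50", 0.4, 0.3),
              ModelDemand("bert-base", 0.5, 0.6),
              ModelDemand("mobilenet", 0.2, 0.1)]
    place(models, gpus)
    for g in gpus:
        print(g.gpu_id, g.models,
              round(g.compute_used, 2), round(g.cache_used, 2))
```

A real system would also fold in batch-size selection and operator-graph merging, which the talk covers; the sketch above only shows the placement intuition.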

Syllabus

OSDI '24 - USHER: Holistic Interference Avoidance for Resource Optimized ML Inference

Taught by

USENIX
