
Accelerating Distributed MoE Training and Inference with Lina

USENIX via YouTube

Overview

Explore a conference talk that delves into accelerating distributed Mixture of Experts (MoE) training and inference using Lina. Learn about the challenges of scaling model parameters and how sparsely activated models make it possible to train larger models at lower cost. Discover a systematic analysis of all-to-all communication overhead in distributed MoE and the main causes of bottlenecks in both training and inference. Examine Lina's approach to addressing these bottlenecks through tensor partitioning and dynamic resource scheduling. Gain insights into how Lina reduces training step time and inference latency compared to state-of-the-art systems, as demonstrated through experiments on an A100 GPU testbed.
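To see why all-to-all communication dominates distributed MoE, consider the dispatch-and-combine pattern at the heart of every MoE layer. The following is a minimal single-process sketch of that pattern; all names, shapes, and the toy top-1 gating are illustrative assumptions, not Lina's actual implementation.

```python
import numpy as np

# Toy single-process sketch of MoE token routing (hypothetical shapes;
# in a real deployment experts are sharded across GPUs).
rng = np.random.default_rng(0)
num_tokens, d_model, num_experts = 8, 4, 2

tokens = rng.standard_normal((num_tokens, d_model))
gate_w = rng.standard_normal((d_model, num_experts))

# Top-1 gating: each token is routed to exactly one expert.
expert_ids = np.argmax(tokens @ gate_w, axis=1)

# Dispatch: group tokens by destination expert. With experts on
# different GPUs, this grouping becomes an all-to-all collective --
# the communication bottleneck the talk analyzes.
dispatched = [tokens[expert_ids == e] for e in range(num_experts)]

# Each "expert" here is just a toy linear layer on its token group.
expert_ws = [rng.standard_normal((d_model, d_model)) for _ in range(num_experts)]
outputs = [x @ w for x, w in zip(dispatched, expert_ws)]

# Combine: a second all-to-all returns each result to its owning token.
combined = np.empty_like(tokens)
for e in range(num_experts):
    combined[expert_ids == e] = outputs[e]

print(combined.shape)
```

Because the dispatch and combine steps sit on the critical path of every MoE layer, Lina's tensor-partitioning idea is to split these exchanges into smaller chunks so communication can overlap with expert computation rather than serialize with it.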

Syllabus

USENIX ATC '23 - Accelerating Distributed MoE Training and Inference with Lina

Taught by

USENIX
