
Distributed Training: Hybrid Parallelism and Gradient Optimization - Lecture 20

MIT HAN Lab via YouTube

Overview

Learn advanced distributed training concepts in this MIT lecture covering hybrid parallelism, auto-parallelization techniques, and strategies for overcoming bandwidth and latency bottlenecks in machine learning systems. Explore gradient compression methods including gradient pruning with sparse communication and deep gradient compression, as well as gradient quantization approaches like 1-Bit SGD and TernGrad. Understand how delayed gradient updates can address latency challenges in distributed training environments. Delivered by Professor Song Han as part of the MIT 6.5940 course, this 59-minute lecture provides essential knowledge for implementing efficient distributed machine learning systems.
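To give a flavor of the material, the core idea behind the gradient pruning / deep gradient compression techniques mentioned above is to transmit only the largest-magnitude gradient entries each step and accumulate the rest locally until they grow large enough to send. The sketch below is illustrative only and not code from the lecture; the function name `topk_sparsify` and the `k_ratio` parameter are assumptions.

```python
import numpy as np

def topk_sparsify(grad, residual, k_ratio=0.01):
    """Top-k gradient sparsification with local accumulation
    (in the spirit of deep gradient compression; illustrative only)."""
    # Add back gradients withheld in earlier steps before selecting.
    acc = grad + residual
    k = max(1, int(k_ratio * acc.size))
    # Indices of the k largest-magnitude entries; only these are communicated.
    idx = np.argpartition(np.abs(acc), -k)[-k:]
    values = acc[idx]
    # Everything not sent stays in the residual for the next iteration.
    new_residual = acc.copy()
    new_residual[idx] = 0.0
    return idx, values, new_residual

# Usage: each worker sends only (idx, values) -- here ~1% of the entries.
grad = np.random.randn(1000)
residual = np.zeros_like(grad)
idx, values, residual = topk_sparsify(grad, residual, k_ratio=0.01)
print(f"sent {len(idx)} of {grad.size} gradient entries")
```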

Syllabus

EfficientML.ai Lecture 20 - Distributed Training Part 2 (MIT 6.5940, Fall 2024)

Taught by

MIT HAN Lab

