
Distributed Training: Hybrid Parallelism and Gradient Optimization - Lecture 20

MIT HAN Lab via YouTube

Overview

Learn advanced distributed training concepts in machine learning through this recorded MIT lecture, which explores hybrid parallelism, auto-parallelization techniques, and strategies for overcoming bandwidth and latency bottlenecks. Dive into gradient compression methods, including gradient pruning for sparse communication, deep gradient compression, and gradient quantization techniques such as 1-Bit SGD and TernGrad. Learn how delayed gradient updates are implemented and how they mitigate latency in distributed systems. Taught by Professor Song Han, this lecture from MIT's 6.5940 course covers essential techniques for optimizing large-scale machine learning training.
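
As a rough illustration of two of the compression ideas named above, the sketch below is not taken from the lecture; the function names, the top-k ratio, and the error-feedback details are illustrative assumptions. It shows top-k gradient pruning for sparse communication (the core idea behind deep gradient compression) and a sign-based 1-bit quantizer with error feedback in the spirit of 1-Bit SGD.

```python
# Minimal, illustrative sketch only; names and parameters are hypothetical,
# not the lecture's reference implementation.
import torch

def topk_sparsify(grad: torch.Tensor, ratio: float = 0.01):
    """Keep only the largest-magnitude `ratio` fraction of gradient entries,
    so that only (index, value) pairs need to be communicated."""
    flat = grad.flatten()
    k = max(1, int(flat.numel() * ratio))
    _, idx = torch.topk(flat.abs(), k)
    return idx, flat[idx]

def onebit_quantize(grad: torch.Tensor, residual: torch.Tensor):
    """Quantize the gradient to its sign, scaled by the mean magnitude.
    The quantization error is kept locally and added back on the next step
    (error feedback)."""
    corrected = grad + residual
    scale = corrected.abs().mean()
    quantized = torch.sign(corrected) * scale
    new_residual = corrected - quantized  # carry the error forward
    return quantized, new_residual

# Usage on a dummy gradient tensor:
g = torch.randn(1024)
idx, vals = topk_sparsify(g, ratio=0.01)          # ~10 of 1024 entries sent
q, res = onebit_quantize(g, torch.zeros_like(g))  # 1 bit per entry + a scale
```

Carrying the residual forward (error feedback / local error accumulation) is what keeps such aggressive compression from degrading convergence, which is a central theme of the gradient compression techniques discussed in the lecture.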

Syllabus

EfficientML.ai Lecture 20 - Distributed Training Part 2 (Zoom Recording) (MIT 6.5940, Fall 2024)

Taught by

MIT HAN Lab
