

Distributed Training and Gradient Compression - Lecture 14

MIT HAN Lab via YouTube

Overview

Explore distributed training challenges and solutions in this lecture from MIT's 6.S965 course. Dive into communication bottlenecks like bandwidth and latency, and learn about gradient compression techniques including gradient pruning and quantization. Discover how delayed gradient averaging can mitigate latency issues in distributed training. Gain insights into efficient machine learning methods for deploying neural networks on resource-constrained devices. Examine topics such as model compression, neural architecture search, and on-device transfer learning. Apply these concepts to optimize deep learning applications for videos, point clouds, and NLP tasks. Access accompanying slides and resources to enhance your understanding of efficient deep learning computing and TinyML.
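
The gradient-pruning idea the overview mentions can be illustrated with a short sketch. The snippet below is a minimal, self-contained approximation of top-k gradient sparsification with local error accumulation, in the spirit of Deep Gradient Compression; the class and function names (`TopKCompressor`, `topk_compress`, `decompress`) are illustrative rather than the lab's actual code, and a real system would also apply momentum correction and exchange the sparse tensors across workers.

```python
import torch


def topk_compress(grad: torch.Tensor, ratio: float = 0.01):
    """Keep only the largest-magnitude `ratio` fraction of gradient entries.

    Returns (values, indices, shape) so the sparse gradient can be sent
    over the network and reconstructed on the receiving side.
    """
    flat = grad.flatten()
    k = max(1, int(flat.numel() * ratio))
    _, idx = torch.topk(flat.abs(), k)
    return flat[idx], idx, grad.shape


def decompress(values, idx, shape):
    """Rebuild a dense gradient from the sparse (values, indices) pair."""
    dense = torch.zeros(shape)
    dense.view(-1)[idx] = values
    return dense


class TopKCompressor:
    """Top-k gradient pruning with local error accumulation: entries that
    are not transmitted stay in a residual and are added back before the
    next selection, so gradient information is delayed, not discarded."""

    def __init__(self, ratio: float = 0.01):
        self.ratio = ratio
        self.residual = {}  # per-parameter accumulator of unsent gradient

    def step(self, name: str, grad: torch.Tensor):
        acc = self.residual.get(name, torch.zeros_like(grad)) + grad
        values, idx, shape = topk_compress(acc, self.ratio)
        acc.view(-1)[idx] = 0.0  # what was sent leaves the residual
        self.residual[name] = acc
        return values, idx, shape


# Example: compress a fake gradient, then rebuild it for aggregation.
compressor = TopKCompressor(ratio=0.05)
g = torch.randn(256, 128)
values, idx, shape = compressor.step("layer1.weight", g)
g_sparse = decompress(values, idx, shape)
```

Sending only the top few percent of gradient entries (plus their indices) is what shrinks the transfer size; the residual accumulator is what keeps accuracy from degrading despite the aggressive pruning.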

Syllabus

Intro
Problems of Distributed Training
Reduce Transfer Data Size: Recall the Workflow of Parameter-Server-Based Distributed Training
Limitations of Sparse Communication
Optimizers with Momentum
Deep Gradient Compression
Comparison of Gradient Pruning Methods
Latency Bottleneck
High Network Latency Slows Federated Learning
Conventional Algorithms Suffer from High Latency: Vanilla Distributed Synchronous SGD
Delayed Gradient Averaging
DGA Accuracy Evaluation
Real-world Benchmark
Summary of Today's Lecture
References
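
The delayed gradient averaging idea covered in the syllabus can also be sketched in a few lines. This is a single-worker toy simulation of the correction rule only, not the lecture's implementation: in the real algorithm the averaged gradient comes from an all-reduce across workers that completes D iterations later, while each worker keeps stepping with its local gradients in the meantime.

```python
from collections import deque

import torch


def dga_step(param, lr, local_grad, pending, delayed_avg_grad=None):
    """One Delayed Gradient Averaging update for a single parameter tensor.

    The worker applies its own gradient immediately instead of blocking on
    communication.  When the globally averaged gradient from D steps ago
    finally arrives (`delayed_avg_grad`), the step taken back then is
    corrected in place by adding lr * (old_local_grad - old_average_grad).
    """
    # 1. Proceed with the local gradient; no waiting on the network.
    param -= lr * local_grad
    pending.append(local_grad.clone())

    # 2. When the stale average shows up, retroactively swap it in.
    if delayed_avg_grad is not None:
        old_local = pending.popleft()
        param += lr * (old_local - delayed_avg_grad)


# Toy usage: pretend the averaged gradient arrives with a delay of D = 2 steps.
w = torch.zeros(4)
queue = deque()
for t in range(5):
    g_local = torch.randn(4)
    g_avg = torch.randn(4) if t >= 2 else None  # stand-in for an all-reduce result
    dga_step(w, lr=0.1, local_grad=g_local, pending=queue, delayed_avg_grad=g_avg)
```

Because workers never stall waiting for the average, high network latency adds a fixed staleness of D steps rather than idle time, which is the latency tolerance the lecture evaluates.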

Taught by

MIT HAN Lab

