Overview
Explore the fundamentals of distributed training for neural networks in this lecture from MIT's course on TinyML and Efficient Deep Learning Computing (6.S965). Delve into key concepts such as data parallelism and model parallelism, which are essential for scaling machine learning models across multiple devices, and learn how gradient compression reduces communication overhead when training across many workers. Gain insights from instructor Song Han on efficient machine learning methods that bring powerful deep learning applications to resource-constrained mobile and IoT devices. Access the accompanying slides and course materials to deepen your understanding of distributed training strategies and their practical applications in mobile AI and IoT scenarios.
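As a taste of the data-parallel training the lecture covers, here is a minimal sketch using PyTorch's DistributedDataParallel: each process holds a full copy of the model, trains on its own slice of the data, and gradients are averaged across processes during the backward pass. The toy model, data, and hyperparameters are illustrative only and are not taken from the course.

# A minimal data-parallelism sketch (not the lecture's code).
# Launch with: torchrun --nproc_per_node=2 ddp_sketch.py
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, WORLD_SIZE, MASTER_ADDR, and MASTER_PORT
    # in the environment, so no explicit init_method is needed.
    dist.init_process_group(backend="gloo")  # use "nccl" on GPUs
    rank = dist.get_rank()

    model = nn.Linear(10, 1)   # every replica holds a full model copy
    ddp_model = DDP(model)     # wraps the model for gradient all-reduce
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()

    # Each rank trains on its own shard of the data
    # (random tensors here, standing in for a real sharded dataset).
    for step in range(3):
        inputs = torch.randn(32, 10)
        targets = torch.randn(32, 1)
        optimizer.zero_grad()
        loss = loss_fn(ddp_model(inputs), targets)
        loss.backward()        # DDP averages gradients over all ranks here
        optimizer.step()
        if rank == 0:
            print(f"step {step}: loss {loss.item():.4f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

The gradient all-reduce in the backward pass is exactly the communication step that gradient compression techniques, the subject of this lecture, aim to make cheaper.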
Syllabus
Lecture 13 - Distributed Training and Gradient Compression (Part I) | MIT 6.S965
Taught by
MIT HAN Lab