Overview
Explore on-device training and transfer learning in this lecture from MIT's course on TinyML and Efficient Deep Learning Computing. Dive into the memory bottleneck of on-device training and discover efficient algorithms such as TinyTL for on-device transfer learning. Examine system support for efficient on-device training and gain insights into deploying deep learning applications on resource-constrained devices. Learn from instructor Song Han, whose course also covers model compression, pruning, quantization, neural architecture search, and distillation, and acquire hands-on experience deploying neural networks on mobile devices, IoT devices, and microcontrollers through practical examples and open-ended design projects focused on mobile AI applications.
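The memory bottleneck discussed in the lecture comes largely from storing intermediate activations for weight gradients; TinyTL sidesteps this by freezing the pretrained weights and updating only bias terms (plus small added modules). Below is a minimal PyTorch sketch of that bias-only fine-tuning idea, assuming a torchvision MobileNetV2 backbone, a 10-class downstream task, and random data for illustration; it is not the course's reference implementation.

```python
# Sketch of bias-only transfer learning in the spirit of TinyTL:
# freeze all weights of a pretrained backbone, train only biases
# (here including BatchNorm biases) and a new classifier head.
# Bias gradients do not require the layer's input activations,
# which is what cuts training memory on-device.
import torch
import torch.nn as nn
from torchvision import models

# Pretrained backbone for transfer learning (assumed choice).
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)

# Replace the classifier head for the new task (10 classes assumed).
model.classifier[1] = nn.Linear(model.last_channel, 10)

# Freeze weights; keep biases and the new head trainable.
for name, param in model.named_parameters():
    param.requires_grad = name.endswith(".bias") or "classifier" in name

# Optimize only the trainable (bias + head) parameters.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=0.01, momentum=0.9)

# One illustrative training step on dummy data.
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
```

In the full TinyTL method the frozen backbone is also augmented with lightweight "lite residual" modules to recover accuracy; the sketch above shows only the bias-update portion of the idea.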
Syllabus
Lecture 15 - On-Device Training and Transfer Learning (Part I) | MIT 6.S965
Taught by
MIT HAN Lab