
AI Model Efficiency Toolkit (AIMET) - Lecture 25

MIT HAN Lab via YouTube

Overview

Explore the AI Model Efficiency Toolkit (AIMET) in this guest lecture from Qualcomm AI Research. Dive into efficient machine learning techniques for deploying neural networks on resource-constrained devices. Learn about model compression, pruning, quantization, neural architecture search, and distillation. Discover efficient training methods such as gradient compression and on-device transfer learning. Examine application-specific model optimization for video, point clouds, and NLP. Gain insights into efficient quantum machine learning. Understand why AI model efficiency matters for mobile and IoT devices. Explore community research, core technologies, and industry hardware. Get an overview of the Qualcomm AI Engine and Snapdragon platform. Watch demos on video understanding and super resolution. This lecture is part of MIT 6.S965, TinyML and Efficient Deep Learning Computing, taught by Song Han.
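A recurring theme of the lecture is post-training quantization for efficient inference. As a conceptual aid only (this is not the AIMET API), the sketch below shows uniform affine quantize-dequantize of a weight tensor in PyTorch, the basic simulation step that quantization toolkits like AIMET build on; the function name, bit-width, and tensor shapes are illustrative assumptions.

```python
import torch

def quantize_dequantize(x: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    # Map float values onto a uniform integer grid and back (simulated quantization).
    qmin, qmax = 0, 2 ** num_bits - 1
    x_min, x_max = x.min(), x.max()
    scale = torch.clamp((x_max - x_min) / (qmax - qmin), min=1e-8)
    zero_point = torch.round(-x_min / scale)
    q = torch.clamp(torch.round(x / scale) + zero_point, qmin, qmax)  # quantize to the INT grid
    return (q - zero_point) * scale                                   # dequantize back to float

w = torch.randn(64, 64)          # stand-in for a layer's weight tensor
w_q = quantize_dequantize(w, 8)  # 8-bit quantize-dequantize
print(f"max abs quantization error: {(w - w_q).abs().max():.4f}")
```

Techniques covered in the lecture, such as AdaRound, refine the rounding step in this kind of simulation to reduce the resulting accuracy loss.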

Syllabus

Introduction
Welcome
Why AI Model Efficiency
Community Research
Core Technologies
Area of Interest
AdaRound
Autopart
Training
Results
Use Cases
Industry Hardware
GitHub
Training Pipeline
Whitepaper
Qualcomm AI Engine
Qualcomm Snapdragon
Demos
Video Understanding Demo
Full System View
Super Resolution

Taught by

MIT HAN Lab

