Efficient Deep Learning Computing: From TinyML to Large Language Models

MIT HAN Lab via YouTube

Overview

Explore Ji Lin's PhD research on efficient deep learning computing in this 56-minute thesis defense from MIT. Lin surveys his work spanning TinyML and large language models, including the MCUNet series for on-device inference and training, AMC, TSM, and quantization techniques such as SmoothQuant and AWQ. Learn how these innovations have been adopted by industry leaders such as NVIDIA, Intel, and Hugging Face, garnering over 8,500 citations and 8,000 GitHub stars, and earning coverage in prominent tech publications. Gain insights into the future of efficient ML computing from an NVIDIA Graduate Fellowship finalist and Qualcomm Innovation Fellowship recipient.

Syllabus

Ji Lin's PhD Defense, "Efficient Deep Learning Computing: From TinyML to Large Language Models," at MIT

Taught by

MIT HAN Lab
