Apache TVM: Optimizing ML Models for Edge Deployment - Deep Dive and Demo

Databricks via YouTube

Overview

Explore the world of machine learning optimization in this 49-minute deep-dive video on Apache TVM. Learn how this open-source compiler transforms complex deep learning models into lightweight software for edge devices, significantly improving inference speed and reducing costs across various hardware platforms. Discover the inner workings of Apache TVM, its latest features, and upcoming developments. Follow along with a live demonstration of optimizing a custom machine learning model. Gain insights into AI compilation challenges, TVM internals, fusion techniques, auto-scheduling, and real-world performance results. Compare public and private models, and understand why TVM is becoming an essential tool for ML practitioners aiming to enhance model efficiency and deployment capabilities.
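
The optimization flow described above follows TVM's standard Python API. The sketch below is a minimal, illustrative example rather than material from the video: it compiles an ONNX model with TVM's Relay frontend and runs it on a local CPU. The model file name, input name, and input shape are assumed placeholders.

    # Minimal sketch of compiling and running a model with Apache TVM.
    # Assumptions: tvm and onnx are installed, and "model.onnx" exists with a
    # single input named "input" of shape (1, 3, 224, 224).
    import numpy as np
    import onnx
    import tvm
    from tvm import relay
    from tvm.contrib import graph_executor

    onnx_model = onnx.load("model.onnx")
    shape_dict = {"input": (1, 3, 224, 224)}

    # Import the model into TVM's Relay intermediate representation.
    mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

    # Compile for a target; "llvm" means the local CPU. Edge targets such as
    # ARM CPUs or GPUs are selected by changing this target string.
    target = "llvm"
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target=target, params=params)

    # Run inference with the compiled module.
    dev = tvm.device(target, 0)
    module = graph_executor.GraphModule(lib["default"](dev))
    module.set_input("input", np.random.rand(1, 3, 224, 224).astype("float32"))
    module.run()
    output = module.get_output(0).numpy()

At opt_level=3, TVM applies its graph-level passes, including the operator fusion discussed in the talk, and retargeting the same model to other hardware is a matter of changing the target string.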

Syllabus

Introduction
AI Compilation Wars
Machine Learning Compilers
Who is using TVM
The landscape of deep learning
High-level optimizations
Operators and nodes
TVM internals
How to use TVM
Fusion
Auto Scheduler
Auto Scheduler workflow
Task Scheduler workflow
Real-world results
The best of both worlds
Auto scheduling
Why use TVM
Live Demo
Uploading a new model
Performance results
Cross-product results
Comparing public vs private models
Outro
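
The Auto Scheduler and Task Scheduler chapters above refer to TVM's auto-scheduling workflow: extract tuning tasks from the model, let the task scheduler distribute measurement trials across them, and recompile with the best schedules found. The sketch below illustrates that workflow on a stock ResNet-18 from TVM's testing utilities; the trial count and log file name are arbitrary choices, not settings from the demo.

    # Illustrative auto-scheduling sketch (not the demo's actual configuration).
    import tvm
    from tvm import auto_scheduler, relay
    from tvm.relay import testing

    # A stand-in model from TVM's testing utilities.
    mod, params = testing.resnet.get_workload(num_layers=18, batch_size=1)
    target = tvm.target.Target("llvm")
    log_file = "autoscheduler_log.json"  # arbitrary log path

    # 1. Extract one tuning task per fused subgraph in the model.
    tasks, task_weights = auto_scheduler.extract_tasks(mod["main"], params, target)

    # 2. The task scheduler splits the measurement budget across tasks,
    #    favoring the subgraphs that contribute most to total runtime.
    tuner = auto_scheduler.TaskScheduler(tasks, task_weights)
    tune_option = auto_scheduler.TuningOptions(
        num_measure_trials=200,  # small budget, purely for illustration
        measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
    )
    tuner.tune(tune_option)

    # 3. Recompile the model using the best schedules recorded during tuning.
    with auto_scheduler.ApplyHistoryBest(log_file):
        with tvm.transform.PassContext(
            opt_level=3, config={"relay.backend.use_auto_scheduler": True}
        ):
            lib = relay.build(mod, target=target, params=params)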

Taught by

Databricks
