TinyML for All: Full-stack Optimization for Diverse Edge AI Platforms

tinyML via YouTube


Classroom Contents

  1. Intro
  2. TinyML is about Constraints
  3. Everything Together: Real-world AI on Tiny MCUs
  4. Brief History of MCUNets
  5. Opportunity in Fundamental ML Algorithms
  6. New Problem: Imbalanced Memory Distribution of CNNs
  7. Solving the Imbalance with Patch-based Inference
  8. MCUNet-v2 Takeaways
  9. Once-for-All Network
  10. Problem in Training for Tiny Models
  11. NetAug for TinyML
  12. Problem: Training Memory is Much Larger
  13. TinyTL: Up to 6.5x Memory Saving without Accuracy Loss
  14. Differentiable Augmentation
  15. TinyML for LiDAR & Point Cloud
  16. Full-Stack LiDAR & Point Cloud Processing
  17. Takeaways: Coming Back to MCUNets
  18. Fundamental Problems in TinyML
  19. OmniML: "Compress" the Model Before Training
  20. OmniML: Enable TinyML for All Vision Tasks
  21. Founding Team
