Creating End-to-End TinyML Applications for Ethos-U NPU in the Cloud

EDGE AI FOUNDATION via YouTube


Classroom Contents


  1. Intro
  2. Creating TinyML applications is difficult
  3. Main software stack to run ML on Cortex-M today: Cortex-M is robust and flexible, Ethos-U is a dedicated ML accelerator
  4. Key steps to run an inference on Cortex-M: pre-processing and post-processing are specific to a model
  5. Hardware-supported vs. non-supported operators in the NN: example of the benefit of using hardware-supported operators on Ethos-U
  6. Leverage the weight compression of the Arm Ethos-U NPU: pruning & clustering improve performance on memory-bound models
  7. We provide a number of example applications!
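To make chapter 6 concrete: weight clustering replaces each weight with the nearest of k shared centroid values, so a model stores one small index per weight plus a tiny codebook instead of a full float per weight. The following is a minimal illustrative sketch in plain Python, not the Arm/TensorFlow clustering toolchain; the weight and centroid values are made up for demonstration.

```python
# Illustrative sketch of weight clustering (assumed example values,
# not the Ethos-U toolchain): each weight is mapped to the index of
# its nearest shared centroid, shrinking memory-bound model storage.

def cluster_weights(weights, centroids):
    """Return, for each weight, the index of its nearest centroid."""
    return [min(range(len(centroids)), key=lambda i: abs(w - centroids[i]))
            for w in weights]

weights = [0.11, 0.09, -0.52, 0.48, -0.49, 0.10, 0.51, -0.50]
centroids = [-0.5, 0.1, 0.5]        # k = 3 shared values (the codebook)

indices = cluster_weights(weights, centroids)
print(indices)                      # [1, 1, 0, 2, 0, 1, 2, 0]

# Storage comparison: float32 per weight vs. 2-bit index per weight
# plus the float32 codebook.
dense_bits = len(weights) * 32
clustered_bits = len(weights) * 2 + len(centroids) * 32
print(dense_bits, clustered_bits)   # 256 vs 112 bits
```

With only three distinct weight values, the clustered representation here needs 112 bits instead of 256; on real networks the same idea lets compressed weights stream from flash faster, which is why the talk says it helps memory-bound models.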
