Overview
Syllabus
Intro
Creating TinyML applications is difficult
Main software stack to run ML on Cortex-M today: Cortex-M is robust and flexible, Ethos-U is a dedicated ML accelerator
Key steps to run an inference on Cortex-M: pre-processing and post-processing are specific to each model (see the sketch after this list)
Hardware-supported vs. non-supported operators in the NN: an example of the benefit of using hardware-supported operators on Ethos-U
Leverage the weight compression of the Arm Ethos-U NPU: pruning and clustering improve performance on memory-bound models
We provide a number of example applications!
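To illustrate the inference steps listed above, here is a minimal sketch of a TensorFlow Lite Micro inference on Cortex-M, with supported operators offloaded to Ethos-U via the ethos-u custom op. It assumes a quantized int8 model array (model_data), a placeholder arena size, and an example operator list; exact API details vary between TFLM versions, and AddEthosU() is only available when the Ethos-U kernel and driver are part of the build.

```cpp
#include <cstdint>
#include <cstring>

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Placeholder: flatbuffer produced by the converter and compiled by Vela for Ethos-U.
extern const unsigned char model_data[];

// Placeholder arena size; tune it to the model's actual tensor memory needs.
constexpr int kArenaSize = 100 * 1024;
alignas(16) static uint8_t tensor_arena[kArenaSize];

int RunInference(const int8_t* input, size_t input_len,
                 int8_t* output, size_t output_len) {
  const tflite::Model* model = tflite::GetModel(model_data);

  // Register only the operators the model needs; hardware-supported ops
  // are fused into the ethos-u custom op by Vela and run on the NPU.
  static tflite::MicroMutableOpResolver<2> resolver;
  resolver.AddEthosU();   // NPU-offloaded portion of the graph
  resolver.AddSoftmax();  // example of an op left on the CPU

  static tflite::MicroInterpreter interpreter(model, resolver,
                                              tensor_arena, kArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) return -1;

  // Pre-processing: the caller prepares model-specific, already-quantized
  // input data, which is copied into the input tensor here.
  TfLiteTensor* in = interpreter.input(0);
  std::memcpy(in->data.int8, input, input_len);

  // Run the inference (CPU ops on Cortex-M, offloaded ops on Ethos-U).
  if (interpreter.Invoke() != kTfLiteOk) return -1;

  // Post-processing: read back the raw output for model-specific decoding.
  TfLiteTensor* out = interpreter.output(0);
  std::memcpy(output, out->data.int8, output_len);
  return 0;
}
```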
Taught by
EDGE AI FOUNDATION