Creating End-to-End TinyML Applications for Ethos-U NPU in the Cloud

EDGE AI FOUNDATION via YouTube

Overview

Learn how to develop end-to-end TinyML applications for Arm's Ethos-U NPU in this 17-minute conference talk from tinyML EMEA. Explore the capabilities of the Arm Ethos-U55 and Ethos-U65 microNPUs, which are designed to run tinyML neural networks with high performance per watt. Discover key design considerations and common pitfalls when creating ML models for tinyML applications, and learn to leverage Arm Virtual Hardware for early software development and testing without physical silicon. Master the implementation of ML on Cortex-M systems, including pre-processing requirements, hardware-supported operators, and weight compression through pruning and clustering for memory-bound models. Gain practical insights from example applications and learn how to streamline deployment for faster time-to-market once silicon becomes available.
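To make the deployment workflow described above more concrete, here is a minimal sketch of preparing a model for Ethos-U: full-integer quantization to TensorFlow Lite, followed by compilation with Arm's open-source Vela compiler. The placeholder model, input shape, representative data, and the Vela option shown (--accelerator-config ethos-u55-128) are illustrative assumptions for this sketch, not details taken from the talk; Vela is installable as the ethos-u-vela Python package.

```python
# Minimal sketch: quantize a Keras model to int8 TFLite, then compile it with
# Arm's Vela compiler so hardware-supported operators are mapped to Ethos-U55.
# The model and data below are placeholders standing in for a real tinyML workload.
import subprocess

import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

def rep_data():
    # Representative samples drive int8 calibration; replace with real inputs.
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = rep_data
# Ethos-U executes int8 operators, so force full integer quantization.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())

# Compile for the NPU; operators Vela cannot map stay on the Cortex-M CPU.
subprocess.run(
    ["vela", "model_int8.tflite", "--accelerator-config", "ethos-u55-128"],
    check=True,
)
```

Operators that Vela cannot offload fall back to the Cortex-M CPU via TensorFlow Lite Micro, which is why sticking to hardware-supported operators matters for performance, as the syllabus below highlights.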

Syllabus

Intro
Creating TinyML applications is difficult
Main software stack to run ML on Cortex-M today: Cortex-M is robust and flexible, Ethos-U is a dedicated ML accelerator
Key steps to run an inference on Cortex-M: pre-processing and post-processing are specific to the model
Hardware-supported vs non-supported operators in the NN: example of the benefit of using hardware-supported operators on Ethos-U
Leverage the Weight Compression of the Arm Ethos-U NPU: pruning and clustering improve performance on memory-bound models (see the sketch after this syllabus)
We provide a number of example applications!
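The weight-compression item above refers to pruning and clustering; the sketch below shows weight clustering with the TensorFlow Model Optimization Toolkit (tfmot). The toy model, random training data, and the choice of 16 clusters are illustrative assumptions, not details from the talk.

```python
# Minimal sketch: weight clustering with the TensorFlow Model Optimization Toolkit.
# Clustering limits each layer to a small set of shared weight values, which the
# Ethos-U toolchain can exploit to compress weights for memory-bound models.
import numpy as np
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Placeholder model and data standing in for a real tinyML workload.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
x = np.random.rand(256, 32).astype("float32")
y = np.random.randint(0, 10, size=(256,))
model.fit(x, y, epochs=1, verbose=0)

# Constrain each layer's weights to 16 shared centroid values.
clustered = tfmot.clustering.keras.cluster_weights(
    model,
    number_of_clusters=16,
    cluster_centroids_init=tfmot.clustering.keras.CentroidInitialization.LINEAR,
)
clustered.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
clustered.fit(x, y, epochs=1, verbose=0)  # short fine-tune to recover accuracy

# Remove clustering wrappers before quantization and conversion to TFLite.
final_model = tfmot.clustering.keras.strip_clustering(clustered)
```

After stripping the clustering wrappers, the model would be quantized and compiled as in the earlier sketch; the compressed weights reduce the memory traffic that dominates memory-bound inference.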

Taught by

EDGE AI FOUNDATION
