Class Central Classrooms (beta)
YouTube videos curated by Class Central.
Classroom Contents
TinyML Talks - Demoing the World’s Fastest Inference Engine for Arm Cortex-M
- 1 Intro
- 2 tinyML Summit 2022: Miniature dreams can come true. March 28-30, 2022, Hyatt Regency San Francisco Airport
- 3 You might know us from: Person Presence Detection
- 4 Or from: the world's fastest Cortex-M inference engine
- 5 How did we get here?
- 6 The machine learning flow
- 7 The tasks of an inference engine
- 8 An inference engine example: TFLM
- 9 A closer look at the results
- 10 More off-the-shelf models
- 11 A closer look at the MLPerf Tiny models
- 12 How to beat the competition?
- 13 Memory planning: a (rotated) game of Tetris (see the sketch after this list)
- 14 Memory planning for an example model
- 15 A much better memory plan
- 16 Even better: lower granularity planning
- 17 Memory planning at Plumerai: summary
- 18 Optimized INT8 code for speed
- 19 Model-specific code generation
- 20 The world's fastest Cortex-M inference engine
- 21 What can Plumerai mean for you?
- 22 Public benchmarking service: try it yourself
- 23 Arm: The Software and Hardware Foundation for tinyML
- 24 Edge Impulse: The leading edge ML platform
- 25 Enabling the next generation of Sensor and Hearable products to process rich data with energy efficiency
- 26 Maxim Integrated: Enabling Edge Intelligence
- 27 BROAD AND SCALABLE EDGE COMPUTING PORTFOLIO
- 28 SYNTIANT
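
Chapters 13-16 frame memory planning as a rotated game of Tetris: each intermediate tensor is a rectangle whose width is its lifetime in operator steps and whose height is its size in bytes, and the planner packs these rectangles into one shared arena so the peak height (the arena size) stays small. The sketch below illustrates that general idea with a generic greedy first-fit heuristic; it is not Plumerai's actual algorithm, and all names, sizes, and the heuristic itself are assumptions made for illustration.

```python
# Illustrative sketch only (not Plumerai's algorithm): lifetime-based
# memory planning for intermediate tensors. Each tensor is a rectangle
# with width = lifetime (operator steps) and height = size (bytes);
# a greedy first-fit pass packs them into one shared arena.
from dataclasses import dataclass

@dataclass
class Tensor:
    name: str       # hypothetical tensor name
    size: int       # size in bytes
    first_use: int  # index of the operator that produces it
    last_use: int   # index of the last operator that reads it

def plan(tensors):
    """Place the largest tensors first, each at the lowest offset that
    does not collide with an already-placed tensor whose lifetime overlaps."""
    placed = []  # list of (tensor, offset) pairs
    for t in sorted(tensors, key=lambda x: x.size, reverse=True):
        offset = 0
        # Visit placed tensors in ascending offset order so bumping the
        # candidate offset past each conflict yields a valid first fit.
        for other, o_off in sorted(placed, key=lambda p: p[1]):
            lifetimes_overlap = (t.first_use <= other.last_use
                                 and other.first_use <= t.last_use)
            if lifetimes_overlap and offset + t.size > o_off:
                offset = max(offset, o_off + other.size)
        placed.append((t, offset))
    arena_size = max(off + t.size for t, off in placed)
    return placed, arena_size

# Example with made-up sizes: pool_out reuses conv1_out's space because
# their lifetimes do not overlap, so the arena holds only conv1 + conv2.
layout, arena = plan([
    Tensor("conv1_out", 8192, first_use=0, last_use=1),
    Tensor("conv2_out", 4096, first_use=1, last_use=2),
    Tensor("pool_out",  1024, first_use=2, last_use=3),
])
print(f"arena size: {arena} bytes")
for t, off in layout:
    print(f"{t.name}: offset {off}")
```

This sketch only plans at whole-tensor granularity; chapter 16 ("Even better: lower granularity planning") suggests that planning below the whole-tensor level can reduce the arena further.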