TinyML Talks - Demoing the World’s Fastest Inference Engine for Arm Cortex-M


tinyML via YouTube



Classroom Contents


  1. Intro
  2. tinyML Summit 2022: Miniature dreams can come true. March 28-30, 2022, Hyatt Regency San Francisco Airport
  3. You might know us from: Person Presence Detection
  4. Or from: the world's fastest Cortex-M inference engine
  5. How did we get here?
  6. The machine learning flow
  7. The tasks of an inference engine
  8. An inference engine example: TFLM
  9. A closer look at the results
  10. More off-the-shelf models
  11. A closer look at the MLPerf Tiny models
  12. How to beat the competition?
  13. Memory planning: a (rotated) game of Tetris (see the first sketch after this list)
  14. Memory planning for an example model
  15. A much better memory plan
  16. Even better: lower granularity planning
  17. Memory planning at Plumerai: summary
  18. Optimized INT8 code for speed (see the second sketch after this list)
  19. Model-specific code generation
  20. The world's fastest Cortex-M inference engine
  21. What can Plumerai mean for you?
  22. Public benchmarking service: try it yourself
  23. Arm: The Software and Hardware Foundation for tinyML
  24. Edge Impulse: The leading edge ML platform
  25. Enabling the next generation of Sensor and Hearable products to process rich data with energy efficiency
  26. Maxim Integrated: Enabling Edge Intelligence
  27. Broad and Scalable Edge Computing Portfolio
  28. Syntiant
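
Chapters 13-17 cover memory planning: intermediate tensors with non-overlapping lifetimes can share the same bytes of a single arena, which the talk pictures as a (rotated) game of Tetris, with time on one axis and memory offset on the other. As a rough illustration only, and not Plumerai's actual algorithm, here is a minimal greedy offset-assignment planner in C; the names, the example tensors, and the largest-first heuristic are all assumptions of this sketch.

```c
/* Minimal sketch of lifetime-based tensor memory planning (an assumed
 * greedy heuristic, not Plumerai's planner). */
#include <stdio.h>

typedef struct {
    const char *name;
    int first_use, last_use; /* operator indices during which the tensor is live */
    int size;                /* bytes */
    int offset;              /* assigned start offset within the arena */
} Tensor;

static int lifetimes_overlap(const Tensor *a, const Tensor *b) {
    return a->first_use <= b->last_use && b->first_use <= a->last_use;
}

/* Greedy: visit tensors largest-first; slide each one up from offset 0 past
 * every already-placed tensor that is live at the same time AND would share
 * bytes with it (the "rotated Tetris" view). Returns the arena size. */
static int plan(Tensor *t, int n) {
    for (int i = 1; i < n; i++) {            /* sort by size, descending */
        Tensor key = t[i];
        int j = i - 1;
        while (j >= 0 && t[j].size < key.size) { t[j + 1] = t[j]; j--; }
        t[j + 1] = key;
    }
    int arena = 0;
    for (int i = 0; i < n; i++) {
        int offset = 0, moved = 1;
        while (moved) {                      /* re-scan until placement is stable */
            moved = 0;
            for (int j = 0; j < i; j++) {
                if (lifetimes_overlap(&t[i], &t[j]) &&
                    offset < t[j].offset + t[j].size &&
                    t[j].offset < offset + t[i].size) {
                    offset = t[j].offset + t[j].size; /* bump past the conflict */
                    moved = 1;
                }
            }
        }
        t[i].offset = offset;
        if (offset + t[i].size > arena) arena = offset + t[i].size;
    }
    return arena;
}

int main(void) {
    Tensor t[] = {
        {"input",  0, 1, 96, 0}, {"conv1",  1, 2, 64, 0},
        {"conv2",  2, 3, 64, 0}, {"output", 3, 4, 16, 0},
    };
    int n = sizeof t / sizeof t[0];
    printf("arena size: %d bytes\n", plan(t, n)); /* 160, not 96+64+64+16 = 240 */
    for (int i = 0; i < n; i++)
        printf("%-6s offset %3d size %3d live [%d,%d]\n",
               t[i].name, t[i].offset, t[i].size, t[i].first_use, t[i].last_use);
    return 0;
}
```

For these example lifetimes the planner needs a 160-byte arena instead of the 240 bytes that giving every tensor its own buffer would cost. Chapter 16's lower-granularity planning presumably pushes the same idea below whole-tensor boundaries, for instance letting a layer's output partially overlap an input that is consumed as it is produced.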

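Chapter 18 is about optimized INT8 kernels. A common convention on Cortex-M, assumed here rather than taken from the talk, is to accumulate int8 products into 32-bit registers and requantize the result back to int8 with a fixed-point multiplier, a right shift, and saturation; the sketch below shows that arithmetic for a plain dot product.

```c
#include <stdint.h>
#include <stdio.h>

/* Scale a 32-bit accumulator by a Q31 fixed-point multiplier, apply a
 * power-of-two right shift with round-to-nearest, add the output zero
 * point, and saturate to int8. */
static int8_t requantize(int32_t acc, int32_t multiplier, int shift,
                         int32_t out_zero) {
    int64_t prod = (int64_t)acc * multiplier;               /* Q31 multiply */
    int32_t scaled = (int32_t)((prod + (1LL << 30)) >> 31); /* round to nearest */
    if (shift > 0)
        scaled = (scaled + (1 << (shift - 1))) >> shift;    /* rounding shift */
    scaled += out_zero;
    if (scaled > 127) scaled = 127;
    if (scaled < -128) scaled = -128;
    return (int8_t)scaled;
}

/* int8 dot product: subtract the input zero point, accumulate in 32 bits,
 * then requantize to the output's int8 scale. */
static int8_t dot_int8(const int8_t *x, const int8_t *w, int n,
                       int32_t x_zero, int32_t bias,
                       int32_t multiplier, int shift, int32_t out_zero) {
    int32_t acc = bias;
    for (int i = 0; i < n; i++)
        acc += ((int32_t)x[i] - x_zero) * (int32_t)w[i];
    return requantize(acc, multiplier, shift, out_zero);
}

int main(void) {
    const int8_t x[4] = {10, -3, 7, 2};
    const int8_t w[4] = {5, 5, -2, 1};
    /* effective scale 0.5 / 4: Q31 multiplier for 0.5, then >> 2 */
    printf("%d\n", dot_int8(x, w, 4, 0, 0, 1 << 30, 2, 0)); /* prints 3 */
    return 0;
}
```

Model-specific code generation (chapter 19) presumably goes a step further: when the memory plan and the quantization multipliers are fixed at compile time, loops can be specialized per layer instead of dispatched through a generic interpreter.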