

Saving 95% of Your Edge Power with Sparsity to Enable TinyML

tinyML via YouTube

Overview

Explore techniques for reducing power consumption in edge machine learning applications in this tinyML Talks webcast featuring Jon Tapson from GrAI Matter Labs. Learn about the unique characteristics of edge ML tasks, which center on continuous real-time processing of streaming data. Discover how exploiting multiple types of sparsity can significantly reduce the computation required, lowering both latency and power consumption for tinyML tasks. Gain insights into time, space, connectivity, and activation sparsity in edge processes and their practical impact on computation. Get introduced to the GrAI Core architecture and its event-based paradigm for maximizing sparsity exploitation in edge inference loads. Understand how these techniques can save up to 95% of edge power, enabling more efficient tinyML applications.
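The key mechanism behind these savings is event-based computation: in streaming edge data, most values change little from one frame to the next, so only the changed inputs ("events") need to propagate through the network. As a rough illustration only (a minimal NumPy sketch, not GrAI Matter Labs code; the layer size, change threshold, and ~5% event rate are assumptions for demonstration), the following shows how skipping multiply-accumulates for unchanged inputs in a dense layer yields savings on the order of the 95% figure in the title:

```python
import numpy as np

# Hypothetical sketch of temporal (delta/event-based) sparsity in a
# streaming dense layer: only inputs that changed since the previous
# frame trigger recomputation; the multiply-accumulates for unchanged
# inputs are skipped entirely.

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 256))     # weights: 64 outputs, 256 inputs
prev_x = np.zeros(256)                 # last processed input frame
acc = W @ prev_x                       # cached pre-activations

def process_frame(x, threshold=1e-3):
    """Update outputs using only the inputs that changed (delta events)."""
    global prev_x, acc
    delta = x - prev_x
    events = np.abs(delta) > threshold      # time sparsity: changed inputs only
    # Incremental update: acc += W[:, i] * delta[i] for each event i.
    acc += W[:, events] @ delta[events]
    prev_x = x.copy()
    ops_done = events.sum() * W.shape[0]
    ops_dense = x.size * W.shape[0]
    return np.maximum(acc, 0.0), 1 - ops_done / ops_dense  # ReLU, MAC savings

# Streaming input where only ~5% of values change between frames:
x = rng.standard_normal(256)
out, _ = process_frame(x)                   # first frame: everything is new
x2 = x.copy()
x2[rng.choice(256, size=13, replace=False)] += rng.standard_normal(13)
out2, savings = process_frame(x2)
print(f"MACs skipped this frame: {savings:.0%}")  # ~95% at a 5% event rate
```

The same incremental-update idea extends to convolutional layers, where connectivity sparsity (each input fans out to a limited set of outputs) further bounds how far each event propagates through the network.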

Syllabus

Intro
About Jon Tapson
Edge workloads are different
Edge data is massive
Speech waveforms
What is sparsity?
Deep neural networks
Fanout
Basic CNN
Typical gains
Neural Network Accelerator
How it works
Events
Use cases
Software stack
Runtime support
Sparsity performance
Summary
Questions
Conclusion
Edge Impulse
Sponsor
Next talk
Thanks

Taught by

tinyML

