YouTube

Enabling Neural Network at the Low Power Edge - A Neural Network Compiler

tinyML via YouTube

Overview

Explore a tinyML Talks webcast on enabling neural networks for low-power edge devices. Discover Eta Compute's integrated approach to minimizing barriers in designing neural networks for ultra-low power operation, focusing on embedded vision applications. Learn about neural network optimization for embedded systems, hardware-software co-optimization for energy efficiency, and automatic inference code generation using a proprietary hardware-aware compiler tool. Gain insights into memory management, compute power optimization, and accuracy considerations for deploying neural networks in IoT and mobile devices. Understand the challenges and solutions in implementing neural networks on hardware-constrained embedded systems, with practical examples in people counting and AI vision applications.
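
The compiler presented in the talk is proprietary, but the general workflow it automates can be illustrated with open-source tooling. The sketch below is a minimal example, assuming a hypothetical saved people-counting model and a 96x96 grayscale input shape; it uses TensorFlow Lite post-training full-integer quantization to produce the kind of compact int8 model typically deployed on low-power microcontrollers.

```python
# Minimal sketch (not Eta Compute's proprietary compiler): converting a trained
# model into a fully int8-quantized TensorFlow Lite model for an MCU target.
import numpy as np
import tensorflow as tf

def representative_data():
    # Hypothetical calibration samples matching the model's input shape;
    # in practice these would be real sensor or image frames.
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 1).astype(np.float32)]

# "people_counting_model" is a placeholder SavedModel directory.
converter = tf.lite.TFLiteConverter.from_saved_model("people_counting_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
# Force full-integer quantization so the model runs on int8-only kernels.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("people_counting_int8.tflite", "wb") as f:
    f.write(tflite_model)
```

Full-integer quantization trades a small amount of accuracy for roughly a 4x reduction in model size and much cheaper integer arithmetic, which is the same memory, compute, and accuracy trade-off the talk discusses for hardware-constrained embedded systems.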

Syllabus

Introduction
Agenda
Challenges
Current status
TensorFlow
Current version
Pipelines
Applications
People Counting
AI Vision
Neural Network
Summary
Questions
tinyML Tech Sponsors

Taught by

tinyML
