Overview
Explore a tinyML Talks webcast on enabling neural networks for low-power edge devices. Discover Eta Compute's integrated approach to reducing the barriers to designing neural networks for ultra-low-power operation, with a focus on embedded vision applications. Learn about neural network optimization for embedded systems, hardware-software co-optimization for energy efficiency, and automatic inference code generation using a proprietary hardware-aware compiler tool. Gain insights into memory management, compute optimization, and accuracy trade-offs when deploying neural networks in IoT and mobile devices. Understand the challenges and solutions involved in running neural networks on hardware-constrained embedded systems, with practical examples in people counting and AI vision applications.
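The webcast centers on Eta Compute's proprietary hardware-aware compiler, which is not publicly available. As a general illustration of the kind of optimization discussed (shrinking a vision model for a memory- and power-constrained target), the sketch below uses the public TensorFlow Lite converter for full-integer post-training quantization. The model architecture and representative-dataset generator are placeholders, not anything taken from the talk.

```python
# Illustrative sketch only: full-integer (int8) post-training quantization with
# the public TensorFlow Lite converter. This is NOT Eta Compute's proprietary
# hardware-aware compiler; it simply demonstrates the kind of model compression
# the talk discusses for hardware-constrained embedded targets.
import numpy as np
import tensorflow as tf


def build_tiny_vision_model(input_shape=(96, 96, 1), num_classes=2):
    """Placeholder person-detection-style CNN; any small Keras model works here."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(8, 3, strides=2, activation="relu"),
        tf.keras.layers.Conv2D(16, 3, strides=2, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])


def representative_dataset():
    """Calibration samples for quantization; random data stands in for real images."""
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 1).astype(np.float32)]


model = build_tiny_vision_model()

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force integer-only kernels so the model can run on int8-only MCU targets.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("person_detect_int8.tflite", "wb") as f:
    f.write(tflite_model)
print(f"Quantized model size: {len(tflite_model) / 1024:.1f} KiB")
```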
Syllabus
Introduction
Agenda
Challenges
Current status
TensorFlow
Current version
Pipelines
Applications
People Counting
AI Vision
Neural Network
Summary
Questions
TinyML Tech Sponsors
Taught by
tinyML