
On-Device Speech Models Optimization and Deployment for Mobile Hardware

tinyML via YouTube

Overview

Explore on-device speech model optimization and deployment in this tinyML Summit 2022 presentation. Dive into the challenges of real-time execution on mobile hardware, focusing on latency and memory footprint constraints. Learn about streaming-aware model design using functional and subclass TensorFlow APIs, and discover various quantization techniques including post-training quantization and quantization-aware training. Compare the pros and cons of different approaches and understand selection criteria based on specific ML problems. Examine benchmarks of popular speech processing model topologies, including residual convolutional and transformer neural networks, as demonstrated on mobile devices. Gain insights into local self-attention, multi-head self-attention, and real-world model implementations to enhance your understanding of efficient on-device speech processing.
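The overview mentions two quantization paths, post-training quantization and quantization-aware training with fake-quant ops. The sketch below shows how each is typically wired up with the TensorFlow Lite converter and the tensorflow_model_optimization toolkit; the toy keyword-spotting model, its 49x40 log-mel input shape, the 12-class output, and the random calibration data are illustrative assumptions, not details taken from the presentation.

```python
# A minimal sketch of both quantization paths, assuming a toy Keras
# keyword-spotting-style model (architecture, input shape and class count
# are illustrative, not taken from the talk).
import tensorflow as tf
import tensorflow_model_optimization as tfmot

def build_model():
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(49, 40, 1)),          # log-mel frames x bins
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(12, activation="softmax"),   # keyword classes
    ])

# Path 1: post-training quantization of an already-trained float model.
converter = tf.lite.TFLiteConverter.from_keras_model(build_model())
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_dataset():
    # Calibration inputs; random tensors stand in for real audio features.
    for _ in range(100):
        yield [tf.random.normal((1, 49, 40, 1))]

converter.representative_dataset = representative_dataset
ptq_tflite_model = converter.convert()

# Path 2: quantization-aware training -- wrap the model with fake-quant ops,
# fine-tune, then convert; this usually recovers accuracy that plain
# post-training quantization loses on small speech models.
qat_model = tfmot.quantization.keras.quantize_model(build_model())
qat_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# qat_model.fit(train_features, train_labels, epochs=...)  # fine-tune here
qat_converter = tf.lite.TFLiteConverter.from_keras_model(qat_model)
qat_converter.optimizations = [tf.lite.Optimize.DEFAULT]
qat_tflite_model = qat_converter.convert()
```

The resulting .tflite models would then be profiled on-device for latency and memory footprint, which is the trade-off the presentation's benchmarks compare across model topologies.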

Syllabus

Introduction
Agenda
Hardware detector
Streaming
Subclass API
Edge cases
Quantization
Post-training Quantization
Fake Quantization
Native Quantization
Observations
Local Self-attention
Multi-Head Self-attention
Real Model
Models

Taught by

tinyML
