
Structured Quantization for Neural Network Language Model Compression

tinyML via YouTube

Overview

Explore a 32-minute conference talk from tinyML Asia 2020 on structured quantization techniques for neural network language model compression. Delve into the challenge of large memory consumption in resource-constrained scenarios and discover how advanced structured quantization methods can achieve high compression ratios of 70 to 100 times without compromising performance. Learn about various compression approaches, including pruning, fixed-point quantization, product quantization, and binarization. Examine the impact on speech recognition performance and compare results with full-precision models. Gain insight into how these techniques apply to word embeddings and neural network architectures in natural language processing and speech recognition.
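Product quantization is the core structured quantization idea behind compression ratios this large. As a rough illustration only (not the speaker's implementation), the Python sketch below product-quantizes a word embedding matrix with NumPy: each row is split into sub-vectors, each sub-vector is replaced by the index of its nearest centroid in a small per-slice codebook learned by k-means, and the matrix is reconstructed approximately from codes plus codebooks. The function names, sub-vector count, codebook size, and the toy k-means loop are illustrative assumptions, not details from the talk.

```python
import numpy as np

def product_quantize(embeddings, num_subvectors=4, num_centroids=64, iters=10, seed=0):
    """Illustrative product quantization of an embedding matrix.

    Each row is split into `num_subvectors` slices; each slice is replaced by
    the index of its nearest centroid in a small codebook learned by k-means.
    Storage drops from float32 values to one small integer code per slice
    plus the shared codebooks.
    """
    rng = np.random.default_rng(seed)
    vocab, dim = embeddings.shape
    assert dim % num_subvectors == 0, "dim must divide evenly into sub-vectors"
    sub_dim = dim // num_subvectors

    codebooks = np.empty((num_subvectors, num_centroids, sub_dim), dtype=np.float32)
    codes = np.empty((vocab, num_subvectors), dtype=np.uint8)

    for s in range(num_subvectors):
        sub = embeddings[:, s * sub_dim:(s + 1) * sub_dim]
        # Plain k-means: initialise centroids from random rows, then iterate.
        centroids = sub[rng.choice(vocab, num_centroids, replace=False)].copy()
        for _ in range(iters):
            dists = np.linalg.norm(sub[:, None, :] - centroids[None, :, :], axis=2)
            assign = dists.argmin(axis=1)
            for c in range(num_centroids):
                members = sub[assign == c]
                if len(members):
                    centroids[c] = members.mean(axis=0)
        # Final assignment against the finished codebook.
        dists = np.linalg.norm(sub[:, None, :] - centroids[None, :, :], axis=2)
        codebooks[s] = centroids
        codes[:, s] = dists.argmin(axis=1)
    return codebooks, codes

def reconstruct(codebooks, codes):
    """Rebuild an approximate embedding matrix from codes and codebooks."""
    parts = [codebooks[s][codes[:, s]] for s in range(codes.shape[1])]
    return np.concatenate(parts, axis=1)

# Toy usage: a 1,000-word vocabulary with 64-dimensional embeddings.
emb = np.random.default_rng(1).standard_normal((1000, 64)).astype(np.float32)
books, codes = product_quantize(emb, num_subvectors=4, num_centroids=64)
approx = reconstruct(books, codes)
print("mean reconstruction error:", float(np.abs(emb - approx).mean()))
```

In this toy setting, 64-dimensional float32 embeddings (256 bytes per word) shrink to 4 one-byte codes per word plus small shared codebooks, which gives a feel for the kind of memory saving the talk quantifies for full language models.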

Syllabus

Introduction
Neural networks in NLP
Language model
Memory
Neural Network
Word Embedding
Neural Network Size
General Approach
Pruning
Quantization-based approaches
Fixed-point quantization
Product quantization
Speech recognition performance
Binarization
Embedding Matrix
Full Precision Model
Two Methods
Results
Conclusion
Question
Sponsors

Taught by

tinyML
