VQ-VAEs - Neural Discrete Representation Learning - Paper + PyTorch Code Explained
Aleksa Gordić - The AI Epiphany via YouTube
Overview
Explore a comprehensive 35-minute video tutorial on VQ-VAEs (Vector Quantized Variational Autoencoders) and their application to neural discrete representation learning. Revisit the key concepts of autoencoders and VAEs before examining the motivation behind discrete representations. Gain a high-level understanding of the VQ-VAE framework, then dive deeper into its components, including the VQ-VAE loss function. Study the PyTorch implementation and see why the usual KL term is absent from the objective. Investigate the autoregressive models used as priors over the discrete latents and analyze the results. Conclude with an introduction to VQ-VAE-2, highlighting its hierarchical structure of latents and priors. Access supplementary resources, including research papers and code examples, to deepen your understanding of this important technique in AI and machine learning.
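To make the quantization step concrete before watching, here is a minimal PyTorch sketch of the vector-quantization bottleneck the video walks through. It is illustrative rather than the tutorial's actual code: the class name VectorQuantizer is hypothetical, and the codebook size (512), embedding dimension (64), and commitment cost beta = 0.25 are assumed defaults in the spirit of the paper's settings.

import torch
import torch.nn as nn
import torch.nn.functional as F


class VectorQuantizer(nn.Module):
    # Illustrative VQ bottleneck: maps continuous encoder outputs to their
    # nearest codebook vectors and returns the codebook + commitment losses.
    def __init__(self, num_embeddings=512, embedding_dim=64, beta=0.25):
        super().__init__()
        self.beta = beta  # commitment cost (beta in the paper's loss)
        self.codebook = nn.Embedding(num_embeddings, embedding_dim)
        self.codebook.weight.data.uniform_(
            -1.0 / num_embeddings, 1.0 / num_embeddings
        )

    def forward(self, z_e):
        # z_e: encoder output with channel-last layout (..., embedding_dim)
        flat = z_e.reshape(-1, self.codebook.embedding_dim)
        # pick the nearest codebook entry for every position
        distances = torch.cdist(flat, self.codebook.weight)
        indices = distances.argmin(dim=1)
        z_q = self.codebook(indices).view_as(z_e)
        # ||sg[z_e] - e||^2 pulls the codebook toward the encoder outputs;
        # beta * ||z_e - sg[e]||^2 keeps the encoder committed to the codebook
        vq_loss = (F.mse_loss(z_q, z_e.detach())
                   + self.beta * F.mse_loss(z_e, z_q.detach()))
        # straight-through estimator: forward pass uses z_q,
        # gradients are copied back to z_e
        z_q = z_e + (z_q - z_e).detach()
        return z_q, vq_loss, indices

Used as z_q, vq_loss, codes = VectorQuantizer()(encoder_output), the total training objective is the reconstruction loss plus vq_loss, matching the three-term loss discussed in the "VQ-VAE loss" chapter. The straight-through trick in the last line is what lets gradients reach the encoder even though the nearest-neighbor lookup is non-differentiable.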
Syllabus
Intro
A tangent on autoencoders and VAEs
Motivation behind discrete representations
High-level explanation of VQ-VAE framework
Diving deeper
VQ-VAE loss
PyTorch implementation
KL term missing
Prior autoregressive models
Results
VQ-VAE-2
Taught by
Aleksa Gordić - The AI Epiphany