Overview
Explore latent variable models in advanced natural language processing through this comprehensive lecture. Delve into generative vs. discriminative models, deterministic vs. random variables, and variational autoencoders. Learn how to handle discrete latent variables, work through examples of variational autoencoders in NLP, and understand the difference between learning features and learning structure. Cover loss functions, variational inference, regularized autoencoders, sampling techniques, and the motivation behind using latent variables. Discover training methods for VAEs, including aggressive inference network learning, and explore the reparameterization trick and the Gumbel-Softmax function.
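The two sampling techniques named above can be sketched in a few lines of code. What follows is a minimal, illustrative PyTorch sketch, not material from the lecture itself: the function names reparameterize and vae_loss are chosen here for illustration, and the Gaussian posterior with a standard-normal prior is the usual VAE assumption rather than anything specific to this course.

import torch
import torch.nn.functional as F

def reparameterize(mu, log_var):
    # Reparameterization trick: write z ~ N(mu, sigma^2) as a deterministic
    # function of (mu, sigma) plus parameter-free noise, so gradients can
    # flow through the sampling step during VAE training.
    std = torch.exp(0.5 * log_var)   # sigma = exp(log_var / 2)
    eps = torch.randn_like(std)      # eps ~ N(0, I); carries no parameters
    return mu + eps * std            # z = mu + sigma * eps

def vae_loss(recon_log_prob, mu, log_var):
    # Negative ELBO: -E[log p(x|z)] + KL(q(z|x) || p(z)). With a Gaussian
    # posterior and a standard-normal prior, the KL term has this closed form.
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return -recon_log_prob + kl

# Discrete latent variables: the Gumbel-Softmax gives a differentiable
# relaxation of sampling from a categorical distribution.
logits = torch.randn(1, 10)                            # scores over 10 categories
z_soft = F.gumbel_softmax(logits, tau=1.0)             # soft, differentiable sample
z_hard = F.gumbel_softmax(logits, tau=1.0, hard=True)  # one-hot via straight-through

As the temperature tau approaches zero, Gumbel-Softmax samples approach one-hot categorical samples, which is why the technique is used for the discrete latent variables discussed in the lecture.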
Syllabus
Introduction
Types of Variables
Latent Variable Models
Loss Function
Variational Inference
Regularized Autoencoder
Sampling
Ancestral Sampling
Conditioned Language Models
Motivation for Latent Variables
Training VAEs
Aggressive Inference Network Learning
Latent Variables
Discrete Latent Variables
Reparameterization
Random Sampling
Reparameterization Trick
Gumbel Softmax
Gumbel Function
Application Examples
Taught by
Graham Neubig