YouTube

Neural Nets for NLP - Models with Latent Random Variables

Graham Neubig via YouTube

Overview

Explore a comprehensive lecture on models with latent random variables in neural networks for natural language processing. Delve into the distinctions between generative and discriminative models, as well as deterministic and random variables. Gain insights into Variational Autoencoders, their architecture, and loss functions. Learn techniques for handling discrete latent variables, including the Gumbel-Softmax trick. Examine real-world applications of these concepts in NLP tasks, such as language modeling and semantic similarity. Engage with discussion questions to deepen understanding of tree-structured latent variables and their implications for NLP models.
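
For a concrete reference point before the syllabus, below is a minimal sketch of the VAE objective and the reparameterization trick covered in the lecture. It assumes PyTorch and a toy bag-of-words input; the class name and layer sizes are hypothetical illustrations, not the lecture's own code.

```python
# A minimal sketch of a Gaussian VAE, assuming PyTorch and a toy
# bag-of-words input; names and sizes here are hypothetical.
import torch
import torch.nn as nn

class GaussianVAE(nn.Module):
    def __init__(self, vocab_size=1000, hidden=128, latent=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(vocab_size, hidden), nn.Tanh())
        self.to_mu = nn.Linear(hidden, latent)       # mean of q(z|x)
        self.to_logvar = nn.Linear(hidden, latent)   # log-variance of q(z|x)
        self.decoder = nn.Linear(latent, vocab_size)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, so the sample
        # stays differentiable with respect to mu and logvar.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        recon_logits = self.decoder(z)
        # KL regularizer: closed form for a diagonal Gaussian posterior
        # against a standard normal prior.
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
        return recon_logits, kl.mean()
```

The training loss is then the reconstruction term (e.g., cross-entropy over recon_logits) plus the KL term, matching the "Reconstruction loss and KL regularizer" item in the syllabus below.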

Syllabus

Introduction
Discriminative vs generative
Observed vs latent variables
Quiz
Latent Variable Models
Types of latent random variables
Example
Loss Function
Variational inference
Reconstruction loss and KL regularizer
Regularized autoencoder
Learning the VAE
Reparameterization Trick
General
Language
VAE
Reparameterization
Motivation
Consistency
Semantic Similarity
Solutions
Free Bits
Weaken Decoder
Aggressive Inference Network
Handling discrete latent variables
Discrete latent variables
Sampling discrete variables
Gumbel-Softmax (see the sketch after this syllabus)
Application examples
Discrete random variables
Tree structured latent variables
Discussion question
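
As flagged in the "Gumbel-Softmax" syllabus item above, here is a minimal sketch of sampling a discrete latent variable with the Gumbel-Softmax trick. It assumes PyTorch; the function name and the numerical-stability constant are illustrative choices, not the lecture's own code.

```python
# A minimal sketch of the Gumbel-Softmax trick, assuming PyTorch.
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, temperature=1.0):
    """Differentiable, approximately one-hot sample from Categorical(logits)."""
    # Gumbel(0, 1) noise via inverse transform sampling; the small
    # constant guards against log(0).
    u = torch.rand_like(logits)
    g = -torch.log(-torch.log(u + 1e-20) + 1e-20)
    # As temperature -> 0 the output approaches a hard one-hot sample;
    # higher temperatures give smoother outputs and lower-variance gradients.
    return F.softmax((logits + g) / temperature, dim=-1)

# Example: a batch of 2 distributions over 5 categories.
sample = gumbel_softmax_sample(torch.randn(2, 5), temperature=0.5)
```

PyTorch also ships this operation as torch.nn.functional.gumbel_softmax, including a hard (straight-through) variant that returns one-hot samples in the forward pass.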

Taught by

Graham Neubig
