
YouTube

Neural Nets for NLP - Models with Latent Random Variables

Graham Neubig via YouTube

Overview

Explore models with latent random variables in this comprehensive lecture from CMU's Neural Networks for NLP course. Delve into the distinctions between generative and discriminative models, as well as deterministic and random variables. Examine Variational Autoencoders (VAEs) in depth, including their structure, learning process, and challenges in training. Learn techniques for handling discrete latent variables and discover practical applications of VAEs in natural language processing. Gain insights into deep structured latent variable models and their importance in specifying interpretable structures like POS tags and dependency parse trees. Understand the probabilistic perspective on VAEs, explore variational inference methods, and study solutions to common training difficulties such as the re-parameterization trick and weakening the decoder.
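
The lecture's answer to "sampling breaks backprop" is the re-parameterization trick. As a rough orientation before watching, here is a minimal PyTorch-style sketch of that idea; it is not taken from the course materials, and the TinyVAE module, its layer sizes, and the bag-of-words encoder/decoder are illustrative assumptions only.

```python
# Minimal sketch (not from the course) of the re-parameterization trick:
# instead of sampling z ~ N(mu, sigma^2) directly, which breaks backprop,
# sample eps ~ N(0, I) and compute z = mu + sigma * eps.
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, vocab_size=1000, hidden=128, latent=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)
        self.decode = nn.Linear(latent, vocab_size)

    def forward(self, tokens):
        h = self.embed(tokens).mean(dim=1)        # crude bag-of-words "encoder"
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        eps = torch.randn_like(mu)                # randomness moved outside the graph
        z = mu + torch.exp(0.5 * logvar) * eps    # differentiable w.r.t. mu and logvar
        logits = self.decode(z)                   # toy decoder: bag of words from z
        # closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
        return logits, kl.mean()

# Example: a batch of 4 toy "sentences" of length 7
logits, kl = TinyVAE()(torch.randint(0, 1000, (4, 7)))
```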

Syllabus

Intro
Discriminative vs. Generative Models
Quiz: What Types of Variables?
What is a Latent Random Variable Model?
Why Latent Variable Models?
Deep Structured Latent Variable Models: specify structure, but interpretable structure is often discrete, e.g. POS tags, dependency parse trees
Examples of Deep Latent Variable Models
A probabilistic perspective on the Variational Auto-Encoder
What is Our Loss Function?
Practice
Variational Inference: approximates the true posterior p(z|x) with a simpler family of distributions q(z|x) (the standard bound this leads to, the ELBO, is written out after this syllabus)
Variational Autoencoders
Learning VAE
Problem! Sampling Breaks Backprop
Solution: Re-parameterization Trick
Difficulties in Training: of the two components in the VAE objective, the KL divergence term is much easier to learn
Solution 3
Weaken the Decoder
Discrete Latent Variables?
Method 1: Enumeration
Solution 4
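
As a point of reference for the syllabus items on variational inference and the loss function: the quantity a VAE maximizes is the evidence lower bound (ELBO). The standard textbook form is written out below; it is stated here for reference, not copied from the slides.

```latex
\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z\mid x)}\big[\log p_\theta(x\mid z)\big] \;-\; \mathrm{KL}\big(q_\phi(z\mid x)\,\big\|\,p(z)\big)
```

Here q_phi(z|x) is the encoder's approximation to the true posterior p(z|x), and the KL term is the component the lecture notes is "much easier to learn", which is why remedies such as weakening the decoder come up later in the syllabus.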

Taught by

Graham Neubig

