Neural Nets for NLP - Latent Variable Models

Graham Neubig via YouTube

Classroom Contents

  1. Intro
  2. Discriminative vs. Generative Models
  3. Quiz: What Types of Variables?
  4. Why Latent Random Variables?
  5. A Latent Variable Model
  6. What is Our Loss Function? We would like to maximize the corpus log likelihood
  7. Disconnect Between Samples and Objective
  8. VAE Objective: We can create an optimizable objective matching our problem, starting with KL divergence (see the ELBO sketch after this list)
  9. Interpreting the VAE Objective
  10. Problem: Straightforward Sampling is Inefficient
  11. Problem! Sampling Breaks Backprop
  12. Solution: Re-parameterization Trick (see the sketch after this list)
  13. Generating from Language Models
  14. Motivation for Latent Variables
  15. Difficulties in Training
  16. KL Divergence Annealing
  17. Weaken the Decoder
  18. Discrete Latent Variables?
  19. Enumeration
  20. Method 2: Sampling
  21. Reparameterization (Maddison et al. 2017, Jang et al. 2017; see the Gumbel-Softmax sketch after this list)
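Items 6-9 build up the VAE training objective, the evidence lower bound (ELBO). A minimal sketch of that objective, assuming a standard-normal prior p(z) and a diagonal-Gaussian approximate posterior q(z|x); the names elbo, recon_log_prob, mu, and logvar are illustrative and not taken from the lecture:

    import torch

    def elbo(recon_log_prob, mu, logvar):
        # recon_log_prob: log p(x|z) summed over the sentence, shape (batch,)
        # mu, logvar: parameters of q(z|x), shape (batch, latent_dim)
        # Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian.
        kl = 0.5 * torch.sum(logvar.exp() + mu.pow(2) - 1.0 - logvar, dim=-1)
        # ELBO = E_q[log p(x|z)] - KL; maximizing it maximizes a lower bound
        # on the corpus log likelihood mentioned in item 6.
        return recon_log_prob - kl

Averaging the negative of this quantity over a minibatch gives a loss that ordinary gradient descent can minimize.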
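Items 11-12 cover why naively sampling z ~ q(z|x) blocks backpropagation and how the re-parameterization trick restores the gradient path. A minimal sketch for the Gaussian case (the function name reparameterize is an assumption, not the lecture's code):

    import torch

    def reparameterize(mu, logvar):
        # Instead of sampling z ~ N(mu, sigma^2) directly (which has no
        # gradient), sample parameter-free noise eps ~ N(0, I) and transform it.
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + eps * std  # differentiable with respect to mu and logvar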
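Item 21 refers to the Gumbel-Softmax / Concrete relaxation (Maddison et al. 2017, Jang et al. 2017), which extends re-parameterization to discrete latent variables. A rough sketch of the usual formulation (the function name and default temperature are illustrative):

    import torch
    import torch.nn.functional as F

    def gumbel_softmax_sample(logits, temperature=1.0):
        # Gumbel(0, 1) noise: -log(-log(U)) with U ~ Uniform(0, 1).
        u = torch.rand_like(logits)
        gumbel = -torch.log(-torch.log(u + 1e-20) + 1e-20)
        # Lower temperatures push the soft sample closer to a one-hot vector,
        # while keeping the whole operation differentiable in the logits.
        return F.softmax((logits + gumbel) / temperature, dim=-1)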
