Variational Auto-Encoders
YouTube videos curated by Class Central.
Classroom Contents
Neural Nets for NLP - Models with Latent Random Variables
- 1 Intro
- 2 Discriminative vs. Generative Models
- 3 Quiz: What Types of Variables?
- 4 What is a Latent Random Variable Model?
- 5 Why Latent Variable Models?
- 6 Deep Structured Latent Variable Models • Specify structure, but interpretable structure is often discrete, e.g. POS tags, dependency parse trees
- 7 Examples of Deep Latent Variable Models
- 8 A Probabilistic Perspective on Variational Auto-Encoders
- 9 What is Our Loss Function?
- 10 Practice
- 11 Variational Inference • Variational inference approximates the true posterior p(z|x) with a family of distributions
- 12 Variational Inference • Variational inference approximates the true posterior p(z|x) with a family of distributions (an ELBO sketch follows this list)
- 13 Variational Auto-Encoders
- 14 Variational Autoencoders
- 15 Learning VAE
- 16 Problem! Sampling Breaks Backprop
- 17 Solution: Re-parameterization Trick (see the code sketch after this list)
- 18 Difficulties in Training • Of the two components in the VAE objective, the KL divergence term is much easier to learn
- 19 Solution 3
- 20 Weaken the Decoder
- 21 Discrete Latent Variables?
- 22 Method 1: Enumeration (see the code sketch after this list)
- 23 Solution 4
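
The variational inference items (11–12) are summarized by the evidence lower bound (ELBO) that a VAE maximizes. The statement below is the standard form of that bound, written here for reference rather than copied from the lecture slides; q(z|x) denotes the variational approximation to the true posterior p(z|x):

```latex
% ELBO: q(z|x) approximates the true posterior p(z|x)
\log p(x) = \log \int p(x \mid z)\, p(z)\, dz
          \;\ge\; \mathbb{E}_{q(z \mid x)}\big[\log p(x \mid z)\big]
          \;-\; \mathrm{KL}\big(q(z \mid x)\,\|\,p(z)\big)
```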
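Items 16–17 note that sampling z ~ q(z|x) breaks backpropagation and that the re-parameterization trick restores it. The PyTorch sketch below illustrates the trick for a diagonal-Gaussian posterior; the module, layer sizes, and names (`GaussianEncoder`, `reparameterize`) are illustrative assumptions, not the lecture's code:

```python
import torch
import torch.nn as nn

class GaussianEncoder(nn.Module):
    """Maps an input x to the mean and log-variance of q(z|x)."""
    def __init__(self, input_dim: int = 784, latent_dim: int = 16):
        super().__init__()
        self.hidden = nn.Linear(input_dim, 128)
        self.mu = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)

    def forward(self, x):
        h = torch.relu(self.hidden(x))
        return self.mu(h), self.logvar(h)

def reparameterize(mu, logvar):
    """Return z = mu + sigma * eps with eps ~ N(0, I).

    Only eps is stochastic, so gradients flow through mu and logvar
    and backpropagation is no longer blocked by the sampling step.
    """
    std = torch.exp(0.5 * logvar)
    eps = torch.randn_like(std)
    return mu + eps * std

# Usage: encode a batch, draw a differentiable sample, compute the analytic KL.
encoder = GaussianEncoder()
x = torch.rand(32, 784)
mu, logvar = encoder(x)
z = reparameterize(mu, logvar)
kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()
```

Because only `eps` is random, gradients of the reconstruction and KL terms reach the encoder parameters through `mu` and `logvar` as usual.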
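For item 22, when the latent variable is discrete with a small number of values, the expectation in the ELBO can be computed exactly by enumerating every value of z instead of sampling, so no re-parameterization is needed. The sketch below uses toy stand-in distributions (a uniform prior and random likelihoods) purely to show the summation; it is not the lecture's model:

```python
import math
import torch

K = 5                                           # number of discrete latent values
logits_q = torch.randn(K, requires_grad=True)   # parameters of q(z | x)
log_q = torch.log_softmax(logits_q, dim=0)      # log q(z = k | x)
log_prior = torch.full((K,), -math.log(K))      # uniform prior log p(z = k)
log_lik = torch.randn(K)                        # stand-in for log p(x | z = k)

# ELBO = sum_k q(z=k|x) * [log p(x|z=k) + log p(z=k) - log q(z=k|x)]
elbo = torch.sum(log_q.exp() * (log_lik + log_prior - log_q))
(-elbo).backward()                              # exact gradient w.r.t. logits_q
```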