Generative Adversarial Networks (GANs)
Class Central Classrooms beta
YouTube videos curated by Class Central.
Classroom Contents
Deep Generative Modeling
- 1 Intro
- 2 Supervised vs. unsupervised learning
- 3 Generative modeling: take training samples from some distribution as input and learn a model that represents that distribution
- 4 Why generative models? Debiasing
- 5 Why generative models? Outlier detection
- 6 What is a latent variable?
- 7 Autoencoders: background
- 8 Dimensionality of latent space vs. reconstruction quality
- 9 Autoencoders for representation learning
- 10 Traditional autoencoders
- 11 VAEs: key difference from traditional autoencoders
- 12 VAE optimization
- 13 Intuition on regularization and the Normal prior
- 14 Reparametrizing the sampling layer
- 15 Why latent variable models? Debiasing
- 16 Generative Adversarial Networks (GANs)
- 17 Intuition behind GANs
- 18 Training GANs: loss function
- 19 GANs for image synthesis: latest results
- 20 Applications of paired translation
- 21 Paired translation: coloring from edges
- 22 Distribution transformations with GANs
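The reparametrization of the sampling layer covered in section 14 can be sketched in a few lines. This is a minimal illustration, not code from the lecture: instead of sampling z directly from N(mu, sigma^2), which blocks gradient flow through the sampling step, we sample noise eps from N(0, I) and compute z = mu + sigma * eps, so gradients can pass through mu and sigma. Function and variable names here are illustrative.

```python
import numpy as np

def reparametrize(mu, log_var, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, I).

    mu and log_var would come from the VAE encoder; using the
    log-variance keeps the predicted variance positive.
    """
    sigma = np.exp(0.5 * log_var)          # log-variance -> standard deviation
    eps = rng.standard_normal(mu.shape)    # noise, independent of the parameters
    return mu + sigma * eps                # deterministic in mu and sigma

# Example: draw one latent sample for a 3-dimensional latent space.
rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0, 0.5])
log_var = np.log(np.array([0.25, 1.0, 4.0]))   # sigma = [0.5, 1.0, 2.0]
z = reparametrize(mu, log_var, rng)
print(z.shape)  # one sample per latent dimension
```

Because eps carries all the randomness, backpropagation treats z as a deterministic function of mu and log_var, which is what makes end-to-end VAE training possible.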
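The GAN loss function discussed in section 18 can likewise be written out directly. In the standard minimax formulation, the discriminator D maximizes E[log D(x)] + E[log(1 - D(G(z)))], while the generator is typically trained with the non-saturating objective of maximizing log D(G(z)). The sketch below assumes D's outputs are probabilities in (0, 1); function names are illustrative, not from the lecture.

```python
import numpy as np

def discriminator_loss(d_real, d_fake, eps=1e-8):
    """Negative of the discriminator's objective (to be minimized).

    d_real: D's outputs on real samples, d_fake: D's outputs on
    generated samples G(z). eps guards against log(0).
    """
    return -(np.mean(np.log(d_real + eps)) + np.mean(np.log(1.0 - d_fake + eps)))

def generator_loss(d_fake, eps=1e-8):
    """Non-saturating generator loss: maximize log D(G(z))."""
    return -np.mean(np.log(d_fake + eps))

# A confident, correct discriminator (real -> ~1, fake -> ~0) has loss near 0;
# a discriminator that outputs 0.5 everywhere has loss 2*log(2).
print(discriminator_loss(np.array([0.99, 0.98]), np.array([0.02, 0.01])))
print(discriminator_loss(np.array([0.5]), np.array([0.5])))
```

At the fixed point where the generator matches the data distribution, the best the discriminator can do is output 0.5 everywhere, so its loss settles at 2 log 2; this is the equilibrium intuition behind the adversarial game.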