YouTube

PCA, AE, K-Means, Gaussian Mixture Model, Sparse Coding, and Intuitive VAE

Alfredo Canziani via YouTube

Overview

Explore advanced machine learning techniques in this comprehensive lecture by Yann LeCun. Dive into Principal Component Analysis (PCA), Auto-encoders, K-means clustering, Gaussian mixture models, sparse coding, and Variational Autoencoders (VAE). Learn about training methods, architectural approaches, and regularized Energy-Based Models (EBM). Gain insights into unconditional regularized latent variable EBMs, amortized inference, convolutional sparse coding, and video prediction. Benefit from in-depth Q&A sessions on labels, supervised learning, norms, and posterior distributions. Enhance your understanding with practical examples using MNIST and natural patches, and explore intuitive interpretations of VAEs.
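As a taste of the first technique in the syllabus, here is a minimal, hypothetical PCA sketch (not taken from the lecture itself): PCA viewed as a linear bottleneck auto-encoder, where encoding projects onto the top principal directions and decoding reconstructs from that projection. The data here is synthetic and all names are illustrative.

```python
# Hypothetical sketch: PCA as a linear encode/decode bottleneck.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))  # correlated synthetic data
Xc = X - X.mean(axis=0)                                  # center the data

# PCA via SVD: rows of Vt are the principal directions
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
Z = Xc @ Vt[:k].T        # encode: project onto top-k components
X_hat = Z @ Vt[:k]       # decode: linear reconstruction

# Reconstruction error equals the energy in the discarded components
err = np.sum((Xc - X_hat) ** 2)
print(np.isclose(err, np.sum(S[k:] ** 2)))
```

The check at the end reflects the Eckart–Young result: the squared reconstruction error of a rank-k PCA projection is the sum of the squared discarded singular values.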

Syllabus

– Welcome to class
– Training methods revisited
– Architectural methods
– 1. PCA
– Q&A on Definitions: Labels, unconditional, unsupervised, and self-supervised learning
– 2. Auto-encoder with Bottleneck
– 3. K-Means
– 4. Gaussian mixture model
– Regularized EBM
– Yann out of context
– Q&A on Norms and Posterior: when the student is thinking too far ahead
– 1. Unconditional regularized latent variable EBM: Sparse coding
– Sparse modeling on MNIST & natural patches
– 2. Amortized inference
– ISTA algorithm & RNN Encoder
– 3. Convolutional sparse coding
– 4. Video prediction: very briefly
– 5. VAE: an intuitive interpretation
– Helpful whiteboard stuff
– Another interpretation

Taught by

Alfredo Canziani
