Towards Nonlinear Disentanglement in Natural Data with Temporal Sparse Coding

Andreas Geiger via YouTube

Overview

Explore the concept of nonlinear disentanglement in natural data through temporal sparse coding in this 52-minute talk by Yash Sharma, presented in the Autonomous Vision Group's seminar series in Tübingen. Delve into unsupervised representation learning techniques for disentangling underlying factors of variation in naturalistic videos. Examine the SlowVAE model, which leverages temporally sparse transition distributions to achieve disentanglement without assumptions on the number of changing factors. Learn about the proof of identifiability and the model's performance on benchmark datasets. Discover two new video datasets with natural dynamics, Natural Sprites and KITTI Masks, introduced as benchmarks for disentanglement research. Gain insights into time contrastive learning, permutation contrastive learning, and the Slow Variational Autoencoder. Explore results on various datasets and consider open questions in the field of disentanglement in machine learning.
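The temporal-sparsity idea behind SlowVAE can be illustrated with a minimal sketch: the model assumes latent transitions between consecutive frames follow a heavy-tailed Laplace prior, so that only a few latent factors change at a time. The snippet below is a hypothetical illustration (not the authors' implementation); the function name, the `rate` parameter, and the toy latents are all assumptions made for this example.

```python
import numpy as np

def laplace_transition_nll(z, rate=1.0):
    """Negative log-likelihood of latent transitions under an i.i.d.
    Laplace prior p(z_t | z_{t-1}) proportional to exp(-rate * |z_t - z_{t-1}|).

    A heavy-tailed transition prior like this favors *sparse* changes:
    between frames, most latent dimensions stay put and only a few move,
    which is the temporal-sparsity assumption described in the talk.

    z: array of shape (T, d) -- latent codes for T consecutive frames.
    """
    diffs = np.diff(z, axis=0)                       # (T-1, d) per-frame changes
    # Laplace log-density per coordinate: log(rate/2) - rate * |x|
    log_p = np.log(rate / 2.0) - rate * np.abs(diffs)
    return -log_p.sum()

# Toy comparison: two transitions with the same Euclidean magnitude.
z0 = np.zeros(4)
sparse_step = np.array([2.0, 0.0, 0.0, 0.0])   # one factor changes a lot
dense_step = np.ones(4)                        # all factors change a little

nll_sparse = laplace_transition_nll(np.stack([z0, sparse_step]))
nll_dense = laplace_transition_nll(np.stack([z0, dense_step]))
print(nll_sparse < nll_dense)  # the sparse transition is more likely
```

Under a Gaussian prior both steps would be equally likely (equal L2 norm), but the Laplace prior's L1-style penalty prefers the sparse one, which is why a temporally sparse prior can separate factors that change independently over time.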

Syllabus

Intro
Overview
What is Disentanglement?
Disentanglement Methods
What about time?
Time Contrastive Learning (TCL)
Why does this work?
Permutation Contrastive Learning (PCL)
What about reality?
Identifiability Proof Intuition
Slow Variational Autoencoder (Slow VAE)
Disentanglement Lib
Results on DSprites
Results on KITTI Masks
Natural Sprites and KITTI Masks
PCL & Ada-GVAE
PCL Simulation
Open Questions

Taught by

Andreas Geiger
