Overview
Explore the fundamental role of representation learning in neural networks and its impact on advancing deep learning algorithms in this 45-minute conference talk. Delve into the information bottleneck analysis of deep learning algorithms, gaining insight into how learning unfolds and how patterns emerge across layers of learned representations. Examine how this analysis offers practical perspectives on theoretical concepts in deep learning research, including nuisance insensitivity and disentanglement. Cover topics such as perception tasks, feature engineering, the information plane, geometric clustering, and representation space, concluding with a comprehensive recap of the discussed concepts.
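As an illustration of the information-plane idea behind the talk's bottleneck analysis, the sketch below (an assumption for illustration, not material from the talk) estimates the mutual information between an input and a representation by histogram binning, the simplest estimator used in such analyses:

```python
import numpy as np

def mutual_information(x, y, bins=10):
    """Estimate I(X; Y) in bits from paired samples via 2-D histogram binning."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                 # empirical joint distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of X
    py = pxy.sum(axis=0, keepdims=True)       # marginal of Y
    nz = pxy > 0                              # skip empty cells (0 * log 0 = 0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
t = x + 0.1 * rng.normal(size=5000)   # a "layer" that closely tracks the input
n = rng.normal(size=5000)             # noise independent of the input

# A faithful representation retains far more information about x than noise does.
print(mutual_information(x, t), mutual_information(x, n))
```

Tracking such estimates for every layer over training, with I(X; T) on one axis and I(T; Y) on the other, produces the information-plane trajectories the talk refers to.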
Syllabus
Introduction
Agenda
Perception tasks
Representation learning
Black boxes
Feature engineering
Information Plane
Rafts
Bottom Line
Nuisance
Exceptions
Disentanglement
Total Correlation
Geometric Clustering
Representation Space
Recap
Taught by
Open Data Science