Multiview and Self-Supervised Representation Learning - Nonlinear Mixture Identification

Institute for Pure & Applied Mathematics (IPAM) via YouTube

Overview

Explore a 48-minute lecture on multiview and self-supervised representation learning from a nonlinear mixture identification perspective, presented by Xiao Fu of Oregon State University at IPAM's Explainable AI for the Sciences workshop. Examine the central concept of representation learning and its importance in preventing overfitting and in enabling domain adaptation and transfer learning. Investigate two representation learning paradigms that use multiple views of data, covering both naturally acquired and artificially produced multiview data. Analyze the effectiveness of multiview analysis tools such as deep canonical correlation analysis and of self-supervised learning paradigms such as BYOL and Barlow Twins. Discover an intuitive generative model of multiview data and learn how latent correlation maximization guarantees the extraction of shared components across views. Explore methods for disentangling private information from shared components and understand the implications for cross-view translation and data generation. Gain insights from a finite-sample analysis of nonlinear mixture identifiability and examine the practical applications of the theoretical results and newly designed regularization techniques.
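
To make the shared-component idea concrete, here is a minimal, self-contained sketch (not code from the lecture): two synthetic views are generated as mixtures of a shared latent component plus view-specific private components, and classical linear CCA, the simplest form of latent correlation maximization, recovers projections of the shared part. All dimensions, noise levels, and variable names are illustrative assumptions; the lecture itself addresses the harder nonlinear-mixture setting.

    # Illustrative sketch only: linear two-view generative model + classical CCA.
    # The lecture concerns nonlinear mixtures; this linear toy example just shows
    # why maximizing latent correlation pulls out the component shared by both views.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000                      # number of samples (assumed for illustration)
    d_shared, d_private = 2, 3    # latent dimensions (assumed for illustration)

    c = rng.standard_normal((n, d_shared))    # shared latent component
    p1 = rng.standard_normal((n, d_private))  # private component of view 1
    p2 = rng.standard_normal((n, d_private))  # private component of view 2

    # Each view is a mixture of the shared and its own private latents, plus noise.
    A1 = rng.standard_normal((d_shared + d_private, 8))
    A2 = rng.standard_normal((d_shared + d_private, 8))
    x1 = np.hstack([c, p1]) @ A1 + 0.05 * rng.standard_normal((n, 8))
    x2 = np.hstack([c, p2]) @ A2 + 0.05 * rng.standard_normal((n, 8))

    def cca(x, y):
        """Classical linear CCA: whiten each view, then SVD the cross-covariance."""
        x = x - x.mean(0)
        y = y - y.mean(0)
        cxx = x.T @ x / len(x)
        cyy = y.T @ y / len(y)
        cxy = x.T @ y / len(x)
        wx = np.linalg.inv(np.linalg.cholesky(cxx)).T  # whitening for view 1
        wy = np.linalg.inv(np.linalg.cholesky(cyy)).T  # whitening for view 2
        u, s, vt = np.linalg.svd(wx.T @ cxy @ wy)
        return s, x @ wx @ u, y @ wy @ vt.T

    corrs, z1, z2 = cca(x1, x2)
    # The first d_shared canonical correlations are close to 1 (the shared latent
    # is recovered); the rest are near 0 because the private components of the
    # two views are statistically independent.
    print("canonical correlations:", np.round(corrs, 3))

The same correlation-maximization principle, applied with nonlinear (deep) encoders in place of the linear whitening maps above, is the setting whose identifiability guarantees the lecture analyzes.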

Syllabus

Xiao Fu - Multiview and Self-Supervised Representation Learning: Nonlinear Mixture Identification

Taught by

Institute for Pure & Applied Mathematics (IPAM)
