Multiview and Self-Supervised Representation Learning - Nonlinear Mixture Identification
Institute for Pure & Applied Mathematics (IPAM) via YouTube
Overview
Explore a 48-minute lecture on multiview and self-supervised representation learning from a nonlinear mixture identification perspective, presented by Xiao Fu of Oregon State University at IPAM's Explainable AI for the Sciences workshop. Examine the central concept of representation learning and its importance in preventing overfitting and enhancing domain adaptation and transfer learning. Investigate two representation learning paradigms that use multiple views of data, covering both naturally acquired and artificially produced multiview data. Analyze the effectiveness of multiview analysis tools such as deep canonical correlation analysis and of self-supervised learning paradigms such as BYOL and Barlow Twins. Discover an intuitive generative model of multiview data and learn how latent correlation maximization guarantees the extraction of shared components across views. Explore methods for disentangling private information from shared components and understand the implications for cross-view translation and data generation. Gain insights from a finite-sample analysis of nonlinear mixture identifiability, and examine practical applications of the theoretical results and newly designed regularization techniques.
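The core idea of correlation-based shared-component extraction can be illustrated with a toy sketch. The setup below is an assumption for illustration only, not the lecturer's model: two views are generated as nonlinear mixtures of one shared latent component plus view-private components, and plain linear CCA (a stand-in for the deep CCA mentioned above) finds the maximally correlated pair of projections across the views.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Toy generative model (hypothetical): shared latent z, private components c1, c2.
z = rng.normal(size=(n, 1))
c1 = rng.normal(size=(n, 1))
c2 = rng.normal(size=(n, 1))

# Two views: nonlinear (tanh) mixtures of shared + private parts.
x1 = np.tanh(np.hstack([z, c1]) @ rng.normal(size=(2, 3)))
x2 = np.tanh(np.hstack([z, c2]) @ rng.normal(size=(2, 3)))

def cca_first_pair(a, b, eps=1e-6):
    """Return directions (u, v) maximizing corr(a @ u, b @ v) and that correlation."""
    a = a - a.mean(0)
    b = b - b.mean(0)
    caa = a.T @ a / len(a) + eps * np.eye(a.shape[1])
    cbb = b.T @ b / len(b) + eps * np.eye(b.shape[1])
    cab = a.T @ b / len(a)
    # Whiten each view, then take the top singular pair of the cross-covariance.
    ia = np.linalg.inv(np.linalg.cholesky(caa))
    ib = np.linalg.inv(np.linalg.cholesky(cbb))
    u_, s, vt = np.linalg.svd(ia @ cab @ ib.T)
    return ia.T @ u_[:, 0], ib.T @ vt[0], s[0]

u, v, rho = cca_first_pair(x1, x2)
print(f"top canonical correlation: {rho:.3f}")
```

Because only z is common to both views, the top canonical pair tracks the shared component; the private parts c1 and c2 are uncorrelated across views and are suppressed. Recovering those private components on top of the shared one is exactly the disentanglement question the lecture addresses.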
Syllabus
Xiao Fu - Multiview and Self-Supervised Representation Learning: Nonlinear Mixture Identification
Taught by
Institute for Pure & Applied Mathematics (IPAM)