Explore a comprehensive lecture on learning causal representations from unknown, latent interventions in a general setting: a Gaussian latent distribution combined with a general (non-linear) mixing function. The lecture establishes strong identifiability results for unknown single-node interventions, generalizing prior works that were restricted to weaker model classes, and presents the first instance of causal identifiability from non-paired interventions for deep neural network embeddings. Follow the proof sketch, which uncovers high-dimensional geometric structure in the data distribution after a non-linear density transformation, through an analysis of quadratic forms of precision matrices. Learn about a proposed contrastive algorithm for identifying the latent variables in practice, along with its performance evaluation on a variety of tasks. The lecture covers an introduction, background, technical details, the main theorem, a proof sketch, experimental methodology, experiments, a summary, and a concluding discussion.
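To make the role of precision-matrix quadratic forms concrete, here is a minimal sketch, assuming a linear Gaussian structural equation model (a simplified stand-in, not the lecture's general non-linear setting). For z = Bz + eps with eps ~ N(0, D), the latents are Gaussian with precision Lambda = (I - B)^T D^{-1} (I - B), and the log-density is, up to a constant, the quadratic form -0.5 z^T Lambda z. A single-node intervention that cuts a node's incoming edges perturbs only the precision entries involving that node and its former parents, which is the kind of structure an analysis of quadratic forms can exploit. All variable names and the toy graph below are illustrative assumptions.

```python
import numpy as np

def precision(B, noise_var):
    """Precision matrix of a linear Gaussian SEM z = B z + eps, eps ~ N(0, diag(noise_var))."""
    d = B.shape[0]
    I = np.eye(d)
    D_inv = np.diag(1.0 / noise_var)
    return (I - B).T @ D_inv @ (I - B)

def log_density_quadratic(z, Lam):
    """Quadratic form appearing in the Gaussian log-density (up to a constant)."""
    return -0.5 * z @ Lam @ z

# Toy observational model: 3 nodes, single edge 0 -> 2.
B = np.zeros((3, 3))
B[2, 0] = 0.8
Lam_obs = precision(B, np.ones(3))

# Unknown single-node intervention on node 2: cut its incoming edges
# (and, here, also change its noise variance).
B_int = B.copy()
B_int[2, :] = 0.0
Lam_int = precision(B_int, np.array([1.0, 1.0, 2.0]))

# Only precision entries involving node 2 or its former parent (node 0)
# differ between the two regimes; entries involving only node 1 are untouched.
changed = np.argwhere(~np.isclose(Lam_obs, Lam_int))
print(changed)
```

Comparing the quadratic forms of the observational and interventional precision matrices in this way is what localizes the intervened node, even before any mixing function is taken into account.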