Overview
Explore causal representation learning through soft interventions in this lecture by Jiaqi Zhang, presented by Valence Labs. Delve into causal disentanglement, which aims to uncover data representations built from latent variables related through a causal model. Learn about identifiability in latent causal models and how it can be achieved even when the causal variables are unobserved, via a generalized notion of faithfulness. Discover how this framework enables the prediction of unseen combinations of interventions, particularly in the context of single-cell biology. Follow along as Zhang presents a scalable algorithm, based on auto-encoding variational Bayes, for implementing causal disentanglement. The lecture covers linear causal disentanglement, structural causal models, the problem setup, identifiability results, the algorithm and its loss, and experimental results, and concludes with a Q&A session to deepen your understanding of this approach to causal representation learning.
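For orientation, below is a minimal, illustrative sketch of the kind of generative setup the lecture discusses: a linear structural causal model over latent variables, a mean-shift-style soft intervention that changes a latent variable's mechanism without cutting its parents, and a linear map mixing latents into observations. The dimensions, adjacency matrix, noise scales, and helper names (`sample_latents`, `sample_observations`) are assumptions made for illustration, not the lecture's notation or the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the lecture): d latent causal variables, p observed features.
d, p = 3, 10

# Strictly upper-triangular weighted adjacency matrix over the latents,
# i.e. a DAG in topological order: A[i, j] is the edge weight i -> j.
A = np.array([[0.0, 1.2,  0.0],
              [0.0, 0.0, -0.8],
              [0.0, 0.0,  0.0]])

# Linear mixing from latent causal variables to observations: x = G z + noise.
G = rng.normal(size=(p, d))

def sample_latents(n, soft_shift=None):
    """Sample latents from the linear SCM z_j = sum_i A[i, j] * z_i + eps_j.

    `soft_shift` is a length-d vector of mean shifts modelling a soft
    intervention (it alters each target's mechanism while keeping its
    parents); None yields observational data.
    """
    shift = np.zeros(d) if soft_shift is None else np.asarray(soft_shift)
    Z = np.zeros((n, d))
    for j in range(d):  # topological order, so parents are already sampled
        eps = rng.normal(scale=0.5, size=n)
        Z[:, j] = Z @ A[:, j] + shift[j] + eps
    return Z

def sample_observations(n, soft_shift=None):
    """Observed data: linear mixing of the latents plus measurement noise."""
    Z = sample_latents(n, soft_shift)
    X = Z @ G.T + 0.1 * rng.normal(size=(n, p))
    return X, Z

# Observational data vs. data under a soft intervention shifting the second latent.
X_obs, _ = sample_observations(1000)
X_int, _ = sample_observations(1000, soft_shift=[0.0, 2.0, 0.0])
print(X_obs.mean(axis=0).round(2))
print(X_int.mean(axis=0).round(2))
```

Comparing the feature means of the two samples shows how a soft intervention shifts the observed distribution; roughly speaking, such distributional changes across intervention regimes are the signal that identifiability arguments of this kind exploit.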
Syllabus
- Introduction
- Linear Causal Disentanglement via Interventions
- Structural Causal Models
- Setup
- Identifiability
- Algorithm
- Loss
- Results
- Conclusions
- Q&A
Taught by
Valence Labs