Overview
Explore the emerging field of Causal Representation Learning (CRL) in this comprehensive tutorial. Delve into the core technical problems and assumptions driving CRL, which aims to learn causal variables and mechanisms from low-level observations such as text, images, or biological measurements. Build intuition for why nonlinear mixing makes latent variables hard to identify, and why causal representations cannot be recovered without assumptions. Survey the learning signals that make recovery possible, including time contrastive learning, tree-based regularization, and sparse mechanisms. Examine potential applications of CRL in scientific discovery and AI development, and conclude with a discussion of open questions and future directions for this area of research. A minimal code sketch of one of these learning signals follows below.
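To make one of the learning signals above concrete, here is a minimal sketch of time contrastive learning (TCL) in PyTorch. It is not the tutorial's exact setup; the architecture, segment counts, and hyperparameters are illustrative assumptions. The idea: when latent sources are nonstationary across time segments, training an encoder plus a classifier to predict which segment an observation came from forces the encoder to recover the sources (up to pointwise nonlinearities).

```python
# Sketch of time contrastive learning (TCL). All dimensions and
# hyperparameters below are illustrative assumptions, not taken
# from the tutorial.
import torch
import torch.nn as nn

torch.manual_seed(0)

n_segments, seg_len, n_sources = 10, 200, 2

# Nonstationary sources: each time segment has its own variance scale.
scales = torch.rand(n_segments, n_sources) + 0.5
sources = torch.randn(n_segments, seg_len, n_sources) * scales[:, None, :]
labels = torch.arange(n_segments).repeat_interleave(seg_len)

# Unknown nonlinear mixing from latent sources to observations.
mix = nn.Sequential(nn.Linear(n_sources, 8), nn.Tanh(), nn.Linear(8, n_sources))
with torch.no_grad():
    x = mix(sources.reshape(-1, n_sources))

# Encoder (the learned representation) plus a linear segment classifier.
encoder = nn.Sequential(nn.Linear(n_sources, 32), nn.ReLU(), nn.Linear(32, n_sources))
head = nn.Linear(n_sources, n_segments)
opt = torch.optim.Adam([*encoder.parameters(), *head.parameters()], lr=1e-2)

# Train to predict the segment index; the auxiliary classification
# task is the "learning signal" that identifies the sources.
for step in range(500):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(head(encoder(x)), labels)
    loss.backward()
    opt.step()

# encoder(x) now approximates the sources up to pointwise nonlinearity.
print(f"final segment-classification loss: {loss.item():.3f}")
```

The segment label acts as an observed auxiliary variable: under the nonstationarity assumption, classifying it is only possible if the encoder inverts the mixing, which is what gives TCL its identifiability guarantee.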
Syllabus
- Intro
- How we got here
- What would it take to build an AI bench scientist?
- The setup
- The challenge of nonlinearity
- No causal representations without assumptions
- Time contrastive learning
- Switchover: Dhanya Sridhar
- What other learning signals can we use?
- Tree-based regularization
- Sparse mechanisms
- Multiple views and sparsity
- Concluding questions
Taught by
Valence Labs