
Identifying Representations for Intervention Extrapolation

Valence Labs via YouTube

Overview

Explore a 33-minute conference talk on identifying representations for intervention extrapolation, presented by Sorawit (James) Saengkyongam and hosted by Valence Labs. Delve into identifiable and causal representation learning as a route to better generalizability and robustness in machine learning. Examine the task of intervention extrapolation: predicting the effects on an outcome of interventions that were never observed during training. Learn about the setup involving an outcome Y, observed features X, latent features Z, and exogenous action variables A. Discover how identifiable representations enable effective solutions even when the effects of the interventions on the outcome are non-linear. Understand the Rep4Ex approach, which combines intervention extrapolation with identifiable representation learning. Explore the theoretical findings on identifiability and the proposed method for enforcing a linear invariance constraint. Follow along as the speaker validates the theory through synthetic experiments and demonstrates that the approach succeeds in predicting the effects of unseen interventions. Engage with the Q&A session to gain further insight into this line of research in causal representation learning.
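The extrapolation problem described above can be illustrated with a toy simulation. The sketch below is our own illustrative construction, not the talk's exact model: the names `M`, `beta`, and `sample` are assumed, and we simply assume actions A shift the latents Z linearly, the observed X is a nonlinear mixing of Z, and Y depends linearly on Z. A regression on raw X fails on interventions outside the training range, while a regression on the latents Z (as an identified representation would recover, up to an affine transformation) extrapolates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative structural model (our assumption, not the paper's exact setup):
#   A -> Z linearly:    Z = M A + noise   (A: exogenous action variables)
#   Z -> X nonlinearly: X = tanh(Z)       (observed features mix the latents)
#   Z -> Y linearly:    Y = beta . Z + noise
d = 2
M = rng.normal(size=(d, d))
beta = rng.normal(size=d)

def sample(a_scale, n=2000):
    A = rng.uniform(-a_scale, a_scale, size=(n, d))
    Z = A @ M.T + 0.1 * rng.normal(size=(n, d))
    X = np.tanh(Z)                       # nonlinear mixing of the latents
    Y = Z @ beta + 0.1 * rng.normal(size=n)
    return A, Z, X, Y

# Train on a limited range of actions ...
A_tr, Z_tr, X_tr, Y_tr = sample(a_scale=1.0)
# ... test on stronger, unseen interventions (extrapolation).
A_te, Z_te, X_te, Y_te = sample(a_scale=3.0)

def fit_linear(F, y):
    # Ordinary least squares with an intercept column.
    F1 = np.column_stack([F, np.ones(len(F))])
    w, *_ = np.linalg.lstsq(F1, y, rcond=None)
    return lambda G: np.column_stack([G, np.ones(len(G))]) @ w

mse = lambda yhat, y: float(np.mean((yhat - y) ** 2))

# Regressing Y on raw X extrapolates poorly: tanh saturates, so a fit
# on small |A| cannot track Y's growth under large |A|.
err_x = mse(fit_linear(X_tr, Y_tr)(X_te), Y_te)
# With Z available, the A -> Y effect stays linear and transfers to
# interventions far outside the training range.
err_z = mse(fit_linear(Z_tr, Y_tr)(Z_te), Y_te)
print(f"test MSE from X: {err_x:.3f}, from Z: {err_z:.3f}")
```

In the talk's terms, this is "intervention extrapolation with observed Z"; the Rep4Ex contribution is to recover a usable Z from X alone by enforcing a linear invariance constraint on the learned representation.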

Syllabus

- Introduction
- Intervention Extrapolation with Observed Z
- Intervention Extrapolation via Identifiable Representations
- Identification of the Unmixing Function
- Simulations
- Q&A

Taught by

Valence Labs
