Explore the surprising capabilities of passive learning for understanding causality and experimentation in this 56-minute talk by Andrew Lampinen from Google DeepMind. Delve into the distinction between passive, observational learning and active, interventional learning, and discover how language models can acquire causal strategies through passive imitation of expert interventional data. Examine empirical evidence showing that agents can apply these strategies to uncover novel causal structures, even in complex environments with high-dimensional observations. Learn about the role of natural language explanations in improving generalization, including to out-of-distribution scenarios with confounded training data. Investigate how language models, trained solely on next-word prediction, can extrapolate causal intervention strategies from few-shot prompts (a minimal sketch of such a prompt follows below). Reflect on the implications of these findings for understanding language model behaviors and capabilities, and consider open questions about whether AI systems can use explanations in a more human-like way.
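
To make the few-shot setup concrete, the following is a minimal, hypothetical sketch of what such a prompt could look like; the variable names, episode format, and wording are illustrative assumptions rather than the prompts used in the talk. The idea is that a few worked intervene-observe-conclude episodes are followed by a new episode that a next-word-prediction model is asked to complete.

# Illustrative sketch (assumed format, not from the talk): build a few-shot
# prompt of causal-intervention episodes for a language model to complete.
demonstrations = [
    {"intervene": "A", "observe": "B increased, C unchanged", "conclusion": "A causes B"},
    {"intervene": "C", "observe": "nothing changed", "conclusion": "C has no downstream effect"},
]

def format_episode(episode):
    # Render one intervene-observe-conclude episode as plain text.
    return (
        f"Intervention: set {episode['intervene']} high.\n"
        f"Observation: {episode['observe']}.\n"
        f"Conclusion: {episode['conclusion']}."
    )

# Worked episodes, then a new episode left open for the model to finish.
prompt = "\n\n".join(format_episode(ep) for ep in demonstrations)
prompt += "\n\nIntervention: set D high.\nObservation: E increased, F unchanged.\nConclusion:"

print(prompt)  # this text would be passed to any next-word-prediction model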