Overview
Explore the fascinating phenomenon of in-context learning (ICL) in pretrained transformers through this lecture by Surya Ganguli of Stanford University. Delve into the fundamental question of whether ICL can solve tasks significantly different from those encountered during pretraining. Examine the performance of ICL on linear regression while varying the diversity of tasks in the pretraining dataset, and discover a task diversity threshold for the emergence of ICL. Learn how, below this threshold, transformers behave like Bayesian estimators with the pretraining task distribution as their prior, while beyond it they deviate from that Bayes-optimal estimator, align with ridge regression, and outperform the Bayesian estimator on unseen tasks. Understand the critical role of task diversity in enabling transformers to solve new tasks in-context, and gain insight into the interplay between task diversity, data scale, and model scale in the emergence of ICL capabilities.
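To make the two reference estimators in the lecture concrete, here is a minimal NumPy sketch (not code from the lecture; the dimensions, noise level, and pool size are illustrative assumptions). It compares ridge regression, the Bayes-optimal estimator under a Gaussian task prior, against the discrete Bayesian posterior-mean estimator whose prior is a finite pool of pretraining tasks:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_ctx, sigma = 8, 16, 0.25   # task dimension, context length, noise std
M = 64                          # pretraining task diversity (pool size)

# Finite pretraining task pool; test tasks are drawn from the Gaussian prior.
pool = rng.standard_normal((M, d))

def ridge(X, y, sigma):
    """Ridge regression: Bayes-optimal for w ~ N(0, I), noise variance sigma^2."""
    return np.linalg.solve(X.T @ X + sigma**2 * np.eye(X.shape[1]), X.T @ y)

def discrete_mmse(X, y, pool, sigma):
    """Bayes-optimal posterior mean when the prior is uniform over `pool`."""
    resid = y[None, :] - pool @ X.T                  # (M, n_ctx) residuals
    log_post = -0.5 * (resid**2).sum(axis=1) / sigma**2
    p = np.exp(log_post - log_post.max())            # stable softmax weights
    p /= p.sum()
    return p @ pool                                  # posterior-weighted task

# Evaluate both estimators on an unseen task drawn from the Gaussian prior.
w_true = rng.standard_normal(d)
X = rng.standard_normal((n_ctx, d))
y = X @ w_true + sigma * rng.standard_normal(n_ctx)

for name, w_hat in [("ridge", ridge(X, y, sigma)),
                    ("discrete MMSE", discrete_mmse(X, y, pool, sigma))]:
    print(f"{name:14s} error: {np.linalg.norm(w_hat - w_true):.3f}")
```

Sweeping the pool size M probes the effect the lecture describes: with few pretraining tasks the discrete estimator clings to the pool and generalizes poorly to unseen tasks, while for large M it approaches ridge regression.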
Syllabus
Pretraining Task Diversity and the Emergence of Non-Bayesian In-Context Learning for Regression
Taught by
Simons Institute