Overview
Explore the concept of in-context learning through a comprehensive lecture by Gregory Valiant from Stanford University. Delve into empirical efforts that illuminate fundamental aspects of this learning approach, which occurs at inference time without model parameter updates. Examine the efficiency of training Transformers and LSTMs to in-context learn basic function classes such as linear models, sparse linear models, and small decision trees. Discover methods for evaluating in-context learning algorithms and understand the qualitative differences between architectures in their ability to perform this type of learning. Investigate recent research findings on the connections between language modeling and learning, including whether good language models must possess in-context learning capabilities and whether large language models can perform regression. Consider the potential applications of these primitives in language-centric tasks. Based primarily on collaborative work with Shivam Garg, Dimitris Tsipras, and Percy Liang, this talk provides valuable insights into the evolving field of in-context learning and its implications for AI and machine learning.
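To make the evaluation setup concrete, here is a minimal sketch (not code from the talk) of how an in-context learning task on linear functions can be scored: a prompt is a sequence of (x, f(x)) pairs for a randomly drawn linear function, and a learner's prediction at a fresh query point is compared against the true value. The function names, dimensions, and the choice of least squares as the reference predictor are illustrative assumptions, following the spirit of the linear-model baseline in this line of work.

```python
import numpy as np

def least_squares_query_error(d=8, n_context=16, seed=0):
    """Score one in-context 'prompt' on a random linear task.

    A task is f(x) = w . x with w drawn at random (a hypothetical choice
    of task distribution). We fit least squares on the n_context in-context
    examples, then measure squared error at a fresh query point -- the same
    prompt-then-query evaluation a trained Transformer would be given.
    """
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(d)                 # hidden task weights
    X = rng.standard_normal((n_context, d))    # in-context inputs
    y = X @ w                                  # noiseless in-context labels
    # Reference predictor: ordinary least squares fit to the prompt.
    w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    x_query = rng.standard_normal(d)           # fresh query point
    return float((x_query @ w_hat - x_query @ w) ** 2)

err = least_squares_query_error()
```

With more in-context examples than dimensions and no label noise, least squares recovers the task exactly, so its query error is essentially zero; a trained model's in-context predictions can then be benchmarked against this reference across many random tasks.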
Syllabus
In-Context Learning: A Case Study of Simple Function Classes
Taught by
Simons Institute