Overview
Explore the statistical complexity of reinforcement learning in this 53-minute lecture by Sham Kakade of Harvard and Microsoft Research. Delve into the fundamental question of under what conditions generalization is possible and the curse of dimensionality can be avoided in reinforcement learning. Compare the well-understood theoretical foundations of supervised learning with the challenges that arise in reinforcement learning. Examine recent advances in characterizing when generalization is possible in both online and offline reinforcement learning settings. Learn about a newly introduced complexity measure, the Decision-Estimation Coefficient, and its significance for sample-efficient interactive learning. Cover topics such as linear methods, sufficient conditions, bilinear classes, and the intuition behind complexity measures in reinforcement learning.
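As a rough orientation for the "Complexity measure" portion of the syllabus: the Decision-Estimation Coefficient, introduced in the work of Foster, Kakade, Qian, and Rakhlin that this lecture draws on, takes approximately the following form. The notation below is an illustrative sketch, and the lecture's precise definition may differ in details.

$$\mathrm{dec}_\gamma(\mathcal{M}, \bar{M}) \;=\; \inf_{p \in \Delta(\Pi)} \ \sup_{M \in \mathcal{M}} \ \mathbb{E}_{\pi \sim p}\Big[ f^{M}(\pi_M) - f^{M}(\pi) \;-\; \gamma \, D_{\mathrm{H}}^{2}\big(M(\pi), \bar{M}(\pi)\big) \Big]$$

Here $\mathcal{M}$ is the class of candidate models, $\bar{M}$ is a reference model, $\Pi$ is the decision space, $f^{M}(\pi)$ is the expected reward of decision $\pi$ under model $M$, $\pi_M$ is the optimal decision for $M$, $D_{\mathrm{H}}^{2}$ is the squared Hellinger distance, and $\gamma > 0$ trades off regret against information gained; informally, the coefficient stays small when exploration and estimation can be balanced, which is what makes sample-efficient interactive learning possible.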
Syllabus
Introduction
Overview
Supervised Learning
RL
Basic Results
Reinforcement Learning Problems
Two Extremes
Talk Outline
Example
Linear Methods
Sufficient Conditions
Bilinear Classes
Intuition
Complexity measure
Good for
Summary
Discussion
Taught by
Simons Institute