Overview
Explore the statistical limits of offline reinforcement learning with function approximation in this 55-minute lecture by Sham Kakade of the University of Washington and Microsoft Research. Delve into key concepts including realizability, sequential decision making, coverage, and policy evaluation. Examine upper and lower bounds, practical considerations, and experimental results. Gain insight into the mathematics of decision making and the interplay between models and features in reinforcement learning.
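For orientation, here is a minimal, self-contained sketch of the kind of estimator such lectures analyze: least-squares policy evaluation (LSTD) from an offline dataset with a linear feature mapping. The toy MDP, the random feature map `phi`, and all variable names below are illustrative assumptions, not the lecture's construction; the minimum eigenvalue of the empirical feature covariance stands in for one common notion of coverage.

```python
# Illustrative sketch: offline policy evaluation with linear function
# approximation (LSTD-style). The MDP and features are random toys;
# in the realizable setting, phi would be chosen so Q^pi is linear in phi.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, d, gamma = 20, 4, 6, 0.9

# Feature map phi(s, a) in R^d (random here, purely for illustration).
phi = rng.normal(size=(n_states, n_actions, d))

# Toy MDP: random transition kernel P[s, a] over s', and rewards R[s, a].
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.uniform(size=(n_states, n_actions))

# Fixed deterministic target policy pi to evaluate.
pi = rng.integers(n_actions, size=n_states)

# Offline dataset of (s, a, r, s') tuples from a uniform behavior policy.
n = 5000
s = rng.integers(n_states, size=n)
a = rng.integers(n_actions, size=n)
r = R[s, a]
s_next = np.array([rng.choice(n_states, p=P[si, ai]) for si, ai in zip(s, a)])

X = phi[s, a]                         # features of visited (s, a) pairs
X_next = phi[s_next, pi[s_next]]      # next-state features under pi

# Coverage diagnostic: lambda_min of the empirical feature covariance.
Sigma_hat = X.T @ X / n
print("coverage lambda_min(Sigma_hat):", np.linalg.eigvalsh(Sigma_hat).min())

# LSTD: solve (Sigma_hat - gamma * A_hat) w = b_hat, the empirical fixed
# point of the projected Bellman equation for pi.
A_hat = X.T @ X_next / n
b_hat = X.T @ r / n
w = np.linalg.solve(Sigma_hat - gamma * A_hat, b_hat)

# Estimated Q^pi(s, a) = phi(s, a) . w for any query pair.
print("Q_hat(0, 0) =", phi[0, 0] @ w)
```

The lower and upper bounds discussed in the lecture concern when estimators of roughly this form can or cannot succeed: how much data is needed, and whether assumptions like realizability and coverage alone suffice.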
Syllabus
Intro
What Is Offline Reinforcement Learning?
Intuition
Realizability
Sequential Decision Making
Standard Approach
Coverage
Limits
Policy Evaluation
Setting
Feature Mapping
Upper Limits
Lower Limits
Observations
Upper Bounds
Inequality
Simulation
Summary
Sufficient Conditions
Possible Results
Intuition and Construction
Practical Considerations
Follow Up
Experiments
Other Experiments
Model vs Feature
Taught by
Simons Institute