
What Are the Statistical Limits of Offline Reinforcement Learning With Function Approximation?

Simons Institute via YouTube

Overview

Explore the statistical boundaries of offline reinforcement learning with function approximation in this 55-minute lecture by Sham Kakade of the University of Washington and Microsoft Research. Delve into key concepts including realizability, sequential decision making, coverage conditions, and policy evaluation. Examine upper and lower bounds, practical considerations, and experimental results. Gain insights into the mathematics of online decision making and the interplay between models and features in reinforcement learning.

Syllabus

Intro
What is offline reinforcement learning
Intuition
Realizability
Sequential Decision Making
Standard Approach
Coverage
Limits
Policy Evaluation
Setting
Feature Mapping
Upper Limits
Lower Limits
Observations
Upper Bounds
Inequality
Simulation
Summary
Sufficient Conditions
Possible Results
Intuition and Construction
Practical Considerations
Follow Up
Experiments
Other Experiments
Model vs Feature

Taught by

Simons Institute

