Class Central Classrooms (beta)
YouTube videos curated by Class Central.
Classroom Contents
On the Statistical Complexity of Reinforcement Learning
- 1 Intro
- 2 Tabular Markov decision process
- 3 Prior efforts: algorithms and sample complexity results
- 4 Minimax optimal sample complexity of tabular MDP
- 5 Adding some structure: state feature map
- 6 Representing value function using linear combination of features
- 7 Rethinking Bellman equation
- 8 Reducing Bellman equation using features
- 9 Sample complexity of RL with features
- 10 Off-Policy Policy Evaluation (OPE)
- 11 OPE with function approximation
- 12 Equivalence to plug-in estimation
- 13 Minimax-optimal batch policy evaluation
- 14 Lower Bound Analysis
- 15 Episodic Reinforcement Learning
- 16 Feature space embedding of transition kernel
- 17 Regret Analysis
- 18 Exploration with Value-Targeted Regression (VTR)
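The reduction named in chapters 5–8 and 16 above can be illustrated numerically: if the transition kernel factors through a state feature map (a low-rank assumption, P = ΦM), then the value function is a linear combination of features and the S-dimensional Bellman evaluation equation collapses to a d-dimensional one. The sketch below is illustrative only; the dimensions, variable names, and the random construction of Φ and M are assumptions, not taken from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)
S, d, gamma = 50, 5, 0.9  # number of states, feature dimension, discount

# State feature map Phi (S x d) and feature-space embedding of the
# transition kernel M (d x S), built so that P = Phi @ M is stochastic:
# rows of Phi and rows of M are probability distributions, so each row
# of their product sums to 1.
Phi = rng.random((S, d))
Phi /= Phi.sum(axis=1, keepdims=True)
M = rng.random((d, S))
M /= M.sum(axis=1, keepdims=True)
P = Phi @ M                                  # S x S transition matrix

theta = rng.random(d)
r = Phi @ theta                              # reward realizable in feature space

# Tabular Bellman evaluation: solve the S-dimensional system
#   V = r + gamma * P V   =>   (I_S - gamma P) V = r
V_tab = np.linalg.solve(np.eye(S) - gamma * P, r)

# Reduced equation: with V = Phi w, the same fixed point satisfies the
# d-dimensional system  w = theta + gamma * (M Phi) w
w = np.linalg.solve(np.eye(d) - gamma * (M @ Phi), theta)
V_feat = Phi @ w

# Both routes give the same value function, but the reduced solve
# works in dimension d instead of S.
assert np.allclose(V_tab, V_feat)
```

The point of the reduction is that the sample and computational cost of policy evaluation then scales with the feature dimension d rather than the number of states S.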