Reinforcement Learning in Feature Space: Complexity and Regret

Classroom Contents
1. Intro
2. Markov decision process
3. What does a sample mean?
4. Complexity and Regret for Tabular MDP
5. Rethinking Bellman equation
6. State Feature Map
7. Representing value function using linear combination of features
8. Reducing Bellman equation using features (sketched below)
9. Sample complexity of RL with features
10. Learning to Control On-The-Fly
11. Episodic Reinforcement Learning
12. Hilbert space embedding of transition kernel
13. The MatrixRL Algorithm
14. Regret Analysis
15. From feature to kernel
16. MatrixRL has an equivalent kernelization
17. Pros and cons of using features for RL
18. What could be good state features?
19. Finding Metastable State Clusters
20. Example: stochastic diffusion process
21. Unsupervised state aggregation learning
22. Soft state aggregation for NYC taxi data
23. Example: State Trajectories of Demon Attack
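As a rough illustration of chapters 7 and 8 above, here is a minimal sketch of value iteration with a linear feature representation: the value function is modeled as V(s) ≈ φ(s)ᵀw, and each Bellman backup is reduced to a least-squares fit in feature space. The toy MDP, the random feature map, the discount factor, and all dimensions are illustrative assumptions, not material from the lecture.

```python
# A minimal sketch (illustrative assumptions throughout): value iteration
# where V(s) ~ phi(s)^T w and each Bellman backup is projected onto the
# span of the features via least squares.
import numpy as np

rng = np.random.default_rng(0)

S, A, d = 20, 3, 4        # number of states, actions, feature dimension (assumed)
gamma = 0.9               # discount factor (assumed)

# Random toy MDP: P[a, s, s'] = transition probability, r[s, a] = reward.
P = rng.random((A, S, S))
P /= P.sum(axis=2, keepdims=True)
r = rng.random((S, A))

Phi = rng.random((S, d))  # state feature map: row s holds phi(s) (assumed random)

w = np.zeros(d)           # weights of the linear value function V = Phi @ w
for _ in range(200):
    V = Phi @ w
    # Bellman backup: (TV)(s) = max_a [ r(s, a) + gamma * E[V(s') | s, a] ]
    TV = (r + gamma * np.einsum("ast,t->sa", P, V)).max(axis=1)
    # Feature-space reduction: fit w so that Phi @ w approximates TV
    w, *_ = np.linalg.lstsq(Phi, TV, rcond=None)

print("Approximate value of state 0:", (Phi @ w)[0])
```

This projected iteration need not converge for arbitrary features; the sketch is only meant to make the feature-space reduction of the Bellman equation concrete.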