Overview
Syllabus
Intro
Markov decision process
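A quick reminder of the objects this part of the talk builds on, in generic notation (a sketch, not necessarily the lecture's exact conventions): a discounted MDP and the value of a policy π.

```latex
\mathcal{M} = (\mathcal{S}, \mathcal{A}, P, r, \gamma), \qquad
P(s' \mid s, a) = \Pr\{s_{t+1} = s' \mid s_t = s,\ a_t = a\}, \qquad
V^{\pi}(s) = \mathbb{E}\Big[\textstyle\sum_{t \ge 0} \gamma^{t}\, r(s_t, a_t) \,\Big|\, s_0 = s,\ \pi\Big].
```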
What does a sample mean?
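In this literature a "sample" typically means one observed transition from a generative model: query a state-action pair, receive a next state and reward. A minimal sketch (array shapes and names are my own, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def generative_model_sample(P, R, s, a):
    """One 'sample': query a state-action pair (s, a) and observe a next
    state s' ~ P(. | s, a) together with the reward r(s, a).
    P: (S, A, S) array of transition probabilities; R: (S, A) rewards.
    Sample complexity counts how many such queries are needed."""
    s_next = rng.choice(P.shape[2], p=P[s, a])
    return s_next, R[s, a]
```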
Complexity and Regret for Tabular MDP
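For context, the tabular benchmark the rest of the talk is measured against: with a generative model, learning an ε-optimal policy in a discounted tabular MDP is known to require, up to logarithmic factors,

```latex
\tilde{\Theta}\!\left( \frac{|\mathcal{S}|\,|\mathcal{A}|}{(1-\gamma)^{3}\,\epsilon^{2}} \right) \ \text{samples},
```

the lower bound due to Azar et al. (2013), with matching upper bounds in later work.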
Rethinking the Bellman equation
State Feature Map
Representing the value function as a linear combination of features
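The representation written out: with a feature map φ : S → R^d, the value function is approximated in the span of d basis functions, so only d coefficients need to be learned.

```latex
V_{\theta}(s) \;=\; \phi(s)^{\top}\theta \;=\; \sum_{k=1}^{d} \theta_k\, \phi_k(s), \qquad \phi : \mathcal{S} \to \mathbb{R}^{d},\ \theta \in \mathbb{R}^{d}.
```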
Reducing the Bellman equation using features
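One common way the reduction is written (a generic form; the lecture may use a variant): substituting the linear representation into the Bellman optimality equation turns a fixed-point problem over |S| values into one over d coefficients.

```latex
\phi(s)^{\top}\theta \;=\; \max_{a \in \mathcal{A}} \Big[ r(s,a) + \gamma \sum_{s'} P(s' \mid s, a)\, \phi(s')^{\top}\theta \Big] \quad \text{for all } s \in \mathcal{S}.
```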
Sample complexity of RL with features
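The hoped-for payoff, stated qualitatively: under suitable feature and anchor-state conditions, the |S||A| dependence of the tabular bound is replaced by the feature dimension d (the exact exponents depend on the assumptions).

```latex
\tilde{O}\!\left(\frac{|\mathcal{S}|\,|\mathcal{A}|}{(1-\gamma)^{3}\epsilon^{2}}\right) \;\longrightarrow\; \tilde{O}\!\left(\frac{d}{(1-\gamma)^{3}\epsilon^{2}}\right).
```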
Learning to Control On-The-Fly
Episodic Reinforcement Learning
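The standard episodic protocol and its performance measure: the learner plays K episodes of horizon H (T = KH steps in total) and is judged by cumulative regret against the optimal policy.

```latex
\mathrm{Regret}(T) \;=\; \sum_{k=1}^{K} \Big( V^{\ast}\big(s_1^{(k)}\big) - V^{\pi_k}\big(s_1^{(k)}\big) \Big), \qquad T = KH.
```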
Hilbert space embedding of transition kernel
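The embedding assumption behind MatrixRL, as stated in Yang and Wang's paper on feature-space RL: the transition kernel factors bilinearly through two feature maps, with an unknown core matrix to be learned.

```latex
P(s' \mid s, a) \;=\; \phi(s,a)^{\top} M^{\ast}\, \psi(s'), \qquad \phi(s,a) \in \mathbb{R}^{d_1},\ \psi(s') \in \mathbb{R}^{d_2},\ M^{\ast} \in \mathbb{R}^{d_1 \times d_2}.
```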
The MatrixRL Algorithm
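A sketch of the model-estimation step, with the optimistic-exploration machinery omitted (schematic, not the paper's full pseudocode): the core matrix is fit by regularized least squares from observed transitions, and the resulting covariance matrix drives confidence-bound bonuses.

```python
import numpy as np

def estimate_core_matrix(Phi, Psi_next, lam=1.0):
    """Ridge-regression sketch of MatrixRL's model-estimation step.
    Phi:      (n, d1) matrix whose rows are phi(s_t, a_t)
    Psi_next: (n, d2) matrix whose rows are psi(s_{t+1})
    Returns M_hat, an estimate of the transition core matrix, and the
    regularized covariance A used to construct optimism bonuses."""
    d1 = Phi.shape[1]
    A = Phi.T @ Phi + lam * np.eye(d1)
    M_hat = np.linalg.solve(A, Phi.T @ Psi_next)
    return M_hat, A
```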
Regret Analysis
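The qualitative shape of the guarantee: regret polynomial in the horizon H and the feature dimension d, growing as √T in time, with no dependence on |S| or |A| (the precise exponents depend on the feature assumptions).

```latex
\mathrm{Regret}(T) \;\le\; \tilde{O}\big( \mathrm{poly}(H, d)\, \sqrt{T} \big).
```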
From feature to kernel
MatrixRL has an equivalent kernelization
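The kernel view follows because the algorithm touches the features only through inner products; replacing those with kernel evaluations gives the equivalent kernelized method. In symbols (the standard kernel-trick identities):

```latex
k_{\phi}\big((s,a), (\tilde{s},\tilde{a})\big) = \phi(s,a)^{\top}\phi(\tilde{s},\tilde{a}), \qquad k_{\psi}(s', \tilde{s}') = \psi(s')^{\top}\psi(\tilde{s}').
```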
Pros and cons of using features for RL
What could be good state features?
Finding Metastable State Clusters
Example: stochastic diffusion process
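To make the metastability idea concrete, here is a small spectral sketch (a PCCA-style heuristic, not necessarily the method in the talk): in a Markov chain with metastable clusters, the right eigenvectors of the transition matrix whose eigenvalues are near 1 are nearly constant on each cluster, so clustering states by those coordinates recovers the partition. The toy chain below stands in for a discretized stochastic diffusion with a bottleneck.

```python
import numpy as np
from sklearn.cluster import KMeans

def metastable_clusters(P, n_clusters, seed=0):
    """Cluster states by the slow right eigenvectors of the row-stochastic
    transition matrix P; eigenvalues near 1 indicate metastable sets."""
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)                 # slowest modes first
    coords = vecs[:, order[:n_clusters]].real      # eigenfunction coordinates
    return KMeans(n_clusters, n_init=10, random_state=seed).fit_predict(coords)

# Toy diffusion: random walk on a chain with a weak link in the middle,
# producing two metastable halves.
S = 20
P = np.zeros((S, S))
for s in range(S):
    P[s, max(s - 1, 0)] += 0.5
    P[s, min(s + 1, S - 1)] += 0.5
i, j, eps = S // 2 - 1, S // 2, 0.01
P[i, j], P[i, i - 1] = eps, 1.0 - eps              # weaken the crossing
P[j, i], P[j, j + 1] = eps, 1.0 - eps
print(metastable_clusters(P, 2))                   # two blocks of 10 states
```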
Unsupervised state aggregation learning
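A sketch of the factorization behind soft state aggregation (generic NMF as a simple stand-in for the estimator in the talk; names are illustrative): the transition matrix is factored as P ≈ UV, where row s of U acts as an aggregation distribution Pr(meta-state | s) and row k of V as a disaggregation distribution Pr(next state | meta-state k). Applied to city-scale transition data such as the taxi example in the next item, the rows of V can be read as latent "regions".

```python
import numpy as np
from sklearn.decomposition import NMF

def soft_state_aggregation(P, n_meta):
    """Factor a row-stochastic transition matrix P (shape S x S) as
    P ~ U @ V with nonnegative factors: U[s, k] ~ Pr(meta-state k | s),
    V[k, s'] ~ Pr(s' | meta-state k). Plain NMF is used here in place of
    a dedicated soft-aggregation estimator."""
    model = NMF(n_components=n_meta, init="nndsvda", max_iter=500)
    U = model.fit_transform(P)
    V = model.components_
    scale = V.sum(axis=1, keepdims=True)   # renormalize rows of V to sum to 1
    return U * scale.T, V / scale          # product U @ V is unchanged
```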
Soft state aggregation for NYC taxi data
Example: State Trajectories of Demon Attack
Taught by
Simons Institute