

Reinforcement Learning via an Optimization Lens

Simons Institute via YouTube

Overview

Explore reinforcement learning through an optimization lens in this 47-minute lecture by Lihong Li of Google Brain. Delve into the fundamentals of reinforcement learning, including Markov decision processes, Bellman equations, and the differences between online and offline (batch) learning. Examine what happens when the Bellman operator is combined with least-squares function approximation ("when Bellman meets Gauss") in approximate dynamic programming, including the classic divergence example of Tsitsiklis and Van Roy, and investigate a long-standing open problem in the field. Discover how a linear programming reformulation and the Legendre-Fenchel transformation address the difficulties of solving fixed-point problems directly. Learn about a new loss function for solving Bellman equations and its eigenfunction interpretation. Conclude with practical experiments using neural networks on a Puddle World task.
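
For orientation, here is a standard textbook statement of the two objects the overview references; the notation is a common convention and may differ from the slides. For an MDP with states $s$, actions $a$, transition kernel $P$, reward $r$, and discount factor $\gamma \in (0,1)$, the optimal value function $V^*$ is the fixed point of the Bellman optimality equation:

$$ V^*(s) \;=\; \max_{a}\Big[\, r(s,a) + \gamma \sum_{s'} P(s' \mid s,a)\, V^*(s') \,\Big]. $$

The linear programming reformulation replaces this fixed-point condition with a constrained minimization over value functions,

$$ \min_{V}\; \sum_{s} \mu(s)\, V(s) \quad \text{s.t.} \quad V(s) \;\ge\; r(s,a) + \gamma \sum_{s'} P(s' \mid s,a)\, V(s') \quad \forall\, s,a, $$

for any positive state weighting $\mu$; applying the Legendre-Fenchel transformation to formulations like this leads to the primal-dual (saddle-point) problems the syllabus refers to.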

Syllabus

Intro
Reinforcement learning: Learning to make decisions
Online vs. Offline (Batch) RL: A Basic View
Outline
Markov Decision Process (MDP)
MDP Example: Deterministic Shortest Path
More General Case: Bellman Equation
Bellman Operator
When Bellman Meets Gauss: Approximate DP
Divergence Example of Tsitsiklis & Van Roy (96)
Does It Matter in Practice?
A Long-standing Open Problem
Linear Programming Reformulation
Why Solving for Fixed Point Directly is Hard?
Addressing Difficulty #2: Legendre-Fenchel Transformation
Reformulation of Bellman Equation
Primal-dual Problems are Hard to Solve
A New Loss for Solving Bellman Equation
Eigenfunction Interpretation
Puddle World with Neural Networks
Conclusions
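
To make the early syllabus items concrete, here is a minimal runnable sketch of the Bellman operator and value iteration on a toy deterministic shortest-path MDP. The graph, the unit move cost, and the discount factor are illustrative assumptions, not details taken from the lecture.

import numpy as np

# Hypothetical 4-state shortest-path MDP: unit cost per move, state 3 is
# the absorbing goal. edges[s] lists the states reachable from s.
edges = {0: [1, 2], 1: [3], 2: [1, 3], 3: [3]}
GOAL = 3
gamma = 0.95

def bellman_backup(V):
    # Bellman optimality operator for this MDP:
    #   (T V)(s) = max over successors s' of [-1 + gamma * V(s')],
    # with the absorbing goal pinned at value 0.
    V_new = np.zeros_like(V)
    for s, successors in edges.items():
        if s != GOAL:
            V_new[s] = max(-1.0 + gamma * V[s2] for s2 in successors)
    return V_new

# Value iteration: repeatedly apply T until (near) fixed point V* = T V*.
V = np.zeros(len(edges))
for _ in range(1000):
    V_next = bellman_backup(V)
    converged = np.max(np.abs(V_next - V)) < 1e-8
    V = V_next
    if converged:
        break

print(V)  # approximately [-1.95, -1.0, -1.0, 0.0]

One call to bellman_backup is a single application of the Bellman operator; iterating it to convergence is exactly the fixed-point computation whose difficulties in the approximate, function-approximation setting motivate the optimization-based reformulations discussed in the lecture.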

Taught by

Simons Institute
