Overview
Explore distributional reinforcement learning in this 23-minute video lecture from Pascal Poupart's CS885 course at the University of Waterloo. Delve into key concepts including return distribution, policy evaluation, convergence, and the Bellman equation. Examine the C51 (Categorical DQN) algorithm, its advantages, and its performance on Atari games. Gain insights into various distributional representations and their applications in reinforcement learning. Access accompanying slides from the course website to enhance your understanding of this advanced topic in machine learning and artificial intelligence.
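To make the C51 idea concrete, below is a minimal sketch of its core step: representing the return distribution as probabilities over a fixed grid of atoms and projecting the shifted distribution `r + γz` back onto that grid after a Bellman update. The constants (`V_MIN`, `V_MAX`, 51 atoms) follow the usual C51 setup, but all names here are illustrative, not taken from the lecture.

```python
import numpy as np

# Fixed support of 51 atoms on [V_MIN, V_MAX] (illustrative values).
V_MIN, V_MAX, N_ATOMS = -10.0, 10.0, 51
DELTA_Z = (V_MAX - V_MIN) / (N_ATOMS - 1)
support = np.linspace(V_MIN, V_MAX, N_ATOMS)

def project_distribution(probs, reward, gamma):
    """Project the shifted/scaled distribution r + gamma*z onto the fixed support."""
    # Apply the Bellman backup to each atom, clipping to the support's range.
    tz = np.clip(reward + gamma * support, V_MIN, V_MAX)
    # Fractional position of each backed-up atom on the support grid.
    b = (tz - V_MIN) / DELTA_Z
    lower, upper = np.floor(b).astype(int), np.ceil(b).astype(int)
    projected = np.zeros(N_ATOMS)
    # Split each atom's probability mass between its two neighboring atoms.
    np.add.at(projected, lower, probs * (upper - b))
    np.add.at(projected, upper, probs * (b - lower))
    # If an atom lands exactly on the grid (lower == upper), both weights
    # above are zero, so assign its full mass directly.
    np.add.at(projected, lower, probs * (lower == upper))
    return projected

# Example: back up a uniform return distribution with r = 1, gamma = 0.99.
probs = np.full(N_ATOMS, 1.0 / N_ATOMS)
new_probs = project_distribution(probs, reward=1.0, gamma=0.99)
print(round(new_probs.sum(), 6))  # probability mass is preserved → 1.0
```

In the full algorithm this projected distribution serves as the target, and the network's predicted distribution is trained toward it with a cross-entropy loss rather than the squared TD error used by standard DQN.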
Syllabus
Outline
Objective
Distributional RL
Return Distribution
Policy Evaluation
Convergence
Bellman Equation
C51 (Categorical DQN)
Advantage
Atari Results
Distributional Representations
Taught by
Pascal Poupart