Overview
Explore a neurally plausible model that learns successor representations in partially observable environments through this in-depth video analysis. Delve into the intersection of model-based and model-free reinforcement learning, focusing on how animals learn to maximize returns in noisy, partially observed settings. Examine the concept of distributional successor features and their role in efficient value function computation. Discover how this model supports reinforcement learning in challenging environments where direct policy learning is impractical. Investigate the neural response features consistent with the successor representation framework and their implications for understanding animal behavior and decision-making.
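To make the "efficient value function computation" point concrete, here is a minimal sketch of a tabular successor representation (SR) learned by temporal-difference updates. The environment (a 5-state chain under a random-walk policy), the learning rate, and the discount factor are illustrative assumptions, not details taken from the video; the video's model additionally handles partial observability and distributional codes, which this sketch omits.

```python
import numpy as np

n_states = 5
gamma = 0.9   # discount factor (assumed for illustration)
alpha = 0.1   # TD learning rate (assumed for illustration)

# Successor matrix M[s, s'] ~ expected discounted future occupancy of s' from s
M = np.eye(n_states)

rng = np.random.default_rng(0)
s = 0
for _ in range(20000):
    # Random-walk policy on a chain, reflecting at the boundaries
    s_next = min(max(s + rng.choice([-1, 1]), 0), n_states - 1)
    # TD update of the SR row for the current state
    onehot = np.eye(n_states)[s]
    M[s] += alpha * (onehot + gamma * M[s_next] - M[s])
    s = s_next

# Once the SR is learned, the value function for ANY reward vector is a
# single matrix-vector product: V = M @ R. Changing the reward does not
# require relearning the environment's dynamics.
R = np.zeros(n_states)
R[-1] = 1.0  # reward at the final state
V = M @ R
print(np.round(V, 2))
```

This separation of learned dynamics (M) from reward (R) is what lets SR-based agents revalue states quickly when rewards change, one of the behavioral signatures discussed in the video.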
Syllabus
Introduction
Reinforcement learning
Successor representations
Value functions
Continuous space
Distributional coding
Wake and sleep
mu
Taught by
Yannic Kilcher