
A Neurally Plausible Model Learns Successor Representations in Partially Observable Environments

Yannic Kilcher via YouTube

Overview

Explore a neurally plausible model that learns successor representations in partially observable environments through this in-depth video analysis. Delve into the intersection of model-based and model-free reinforcement learning, focusing on how animals devise strategies to maximize returns when observations are noisy and incomplete. Examine the concept of distributional successor features and their role in efficient value function computation. Discover how this model supports reinforcement learning in challenging environments where learning a policy directly is impractical. Investigate the neural response features consistent with the successor representation framework and their implications for understanding animal behavior and decision-making.
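
For context, the successor representation idea the video builds on can be sketched in a few lines: a matrix of expected discounted future state occupancies is learned with temporal-difference updates, and the value function for any reward vector then follows from a single matrix product. The tabular example below is an illustrative sketch of that classic idea only, not the paper's distributional, partially observable model; the ring environment, random-walk policy, and learning rate are assumptions chosen for the demo.

```python
# Minimal sketch of the classic tabular successor representation (SR).
# Assumptions: 5-state ring environment, uniform random-walk policy,
# learning rate 0.1 -- all illustrative, not from the paper or video.
import numpy as np

n_states, gamma, alpha = 5, 0.95, 0.1
M = np.eye(n_states)          # M[s, s'] ~ expected discounted future visits to s' from s
rng = np.random.default_rng(0)

def step(s):
    """Random-walk policy on a 5-state ring (stand-in environment)."""
    return (s + rng.choice([-1, 1])) % n_states

s = 0
for _ in range(10_000):
    s_next = step(s)
    onehot = np.eye(n_states)[s]
    # TD update for the SR: target is one-hot(s) + gamma * M[s_next]
    M[s] += alpha * (onehot + gamma * M[s_next] - M[s])
    s = s_next

# With M learned, values for any reward function are a single matrix product.
r = np.array([0.0, 0.0, 0.0, 0.0, 1.0])   # reward only in the last state
V = M @ r
print(np.round(V, 2))
```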

Syllabus

Introduction
Reinforcement learning
Successor representations
Value functions
Continuous space
Distributional coding
Wake and sleep
Mu

Taught by

Yannic Kilcher
