The Sensory Neuron as a Transformer: Permutation-Invariant Neural Networks for Reinforcement Learning - Paper Explained
Aleksa Gordić - The AI Epiphany via YouTube
Overview
Explore a comprehensive video analysis of Google Brain's paper "The Sensory Neuron as a Transformer: Permutation-Invariant Neural Networks for Reinforcement Learning." Delve into the "Attention Neuron" model, designed to handle arbitrary permutations of its input observations in reinforcement learning. Learn the difference between permutation invariance and permutation equivariance, see how permutation invariance is implemented using the Set Transformer idea, and examine the interactive demos from the project blog. Review the results, including a Pong occlusion experiment and t-SNE visualizations of the learned representations. Discover the robustness of the Attention Neuron and observe the visualized attention patterns. Conclude with a recap of this approach to building permutation-invariant neural networks for reinforcement learning.
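To make the Set Transformer idea mentioned above concrete, below is a minimal sketch (my own illustration, not the authors' code) of permutation-invariant cross-attention: a fixed set of learned queries attends over per-observation keys and values, so the output has a fixed size and is unchanged when the observations are shuffled. The class name, dimensions, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn


class PermutationInvariantAttention(nn.Module):
    """Cross-attention with a fixed query bank, in the spirit of the Set Transformer."""

    def __init__(self, obs_dim: int, d_model: int = 32, n_queries: int = 16):
        super().__init__()
        # Learned, fixed queries: the output size depends on n_queries,
        # not on how many observations arrive or in which order they arrive.
        self.queries = nn.Parameter(torch.randn(n_queries, d_model))
        self.key = nn.Linear(obs_dim, d_model)    # key computed per observation
        self.value = nn.Linear(obs_dim, d_model)  # value computed per observation

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # obs: (n_obs, obs_dim), treated as an unordered set of sensory inputs.
        k = self.key(obs)                                    # (n_obs, d_model)
        v = self.value(obs)                                  # (n_obs, d_model)
        scores = self.queries @ k.t() / k.shape[-1] ** 0.5   # (n_queries, n_obs)
        attn = torch.softmax(scores, dim=-1)
        # Permuting the observations permutes the columns of `attn` and the rows
        # of `v` identically, so the product below stays the same.
        return attn @ v                                      # (n_queries, d_model)


if __name__ == "__main__":
    torch.manual_seed(0)
    layer = PermutationInvariantAttention(obs_dim=8)
    obs = torch.randn(20, 8)                    # 20 sensory inputs
    shuffled = obs[torch.randperm(20)]          # same inputs, random order
    print(torch.allclose(layer(obs), layer(shuffled), atol=1e-6))  # True
```

In the paper's AttentionNeuron layer the keys also incorporate the previous action and the attention output feeds a downstream policy network; the sketch above shows only the core invariance mechanism.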
Syllabus
A high-level overview, main ideas
Permutation invariance vs equivariance (see the short example after this syllabus)
Permutation invariance implemented via the Set Transformer idea
Interactive demos from the project blog
Results
Pong occlusion experiment
Representations visualized via t-SNE
Attention Neuron is robust, attention visualized
Outro, recap
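For the invariance-vs-equivariance chapter above, here is a tiny illustrative example (not from the video): a permutation-invariant function returns the same output no matter how its set of inputs is ordered, while a permutation-equivariant function reorders its output along with its input.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))    # a set of 5 elements with 3 features each
perm = rng.permutation(5)      # a random reordering of the elements


def relu(a):
    return np.maximum(a, 0.0)


# Permutation invariance: mean pooling ignores ordering, f(Px) == f(x).
print(np.allclose(x[perm].mean(axis=0), x.mean(axis=0)))  # True

# Permutation equivariance: an element-wise map commutes with the reordering, f(Px) == P f(x).
print(np.allclose(relu(x[perm]), relu(x)[perm]))          # True
```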
Taught by
Aleksa Gordić - The AI Epiphany