Overview
Explore the concept of Neural Interpreters in this comprehensive video featuring an interview with the paper's authors. Dive into a novel approach that treats deep networks as modular programs, combining recurrent elements, weight sharing, attention, and more to tackle abstract reasoning and computer vision tasks. Learn about the model's architecture, including interpreter weights, function codes, and neural type inference for data routing. Understand ModLin layers, the experimental results, and the potential for systematic generalization. Gain insights from the authors on the general model structure, function codes and signatures, modulated layers, and weight sharing. Discover how Neural Interpreters perform on image classification and visual abstract reasoning tasks, demonstrating their versatility and the potential to extend model capacity after training.
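To make the two mechanisms named above more concrete, here is a minimal sketch (not the authors' reference implementation) of a modulated linear ("ModLin") layer conditioned on a learned function code, and of attention-style routing of tokens to functions via learned signature vectors. The exact conditioning scheme, the cosine-similarity routing rule, and all names and shapes are illustrative assumptions, not the paper's precise formulation.

```python
# Illustrative sketch only: shared weights modulated by per-function codes,
# plus soft routing of tokens to functions via signature similarity.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModLin(nn.Module):
    """Linear layer modulated by a per-function code vector (assumed form)."""

    def __init__(self, dim: int, code_dim: int):
        super().__init__()
        self.linear = nn.Linear(dim, dim)     # weights shared across functions
        self.cond = nn.Linear(code_dim, dim)  # maps a function code to a modulation vector

    def forward(self, x: torch.Tensor, code: torch.Tensor) -> torch.Tensor:
        # Element-wise modulation of the input by the function code, followed by a
        # shared linear map: one simple way shared weights can behave differently
        # depending on which function is being applied.
        return self.linear(x * torch.sigmoid(self.cond(code)))


def route_by_type(tokens: torch.Tensor, signatures: torch.Tensor,
                  threshold: float = 0.5) -> torch.Tensor:
    """Soft routing weights of tokens to functions via signature similarity."""
    # tokens: (num_tokens, dim); signatures: (num_functions, dim)
    sim = F.cosine_similarity(tokens.unsqueeze(1), signatures.unsqueeze(0), dim=-1)
    # A token is (softly) visible to a function when similarity clears the threshold.
    return torch.relu(sim - threshold)        # (num_tokens, num_functions)


if __name__ == "__main__":
    dim, code_dim, n_tokens, n_funcs = 16, 8, 5, 3
    tokens = torch.randn(n_tokens, dim)
    codes = torch.randn(n_funcs, code_dim)    # one learned code per function
    sigs = torch.randn(n_funcs, dim)          # one learned signature per function

    weights = route_by_type(tokens, sigs)     # which tokens each function attends to
    layer = ModLin(dim, code_dim)
    out = layer(tokens, codes[0])             # apply "function 0" to all tokens
    print(weights.shape, out.shape)           # (5, 3) and (5, 16)
```

Because the ModLin weights are shared and only the small code vectors differ per function, adding a new code after training is one way such a model could, in principle, extend its capacity without retraining the shared parameters.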
Syllabus
- Intro & Overview
- Model Overview
- Interpreter weights and function code
- Routing data to functions via neural type inference
- ModLin layers
- Experiments
- Interview Start
- General Model Structure
- Function code and signature
- Explaining Modulated Layers
- A closer look at weight sharing
- Experimental Results
Taught by
Yannic Kilcher