Overview
Explore how language abstractions can improve intrinsic exploration in reinforcement learning through this in-depth video explanation. Dive into the challenges of sparse-reward environments and see how language descriptions of encountered states can be used to assess novelty. Learn about the MiniGrid and MiniHack environments and how their states are annotated with language. Examine the baseline algorithms AMIGo and NovelD, and discover how language is integrated into each of them. Analyze the experimental results and consider the implications of using language-based variants for intrinsic exploration in challenging tasks. Gain insight into the potential of natural language as a medium for highlighting relevant abstractions in reinforcement learning environments.
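The core idea above, that states mapping to the same language description can share a novelty estimate, can be sketched as a simple count-based intrinsic bonus. This is a hypothetical illustration, not the paper's actual method: the class name, the inverse-square-root schedule, and the example descriptions are all assumptions made for clarity.

```python
from collections import Counter


class LanguageNoveltyBonus:
    """Count-based intrinsic reward over language descriptions of states.

    Illustrative sketch (not the paper's algorithm): states with the same
    language annotation share a visitation count, so the exploration bonus
    shrinks as a description recurs.
    """

    def __init__(self, scale: float = 1.0):
        self.scale = scale
        self.counts: Counter = Counter()

    def bonus(self, description: str) -> float:
        # Increment the count for this description and return a bonus that
        # decays with the square root of the count, a common count-based
        # exploration schedule.
        self.counts[description] += 1
        return self.scale / self.counts[description] ** 0.5


explorer = LanguageNoveltyBonus()
first = explorer.bonus("you see a locked door")    # first visit: full bonus
second = explorer.bonus("you see a locked door")   # repeat: smaller bonus
```

Because many distinct low-level states (different agent positions, for instance) can share one description such as "you see a locked door", the bonus rewards reaching semantically new situations rather than merely new pixel configurations.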
Syllabus
- Intro
- Paper Overview: Language for exploration
- The MiniGrid & MiniHack environments
- Annotating states with language
- Baseline algorithm: AMIGo
- Adding language to AMIGo
- Baseline algorithm: NovelD and Random Network Distillation
- Adding language to NovelD
- Aren't we just using extra data?
- Investigating the experimental results
- Final comments
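Since the syllabus covers NovelD and Random Network Distillation (RND), a minimal sketch of the RND novelty signal may help: a frozen random "target" network produces features, a trainable "predictor" regresses toward them, and the predictor's error on a state serves as its novelty. The linear networks, learning rate, and dimensions below are illustrative assumptions, not the configuration used in the paper or video.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random Network Distillation (sketch): a fixed random target network and a
# trainable predictor. High prediction error on a state => the state is novel.
W_target = rng.normal(size=(8, 4))   # frozen random feature network (linear)
W_pred = np.zeros((8, 4))            # learned predictor (linear)


def novelty(state: np.ndarray) -> float:
    """Squared error between predictor and frozen target features."""
    return float(np.sum((state @ W_pred - state @ W_target) ** 2))


def train_predictor(state: np.ndarray, lr: float = 0.01, steps: int = 200) -> None:
    """Regress the predictor toward the target's output on a visited state."""
    global W_pred
    for _ in range(steps):
        err = state @ W_pred - state @ W_target   # shape (4,)
        W_pred -= lr * np.outer(state, err)       # gradient step on squared error


s = rng.normal(size=8)
before = novelty(s)   # high: the predictor has never seen this state
train_predictor(s)
after = novelty(s)    # lower: repeated visits drive the error down
```

NovelD builds on this signal by rewarding the agent only when novelty increases across a transition (roughly `max(novelty(s') - alpha * novelty(s), 0)`, gated to first visits), which pushes the agent toward the boundary of explored territory; the language-based variant discussed in the video applies the same idea to language descriptions of states.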
Taught by
Yannic Kilcher